I0215 12:56:10.246750 8 e2e.go:243] Starting e2e run "eb19dab3-29cc-43ae-8042-7ddfc54b072f" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581771369 - Will randomize all specs
Will run 215 of 4412 specs

Feb 15 12:56:10.519: INFO: >>> kubeConfig: /root/.kube/config
Feb 15 12:56:10.544: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 15 12:56:10.592: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 15 12:56:10.635: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 15 12:56:10.635: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 15 12:56:10.635: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 15 12:56:10.646: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 15 12:56:10.646: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 15 12:56:10.646: INFO: e2e test version: v1.15.7
Feb 15 12:56:10.647: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 12:56:10.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Feb 15 12:56:10.741: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-debb15b4-421b-46bd-b0be-f7e30c570006
STEP: Creating a pod to test consume configMaps
Feb 15 12:56:10.762: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-34409dc8-f054-4157-954b-e36254945a1e" in namespace "projected-9590" to be "success or failure"
Feb 15 12:56:10.781: INFO: Pod "pod-projected-configmaps-34409dc8-f054-4157-954b-e36254945a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.279424ms
Feb 15 12:56:14.588: INFO: Pod "pod-projected-configmaps-34409dc8-f054-4157-954b-e36254945a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.825976792s
Feb 15 12:56:16.605: INFO: Pod "pod-projected-configmaps-34409dc8-f054-4157-954b-e36254945a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.842748551s
Feb 15 12:56:18.627: INFO: Pod "pod-projected-configmaps-34409dc8-f054-4157-954b-e36254945a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.865317608s
Feb 15 12:56:20.644: INFO: Pod "pod-projected-configmaps-34409dc8-f054-4157-954b-e36254945a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.8824477s
Feb 15 12:56:22.655: INFO: Pod "pod-projected-configmaps-34409dc8-f054-4157-954b-e36254945a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.892848878s
Feb 15 12:56:24.673: INFO: Pod "pod-projected-configmaps-34409dc8-f054-4157-954b-e36254945a1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.911302469s
STEP: Saw pod success
Feb 15 12:56:24.673: INFO: Pod "pod-projected-configmaps-34409dc8-f054-4157-954b-e36254945a1e" satisfied condition "success or failure"
Feb 15 12:56:24.676: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-34409dc8-f054-4157-954b-e36254945a1e container projected-configmap-volume-test:
STEP: delete the pod
Feb 15 12:56:24.754: INFO: Waiting for pod pod-projected-configmaps-34409dc8-f054-4157-954b-e36254945a1e to disappear
Feb 15 12:56:24.764: INFO: Pod pod-projected-configmaps-34409dc8-f054-4157-954b-e36254945a1e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 12:56:24.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9590" for this suite.
Feb 15 12:56:30.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:56:30.952: INFO: namespace projected-9590 deletion completed in 6.182062466s

• [SLOW TEST:20.305 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
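The spec above exercises a projected volume with a configMap source, mapping a single key to a custom path with an explicit per-item mode. For reference, a minimal sketch of the kind of pod involved (the configMap name, key, path, and container image below are illustrative assumptions, not values recorded in this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # assumed; the e2e suite uses its own mounttest image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # assumed configMap name
          items:
          - key: data-2              # map this key...
            path: path/to/data-2     # ...to this relative path inside the volume
            mode: 0400               # the per-item "Item mode" the spec verifies
```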
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 12:56:30.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-6150
I0215 12:56:31.116107 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6150, replica count: 1
I0215 12:56:32.167232 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0215 12:56:33.167715 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0215 12:56:34.168151 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0215 12:56:35.168741 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0215 12:56:36.169348 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0215 12:56:37.169744 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0215 12:56:38.170150 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0215 12:56:39.170541 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 15 12:56:39.356: INFO: Created: latency-svc-5gm5g
Feb 15 12:56:39.372: INFO: Got endpoints: latency-svc-5gm5g [101.69948ms]
Feb 15 12:56:39.435: INFO: Created: latency-svc-km5x2
Feb 15 12:56:39.557: INFO: Got endpoints: latency-svc-km5x2 [183.53753ms]
Feb 15 12:56:39.589: INFO: Created: latency-svc-wlt2q
Feb 15 12:56:39.589: INFO: Got endpoints: latency-svc-wlt2q [216.233767ms]
Feb 15 12:56:39.641: INFO: Created: latency-svc-p9htn
Feb 15 12:56:39.783: INFO: Got endpoints: latency-svc-p9htn [409.991704ms]
Feb 15 12:56:39.807: INFO: Created: latency-svc-cw8cf
Feb 15 12:56:39.865: INFO: Got endpoints: latency-svc-cw8cf [490.15153ms]
Feb 15 12:56:39.875: INFO: Created: latency-svc-6lttw
Feb 15 12:56:39.883: INFO: Got endpoints: latency-svc-6lttw [507.792143ms]
Feb 15 12:56:39.974: INFO: Created: latency-svc-t9jtm
Feb 15 12:56:39.978: INFO: Got endpoints: latency-svc-t9jtm [603.27564ms]
Feb 15 12:56:40.018: INFO: Created: latency-svc-pwwvf
Feb 15 12:56:40.032: INFO: Got endpoints: latency-svc-pwwvf [657.634287ms]
Feb 15 12:56:40.110: INFO: Created: latency-svc-scmzk
Feb 15 12:56:40.124: INFO: Got endpoints: latency-svc-scmzk [749.612781ms]
Feb 15 12:56:40.175: INFO: Created: latency-svc-xb4mc
Feb 15 12:56:40.182: INFO: Got endpoints: latency-svc-xb4mc [807.834221ms]
Feb 15 12:56:40.286: INFO: Created: latency-svc-gmbnw
Feb 15 12:56:40.287: INFO: Got endpoints: latency-svc-gmbnw [911.958276ms]
Feb 15 12:56:40.347: INFO: Created: latency-svc-tbt4r
Feb 15 12:56:40.358: INFO: Got endpoints: latency-svc-tbt4r [984.351376ms]
Feb 15 12:56:40.482: INFO: Created: latency-svc-v9459
Feb 15 12:56:40.521: INFO: Created: latency-svc-mcdb9
Feb 15 12:56:40.530: INFO: Got endpoints: latency-svc-v9459 [1.155140291s]
Feb 15 12:56:40.644: INFO: Got endpoints: latency-svc-mcdb9 [1.271017999s]
Feb 15 12:56:40.645: INFO: Created: latency-svc-5zt26
Feb 15 12:56:40.659: INFO: Got endpoints: latency-svc-5zt26 [1.28509043s]
Feb 15 12:56:40.833: INFO: Created: latency-svc-2k957
Feb 15 12:56:40.837: INFO: Got endpoints: latency-svc-2k957 [1.462571625s]
Feb 15 12:56:40.909: INFO: Created: latency-svc-5w7rj
Feb 15 12:56:40.987: INFO: Got endpoints: latency-svc-5w7rj [1.429939057s]
Feb 15 12:56:41.022: INFO: Created: latency-svc-lkbql
Feb 15 12:56:41.043: INFO: Got endpoints: latency-svc-lkbql [1.453699813s]
Feb 15 12:56:41.255: INFO: Created: latency-svc-tqtrr
Feb 15 12:56:41.266: INFO: Got endpoints: latency-svc-tqtrr [1.48268133s]
Feb 15 12:56:41.313: INFO: Created: latency-svc-mhm94
Feb 15 12:56:41.319: INFO: Got endpoints: latency-svc-mhm94 [1.453766176s]
Feb 15 12:56:41.401: INFO: Created: latency-svc-bqcpr
Feb 15 12:56:41.411: INFO: Got endpoints: latency-svc-bqcpr [1.528053613s]
Feb 15 12:56:41.461: INFO: Created: latency-svc-5jl94
Feb 15 12:56:41.461: INFO: Got endpoints: latency-svc-5jl94 [1.483052125s]
Feb 15 12:56:41.479: INFO: Created: latency-svc-cx95s
Feb 15 12:56:41.485: INFO: Got endpoints: latency-svc-cx95s [1.453073013s]
Feb 15 12:56:41.562: INFO: Created: latency-svc-8bfj8
Feb 15 12:56:41.564: INFO: Got endpoints: latency-svc-8bfj8 [1.439613636s]
Feb 15 12:56:41.617: INFO: Created: latency-svc-qzz4p
Feb 15 12:56:41.622: INFO: Got endpoints: latency-svc-qzz4p [1.439861189s]
Feb 15 12:56:41.641: INFO: Created: latency-svc-vrlc9
Feb 15 12:56:42.140: INFO: Got endpoints: latency-svc-vrlc9 [1.852863929s]
Feb 15 12:56:42.204: INFO: Created: latency-svc-bp5gf
Feb 15 12:56:42.876: INFO: Got endpoints: latency-svc-bp5gf [2.51743462s]
Feb 15 12:56:42.951: INFO: Created: latency-svc-gjj9t
Feb 15 12:56:42.970: INFO: Got endpoints: latency-svc-gjj9t [2.439468003s]
Feb 15 12:56:43.100: INFO: Created: latency-svc-gfr9l
Feb 15 12:56:43.138: INFO: Got endpoints: latency-svc-gfr9l [2.492416191s]
Feb 15 12:56:43.253: INFO: Created: latency-svc-6cblm
Feb 15 12:56:43.263: INFO: Got endpoints: latency-svc-6cblm [2.603438916s]
Feb 15 12:56:43.313: INFO: Created: latency-svc-nlcsh
Feb 15 12:56:43.316: INFO: Got endpoints: latency-svc-nlcsh [2.478685647s]
Feb 15 12:56:43.384: INFO: Created: latency-svc-zf75s
Feb 15 12:56:43.386: INFO: Got endpoints: latency-svc-zf75s [2.398192855s]
Feb 15 12:56:43.420: INFO: Created: latency-svc-5g57f
Feb 15 12:56:43.451: INFO: Got endpoints: latency-svc-5g57f [2.407281754s]
Feb 15 12:56:43.468: INFO: Created: latency-svc-qtblz
Feb 15 12:56:43.480: INFO: Created: latency-svc-ggqg9
Feb 15 12:56:43.480: INFO: Got endpoints: latency-svc-qtblz [2.214262345s]
Feb 15 12:56:43.533: INFO: Got endpoints: latency-svc-ggqg9 [2.214111599s]
Feb 15 12:56:43.554: INFO: Created: latency-svc-74xc9
Feb 15 12:56:43.563: INFO: Got endpoints: latency-svc-74xc9 [2.151323158s]
Feb 15 12:56:43.590: INFO: Created: latency-svc-q6h6s
Feb 15 12:56:43.632: INFO: Got endpoints: latency-svc-q6h6s [2.170784314s]
Feb 15 12:56:43.635: INFO: Created: latency-svc-6lr25
Feb 15 12:56:43.691: INFO: Got endpoints: latency-svc-6lr25 [2.20534302s]
Feb 15 12:56:43.712: INFO: Created: latency-svc-sgws8
Feb 15 12:56:43.713: INFO: Got endpoints: latency-svc-sgws8 [2.148596848s]
Feb 15 12:56:43.749: INFO: Created: latency-svc-s8l44
Feb 15 12:56:43.757: INFO: Got endpoints: latency-svc-s8l44 [2.134560208s]
Feb 15 12:56:43.802: INFO: Created: latency-svc-64dcr
Feb 15 12:56:43.901: INFO: Got endpoints: latency-svc-64dcr [1.761108806s]
Feb 15 12:56:43.918: INFO: Created: latency-svc-zl5jw
Feb 15 12:56:43.933: INFO: Got endpoints: latency-svc-zl5jw [1.056862944s]
Feb 15 12:56:43.984: INFO: Created: latency-svc-7xhn4
Feb 15 12:56:44.049: INFO: Got endpoints: latency-svc-7xhn4 [146.721736ms]
Feb 15 12:56:44.057: INFO: Created: latency-svc-ztvjq
Feb 15 12:56:44.081: INFO: Got endpoints: latency-svc-ztvjq [1.110732619s]
Feb 15 12:56:44.103: INFO: Created: latency-svc-pl6rt
Feb 15 12:56:44.117: INFO: Got endpoints: latency-svc-pl6rt [978.333318ms]
Feb 15 12:56:44.193: INFO: Created: latency-svc-rnvmp
Feb 15 12:56:44.204: INFO: Got endpoints: latency-svc-rnvmp [941.217184ms]
Feb 15 12:56:44.249: INFO: Created: latency-svc-nzp8j
Feb 15 12:56:44.250: INFO: Got endpoints: latency-svc-nzp8j [933.981133ms]
Feb 15 12:56:44.288: INFO: Created: latency-svc-8wxll
Feb 15 12:56:44.340: INFO: Got endpoints: latency-svc-8wxll [954.569924ms]
Feb 15 12:56:44.375: INFO: Created: latency-svc-pnkgq
Feb 15 12:56:44.434: INFO: Got endpoints: latency-svc-pnkgq [982.652817ms]
Feb 15 12:56:44.459: INFO: Created: latency-svc-n8c2h
Feb 15 12:56:44.555: INFO: Got endpoints: latency-svc-n8c2h [1.075037596s]
Feb 15 12:56:44.566: INFO: Created: latency-svc-2svv4
Feb 15 12:56:44.603: INFO: Got endpoints: latency-svc-2svv4 [1.06947394s]
Feb 15 12:56:44.666: INFO: Created: latency-svc-6p2bh
Feb 15 12:56:44.692: INFO: Got endpoints: latency-svc-6p2bh [1.129200112s]
Feb 15 12:56:44.858: INFO: Created: latency-svc-pc9cc
Feb 15 12:56:44.873: INFO: Got endpoints: latency-svc-pc9cc [1.241284844s]
Feb 15 12:56:44.909: INFO: Created: latency-svc-bqppp
Feb 15 12:56:44.999: INFO: Got endpoints: latency-svc-bqppp [1.307663425s]
Feb 15 12:56:45.041: INFO: Created: latency-svc-mzvls
Feb 15 12:56:45.049: INFO: Got endpoints: latency-svc-mzvls [1.336296977s]
Feb 15 12:56:45.197: INFO: Created: latency-svc-5xpt4
Feb 15 12:56:45.220: INFO: Got endpoints: latency-svc-5xpt4 [1.462276546s]
Feb 15 12:56:45.471: INFO: Created: latency-svc-ps4mk
Feb 15 12:56:45.489: INFO: Got endpoints: latency-svc-ps4mk [1.554901757s]
Feb 15 12:56:45.550: INFO: Created: latency-svc-kghcv
Feb 15 12:56:45.631: INFO: Got endpoints: latency-svc-kghcv [1.581536312s]
Feb 15 12:56:45.634: INFO: Created: latency-svc-dc7fm
Feb 15 12:56:45.643: INFO: Got endpoints: latency-svc-dc7fm [1.562108468s]
Feb 15 12:56:45.876: INFO: Created: latency-svc-5bwxx
Feb 15 12:56:45.891: INFO: Got endpoints: latency-svc-5bwxx [1.774493604s]
Feb 15 12:56:46.102: INFO: Created: latency-svc-79kkq
Feb 15 12:56:46.129: INFO: Got endpoints: latency-svc-79kkq [1.924423843s]
Feb 15 12:56:46.196: INFO: Created: latency-svc-md8vq
Feb 15 12:56:46.308: INFO: Got endpoints: latency-svc-md8vq [2.058689686s]
Feb 15 12:56:46.337: INFO: Created: latency-svc-hzlt2
Feb 15 12:56:46.348: INFO: Got endpoints: latency-svc-hzlt2 [2.006554083s]
Feb 15 12:56:46.399: INFO: Created: latency-svc-7xn9n
Feb 15 12:56:46.519: INFO: Got endpoints: latency-svc-7xn9n [2.084270947s]
Feb 15 12:56:46.598: INFO: Created: latency-svc-8gqwn
Feb 15 12:56:46.607: INFO: Got endpoints: latency-svc-8gqwn [2.051557017s]
Feb 15 12:56:46.767: INFO: Created: latency-svc-nt5nx
Feb 15 12:56:46.771: INFO: Got endpoints: latency-svc-nt5nx [2.167179798s]
Feb 15 12:56:46.857: INFO: Created: latency-svc-5tmd4
Feb 15 12:56:46.957: INFO: Got endpoints: latency-svc-5tmd4 [2.264382746s]
Feb 15 12:56:46.961: INFO: Created: latency-svc-flsl7
Feb 15 12:56:46.968: INFO: Got endpoints: latency-svc-flsl7 [2.09393674s]
Feb 15 12:56:47.154: INFO: Created: latency-svc-6dsvp
Feb 15 12:56:47.159: INFO: Got endpoints: latency-svc-6dsvp [2.159964378s]
Feb 15 12:56:47.319: INFO: Created: latency-svc-vs5qj
Feb 15 12:56:47.324: INFO: Got endpoints: latency-svc-vs5qj [2.274825461s]
Feb 15 12:56:47.403: INFO: Created: latency-svc-2hkcj
Feb 15 12:56:47.497: INFO: Got endpoints: latency-svc-2hkcj [2.277249339s]
Feb 15 12:56:47.512: INFO: Created: latency-svc-cpdrb
Feb 15 12:56:47.521: INFO: Got endpoints: latency-svc-cpdrb [2.032064243s]
Feb 15 12:56:47.585: INFO: Created: latency-svc-trgz2
Feb 15 12:56:47.715: INFO: Got endpoints: latency-svc-trgz2 [2.08388073s]
Feb 15 12:56:47.746: INFO: Created: latency-svc-pp7xc
Feb 15 12:56:47.759: INFO: Got endpoints: latency-svc-pp7xc [2.115124466s]
Feb 15 12:56:47.805: INFO: Created: latency-svc-s8vt6
Feb 15 12:56:47.913: INFO: Got endpoints: latency-svc-s8vt6 [2.021733153s]
Feb 15 12:56:47.952: INFO: Created: latency-svc-fstkr
Feb 15 12:56:47.964: INFO: Got endpoints: latency-svc-fstkr [1.83503112s]
Feb 15 12:56:48.015: INFO: Created: latency-svc-gvz22
Feb 15 12:56:48.120: INFO: Got endpoints: latency-svc-gvz22 [1.81111783s]
Feb 15 12:56:48.160: INFO: Created: latency-svc-4xtxw
Feb 15 12:56:48.177: INFO: Got endpoints: latency-svc-4xtxw [1.829306098s]
Feb 15 12:56:48.359: INFO: Created: latency-svc-j5mc8
Feb 15 12:56:48.424: INFO: Got endpoints: latency-svc-j5mc8 [1.905165072s]
Feb 15 12:56:48.514: INFO: Created: latency-svc-w4nrd
Feb 15 12:56:48.528: INFO: Got endpoints: latency-svc-w4nrd [1.920945459s]
Feb 15 12:56:48.572: INFO: Created: latency-svc-jkdv4
Feb 15 12:56:48.589: INFO: Got endpoints: latency-svc-jkdv4 [1.817380573s]
Feb 15 12:56:48.620: INFO: Created: latency-svc-6pwpx
Feb 15 12:56:48.774: INFO: Created: latency-svc-5ck45
Feb 15 12:56:48.793: INFO: Got endpoints: latency-svc-5ck45 [1.825030716s]
Feb 15 12:56:48.793: INFO: Got endpoints: latency-svc-6pwpx [1.835098375s]
Feb 15 12:56:48.964: INFO: Created: latency-svc-cp75x
Feb 15 12:56:48.985: INFO: Got endpoints: latency-svc-cp75x [1.82614517s]
Feb 15 12:56:49.258: INFO: Created: latency-svc-qmbtx
Feb 15 12:56:49.270: INFO: Got endpoints: latency-svc-qmbtx [1.945538685s]
Feb 15 12:56:49.382: INFO: Created: latency-svc-hv2qd
Feb 15 12:56:49.390: INFO: Got endpoints: latency-svc-hv2qd [1.892017347s]
Feb 15 12:56:49.445: INFO: Created: latency-svc-g4tnq
Feb 15 12:56:49.459: INFO: Got endpoints: latency-svc-g4tnq [1.937363873s]
Feb 15 12:56:49.600: INFO: Created: latency-svc-h7p2l
Feb 15 12:56:49.629: INFO: Got endpoints: latency-svc-h7p2l [1.912962591s]
Feb 15 12:56:49.700: INFO: Created: latency-svc-hglsh
Feb 15 12:56:51.117: INFO: Got endpoints: latency-svc-hglsh [3.358426898s]
Feb 15 12:56:51.523: INFO: Created: latency-svc-mqcpt
Feb 15 12:56:51.545: INFO: Got endpoints: latency-svc-mqcpt [3.63110599s]
Feb 15 12:56:51.627: INFO: Created: latency-svc-r4zmn
Feb 15 12:56:51.817: INFO: Got endpoints: latency-svc-r4zmn [3.852562182s]
Feb 15 12:56:51.882: INFO: Created: latency-svc-s62xb
Feb 15 12:56:51.912: INFO: Got endpoints: latency-svc-s62xb [3.792101081s]
Feb 15 12:56:52.152: INFO: Created: latency-svc-szbp8
Feb 15 12:56:52.179: INFO: Got endpoints: latency-svc-szbp8 [4.001749852s]
Feb 15 12:56:52.330: INFO: Created: latency-svc-jkn8g
Feb 15 12:56:52.352: INFO: Got endpoints: latency-svc-jkn8g [3.927405281s]
Feb 15 12:56:52.416: INFO: Created: latency-svc-2bpcv
Feb 15 12:56:53.133: INFO: Got endpoints: latency-svc-2bpcv [4.604013359s]
Feb 15 12:56:53.725: INFO: Created: latency-svc-qj5sc
Feb 15 12:56:53.745: INFO: Got endpoints: latency-svc-qj5sc [5.156136156s]
Feb 15 12:56:53.803: INFO: Created: latency-svc-fz2zp
Feb 15 12:56:53.934: INFO: Got endpoints: latency-svc-fz2zp [5.140712779s]
Feb 15 12:56:53.965: INFO: Created: latency-svc-2fhp6
Feb 15 12:56:53.965: INFO: Got endpoints: latency-svc-2fhp6 [5.17226799s]
Feb 15 12:56:54.006: INFO: Created: latency-svc-pz7pk
Feb 15 12:56:54.028: INFO: Got endpoints: latency-svc-pz7pk [5.042074772s]
Feb 15 12:56:54.389: INFO: Created: latency-svc-vv5pw
Feb 15 12:56:54.400: INFO: Got endpoints: latency-svc-vv5pw [5.130176216s]
Feb 15 12:56:54.588: INFO: Created: latency-svc-2mrsd
Feb 15 12:56:54.661: INFO: Got endpoints: latency-svc-2mrsd [5.270398437s]
Feb 15 12:56:54.672: INFO: Created: latency-svc-qh2g7
Feb 15 12:56:54.908: INFO: Got endpoints: latency-svc-qh2g7 [5.449372805s]
Feb 15 12:56:54.917: INFO: Created: latency-svc-4n95g
Feb 15 12:56:54.952: INFO: Got endpoints: latency-svc-4n95g [5.323626875s]
Feb 15 12:56:55.138: INFO: Created: latency-svc-kkv44
Feb 15 12:56:55.153: INFO: Got endpoints: latency-svc-kkv44 [4.035261039s]
Feb 15 12:56:55.199: INFO: Created: latency-svc-p2796
Feb 15 12:56:55.211: INFO: Got endpoints: latency-svc-p2796 [3.665404125s]
Feb 15 12:56:55.410: INFO: Created: latency-svc-4wkx4
Feb 15 12:56:55.427: INFO: Got endpoints: latency-svc-4wkx4 [3.609146092s]
Feb 15 12:56:55.496: INFO: Created: latency-svc-tgbrm
Feb 15 12:56:55.663: INFO: Got endpoints: latency-svc-tgbrm [3.750071377s]
Feb 15 12:56:55.675: INFO: Created: latency-svc-fx97g
Feb 15 12:56:55.700: INFO: Got endpoints: latency-svc-fx97g [3.52055484s]
Feb 15 12:56:55.914: INFO: Created: latency-svc-kns6x
Feb 15 12:56:55.989: INFO: Created: latency-svc-ssxt5
Feb 15 12:56:55.989: INFO: Got endpoints: latency-svc-kns6x [3.636327043s]
Feb 15 12:56:56.212: INFO: Got endpoints: latency-svc-ssxt5 [3.079058083s]
Feb 15 12:56:56.241: INFO: Created: latency-svc-wm8td
Feb 15 12:56:56.274: INFO: Got endpoints: latency-svc-wm8td [2.528529048s]
Feb 15 12:56:56.483: INFO: Created: latency-svc-nbz8d
Feb 15 12:56:56.501: INFO: Got endpoints: latency-svc-nbz8d [2.566189223s]
Feb 15 12:56:56.688: INFO: Created: latency-svc-s67t9
Feb 15 12:56:56.706: INFO: Got endpoints: latency-svc-s67t9 [2.740447546s]
Feb 15 12:56:56.784: INFO: Created: latency-svc-trjr9
Feb 15 12:56:56.989: INFO: Got endpoints: latency-svc-trjr9 [2.960600128s]
Feb 15 12:56:57.009: INFO: Created: latency-svc-ffpcq
Feb 15 12:56:57.268: INFO: Got endpoints: latency-svc-ffpcq [2.867520753s]
Feb 15 12:56:57.275: INFO: Created: latency-svc-dkszk
Feb 15 12:56:57.281: INFO: Got endpoints: latency-svc-dkszk [2.619577481s]
Feb 15 12:56:57.359: INFO: Created: latency-svc-xlhlc
Feb 15 12:56:57.610: INFO: Got endpoints: latency-svc-xlhlc [2.701571725s]
Feb 15 12:56:57.628: INFO: Created: latency-svc-r54lm
Feb 15 12:56:57.659: INFO: Got endpoints: latency-svc-r54lm [2.706041272s]
Feb 15 12:56:57.912: INFO: Created: latency-svc-ksxfj
Feb 15 12:56:57.922: INFO: Got endpoints: latency-svc-ksxfj [2.768296956s]
Feb 15 12:56:58.132: INFO: Created: latency-svc-jwwkc
Feb 15 12:56:58.154: INFO: Got endpoints: latency-svc-jwwkc [2.943564696s]
Feb 15 12:56:58.210: INFO: Created: latency-svc-kk9q6
Feb 15 12:56:58.346: INFO: Got endpoints: latency-svc-kk9q6 [2.91821031s]
Feb 15 12:56:58.353: INFO: Created: latency-svc-87qf4
Feb 15 12:56:58.361: INFO: Got endpoints: latency-svc-87qf4 [2.697415308s]
Feb 15 12:56:58.421: INFO: Created: latency-svc-2zm4f
Feb 15 12:56:58.427: INFO: Got endpoints: latency-svc-2zm4f [2.726390975s]
Feb 15 12:56:58.692: INFO: Created: latency-svc-g8jp2
Feb 15 12:56:58.700: INFO: Got endpoints: latency-svc-g8jp2 [2.710974118s]
Feb 15 12:56:58.913: INFO: Created: latency-svc-57fts
Feb 15 12:56:58.928: INFO: Got endpoints: latency-svc-57fts [2.714988741s]
Feb 15 12:56:59.115: INFO: Created: latency-svc-hwfkf
Feb 15 12:56:59.161: INFO: Got endpoints: latency-svc-hwfkf [2.886537641s]
Feb 15 12:56:59.167: INFO: Created: latency-svc-zkt4t
Feb 15 12:56:59.196: INFO: Got endpoints: latency-svc-zkt4t [2.694605095s]
Feb 15 12:56:59.420: INFO: Created: latency-svc-xlfpj
Feb 15 12:56:59.470: INFO: Got endpoints: latency-svc-xlfpj [2.763431895s]
Feb 15 12:56:59.503: INFO: Created: latency-svc-plk57
Feb 15 12:56:59.665: INFO: Got endpoints: latency-svc-plk57 [2.675284884s]
Feb 15 12:56:59.700: INFO: Created: latency-svc-5vb9b
Feb 15 12:56:59.753: INFO: Got endpoints: latency-svc-5vb9b [2.484541035s]
Feb 15 12:56:59.890: INFO: Created: latency-svc-cqmm6
Feb 15 12:56:59.929: INFO: Got endpoints: latency-svc-cqmm6 [2.6485018s]
Feb 15 12:57:00.148: INFO: Created: latency-svc-bft9h
Feb 15 12:57:00.167: INFO: Got endpoints: latency-svc-bft9h [2.556625582s]
Feb 15 12:57:00.236: INFO: Created: latency-svc-trlb2
Feb 15 12:57:00.384: INFO: Got endpoints: latency-svc-trlb2 [2.724879761s]
Feb 15 12:57:00.406: INFO: Created: latency-svc-9trtx
Feb 15 12:57:00.508: INFO: Got endpoints: latency-svc-9trtx [2.586121693s]
Feb 15 12:57:00.563: INFO: Created: latency-svc-qmmgr
Feb 15 12:57:00.572: INFO: Got endpoints: latency-svc-qmmgr [2.417110826s]
Feb 15 12:57:00.762: INFO: Created: latency-svc-7b79b
Feb 15 12:57:00.775: INFO: Got endpoints: latency-svc-7b79b [2.429313893s]
Feb 15 12:57:00.855: INFO: Created: latency-svc-wt5nn
Feb 15 12:57:00.944: INFO: Got endpoints: latency-svc-wt5nn [2.582693689s]
Feb 15 12:57:00.999: INFO: Created: latency-svc-xvqbh
Feb 15 12:57:01.253: INFO: Got endpoints: latency-svc-xvqbh [2.825296162s]
Feb 15 12:57:01.259: INFO: Created: latency-svc-przqv
Feb 15 12:57:01.267: INFO: Got endpoints: latency-svc-przqv [2.566411653s]
Feb 15 12:57:01.502: INFO: Created: latency-svc-tnmch
Feb 15 12:57:01.517: INFO: Got endpoints: latency-svc-tnmch [2.588646258s]
Feb 15 12:57:01.599: INFO: Created: latency-svc-zpn9w
Feb 15 12:57:01.782: INFO: Got endpoints: latency-svc-zpn9w [2.619941613s]
Feb 15 12:57:01.822: INFO: Created: latency-svc-jngtx
Feb 15 12:57:02.075: INFO: Created: latency-svc-kl4mz
Feb 15 12:57:02.085: INFO: Got endpoints: latency-svc-jngtx [2.889423576s]
Feb 15 12:57:02.093: INFO: Got endpoints: latency-svc-kl4mz [2.622570796s]
Feb 15 12:57:02.154: INFO: Created: latency-svc-stlwr
Feb 15 12:57:02.341: INFO: Got endpoints: latency-svc-stlwr [2.676022309s]
Feb 15 12:57:02.437: INFO: Created: latency-svc-t4hjn
Feb 15 12:57:02.552: INFO: Got endpoints: latency-svc-t4hjn [2.798730535s]
Feb 15 12:57:02.655: INFO: Created: latency-svc-czlqx
Feb 15 12:57:02.746: INFO: Got endpoints: latency-svc-czlqx [2.816555932s]
Feb 15 12:57:02.788: INFO: Created: latency-svc-4bxrc
Feb 15 12:57:02.793: INFO: Got endpoints: latency-svc-4bxrc [2.625286267s]
Feb 15 12:57:02.906: INFO: Created: latency-svc-lj7z9
Feb 15 12:57:02.915: INFO: Got endpoints: latency-svc-lj7z9 [2.530877064s]
Feb 15 12:57:02.999: INFO: Created: latency-svc-2dzj9
Feb 15 12:57:02.999: INFO: Got endpoints: latency-svc-2dzj9 [2.490082669s]
Feb 15 12:57:03.148: INFO: Created: latency-svc-wtqt9
Feb 15 12:57:03.162: INFO: Got endpoints: latency-svc-wtqt9 [2.589684983s]
Feb 15 12:57:03.226: INFO: Created: latency-svc-ch6hw
Feb 15 12:57:03.234: INFO: Got endpoints: latency-svc-ch6hw [2.458425307s]
Feb 15 12:57:03.359: INFO: Created: latency-svc-5fq9w
Feb 15 12:57:03.446: INFO: Got endpoints: latency-svc-5fq9w [2.501591927s]
Feb 15 12:57:03.456: INFO: Created: latency-svc-ftkvc
Feb 15 12:57:03.577: INFO: Got endpoints: latency-svc-ftkvc [2.323359037s]
Feb 15 12:57:03.607: INFO: Created: latency-svc-bbmfh
Feb 15 12:57:03.608: INFO: Got endpoints: latency-svc-bbmfh [2.339804975s]
Feb 15 12:57:03.669: INFO: Created: latency-svc-wndtf
Feb 15 12:57:03.669: INFO: Got endpoints: latency-svc-wndtf [2.151933319s]
Feb 15 12:57:03.862: INFO: Created: latency-svc-vrpqv
Feb 15 12:57:03.896: INFO: Got endpoints: latency-svc-vrpqv [2.113647766s]
Feb 15 12:57:03.902: INFO: Created: latency-svc-lrvrz
Feb 15 12:57:03.916: INFO: Got endpoints: latency-svc-lrvrz [1.830335369s]
Feb 15 12:57:04.079: INFO: Created: latency-svc-4qpkk
Feb 15 12:57:04.094: INFO: Got endpoints: latency-svc-4qpkk [2.001608143s]
Feb 15 12:57:04.229: INFO: Created: latency-svc-vd5t6
Feb 15 12:57:04.240: INFO: Got endpoints: latency-svc-vd5t6 [1.898587674s]
Feb 15 12:57:04.307: INFO: Created: latency-svc-cq2nh
Feb 15 12:57:04.312: INFO: Got endpoints: latency-svc-cq2nh [1.758643032s]
Feb 15 12:57:04.500: INFO: Created: latency-svc-rpgjn
Feb 15 12:57:04.525: INFO: Got endpoints: latency-svc-rpgjn [1.777928324s]
Feb 15 12:57:04.664: INFO: Created: latency-svc-8txs7
Feb 15 12:57:04.736: INFO: Got endpoints: latency-svc-8txs7 [1.942438332s]
Feb 15 12:57:04.742: INFO: Created: latency-svc-d9jvv
Feb 15 12:57:04.844: INFO: Got endpoints: latency-svc-d9jvv [1.928025117s]
Feb 15 12:57:04.890: INFO: Created: latency-svc-hrj6v
Feb 15 12:57:04.928: INFO: Got endpoints: latency-svc-hrj6v [1.92927211s]
Feb 15 12:57:05.052: INFO: Created: latency-svc-frkgh
Feb 15 12:57:05.065: INFO: Got endpoints: latency-svc-frkgh [1.902828033s]
Feb 15 12:57:05.149: INFO: Created: latency-svc-6ch9z
Feb 15 12:57:05.260: INFO: Got endpoints: latency-svc-6ch9z [2.025470924s]
Feb 15 12:57:05.421: INFO: Created: latency-svc-qn47p
Feb 15 12:57:05.462: INFO: Created: latency-svc-qgc8l
Feb 15 12:57:05.462: INFO: Got endpoints: latency-svc-qn47p [2.015333781s]
Feb 15 12:57:05.479: INFO: Got endpoints: latency-svc-qgc8l [1.901658993s]
Feb 15 12:57:05.525: INFO: Created: latency-svc-p9xmv
Feb 15 12:57:05.597: INFO: Got endpoints: latency-svc-p9xmv [1.988963457s]
Feb 15 12:57:05.626: INFO: Created: latency-svc-f4rf6
Feb 15 12:57:05.636: INFO: Got endpoints: latency-svc-f4rf6 [1.966542637s]
Feb 15 12:57:05.676: INFO: Created: latency-svc-djmcj
Feb 15 12:57:05.691: INFO: Got endpoints: latency-svc-djmcj [1.795236033s]
Feb 15 12:57:05.763: INFO: Created: latency-svc-bqkhp
Feb 15 12:57:05.815: INFO: Got endpoints: latency-svc-bqkhp [1.898257936s]
Feb 15 12:57:05.818: INFO: Created: latency-svc-4z5t2
Feb 15 12:57:05.859: INFO: Created: latency-svc-kngrq
Feb 15 12:57:05.860: INFO: Got endpoints: latency-svc-4z5t2 [1.765667972s]
Feb 15 12:57:05.932: INFO: Got endpoints: latency-svc-kngrq [1.691530843s]
Feb 15 12:57:05.939: INFO: Created: latency-svc-mgr4d
Feb 15 12:57:05.944: INFO: Got endpoints: latency-svc-mgr4d [1.632241217s]
Feb 15 12:57:06.010: INFO: Created: latency-svc-7zn5n
Feb 15 12:57:06.019: INFO: Got endpoints: latency-svc-7zn5n [1.493884631s]
Feb 15 12:57:06.121: INFO: Created: latency-svc-l9rkv
Feb 15 12:57:06.177: INFO: Created: latency-svc-xzbsr
Feb 15 12:57:06.177: INFO: Got endpoints: latency-svc-l9rkv [1.441058296s]
Feb 15 12:57:06.186: INFO: Got endpoints: latency-svc-xzbsr [1.342214821s]
Feb 15 12:57:06.340: INFO: Created: latency-svc-7lqtw
Feb 15 12:57:06.343: INFO: Got endpoints: latency-svc-7lqtw [1.414865321s]
Feb 15 12:57:06.401: INFO: Created: latency-svc-q7bf5
Feb 15 12:57:06.495: INFO: Created: latency-svc-hr8gm
Feb 15 12:57:06.495: INFO: Got endpoints: latency-svc-q7bf5 [1.429606264s]
Feb 15 12:57:06.510: INFO: Got endpoints: latency-svc-hr8gm [1.250324553s]
Feb 15 12:57:06.527: INFO: Created: latency-svc-r24md
Feb 15 12:57:06.547: INFO: Got endpoints: latency-svc-r24md [1.084401556s]
Feb 15 12:57:06.598: INFO: Created: latency-svc-2kd4v
Feb 15 12:57:06.651: INFO: Got endpoints: latency-svc-2kd4v [1.172023149s]
Feb 15 12:57:06.699: INFO: Created: latency-svc-7sf8l
Feb 15 12:57:06.708: INFO: Got endpoints: latency-svc-7sf8l [1.110779491s]
Feb 15 12:57:06.873: INFO: Created: latency-svc-zfxgs
Feb 15 12:57:06.891: INFO: Got endpoints: latency-svc-zfxgs [1.254640526s]
Feb 15 12:57:07.191: INFO: Created: latency-svc-bp26k
Feb 15 12:57:07.199: INFO: Got endpoints: latency-svc-bp26k [1.507137358s]
Feb 15 12:57:07.287: INFO: Created: latency-svc-gcd8x
Feb 15 12:57:07.456: INFO: Got endpoints: latency-svc-gcd8x [1.64064843s]
Feb 15 12:57:07.457: INFO: Created: latency-svc-t64qw
Feb 15 12:57:07.471: INFO: Got endpoints: latency-svc-t64qw [1.610676953s]
Feb 15 12:57:07.689: INFO: Created: latency-svc-6sdmk
Feb 15 12:57:07.703: INFO: Got endpoints: latency-svc-6sdmk [1.770822939s]
Feb 15 12:57:07.787: INFO: Created: latency-svc-d9tcp
Feb 15 12:57:07.988: INFO: Created: latency-svc-gwkmw
Feb 15 12:57:07.988: INFO: Got endpoints: latency-svc-d9tcp [2.044310508s]
Feb 15 12:57:07.992: INFO: Got endpoints: latency-svc-gwkmw [1.972419483s]
Feb 15 12:57:08.045: INFO: Created: latency-svc-chpm8
Feb 15 12:57:08.174: INFO: Got endpoints: latency-svc-chpm8 [1.996890232s]
Feb 15 12:57:08.236: INFO: Created: latency-svc-v97ht
Feb 15 12:57:08.259: INFO: Got endpoints: latency-svc-v97ht [2.072423011s]
Feb 15 12:57:08.345: INFO: Created: latency-svc-f7rcn
Feb 15 12:57:08.394: INFO: Got endpoints: latency-svc-f7rcn [2.050242764s]
Feb 15 12:57:08.399: INFO: Created: latency-svc-rkq82
Feb 15 12:57:08.501: INFO: Got endpoints: latency-svc-rkq82 [2.005959544s]
Feb 15 12:57:08.534: INFO: Created: latency-svc-jf9nd
Feb 15 12:57:08.548: INFO: Got endpoints: latency-svc-jf9nd [2.037694887s]
Feb 15 12:57:08.580: INFO: Created: latency-svc-fl9q4
Feb 15 12:57:08.595: INFO: Got endpoints: latency-svc-fl9q4 [2.047598599s]
Feb 15 12:57:08.757: INFO: Created: latency-svc-nwpfv
Feb 15 12:57:08.783: INFO: Got endpoints: latency-svc-nwpfv [2.131409405s]
Feb 15 12:57:08.911: INFO: Created: latency-svc-gv4rm
Feb 15 12:57:08.953: INFO: Created: latency-svc-m7tpx
Feb 15 12:57:08.960: INFO: Got endpoints: latency-svc-gv4rm [2.251871226s]
Feb 15 12:57:09.150: INFO: Got endpoints: latency-svc-m7tpx [2.258149031s]
Feb 15 12:57:09.167: INFO: Created: latency-svc-42g9n
Feb 15 12:57:09.172: INFO: Got endpoints: latency-svc-42g9n [1.972734756s]
Feb 15 12:57:09.172: INFO: Latencies: [146.721736ms 183.53753ms 216.233767ms 409.991704ms 490.15153ms 507.792143ms 603.27564ms 657.634287ms 749.612781ms 807.834221ms 911.958276ms 933.981133ms 941.217184ms 954.569924ms 978.333318ms 982.652817ms 984.351376ms 1.056862944s 1.06947394s 1.075037596s 1.084401556s 1.110732619s 1.110779491s 1.129200112s 1.155140291s 1.172023149s 1.241284844s 1.250324553s 1.254640526s 1.271017999s 1.28509043s 1.307663425s 1.336296977s 1.342214821s 1.414865321s 1.429606264s 1.429939057s 1.439613636s 1.439861189s 1.441058296s 1.453073013s 1.453699813s 1.453766176s 1.462276546s 1.462571625s 1.48268133s 1.483052125s 1.493884631s 1.507137358s 1.528053613s 1.554901757s 1.562108468s 1.581536312s 1.610676953s 1.632241217s 1.64064843s 1.691530843s 1.758643032s 1.761108806s 1.765667972s 1.770822939s 1.774493604s 1.777928324s 1.795236033s 1.81111783s 1.817380573s 1.825030716s 1.82614517s 1.829306098s 1.830335369s 1.83503112s 1.835098375s 1.852863929s 1.892017347s 1.898257936s 1.898587674s 1.901658993s 1.902828033s 1.905165072s 1.912962591s 1.920945459s 1.924423843s 1.928025117s 1.92927211s 1.937363873s 1.942438332s 1.945538685s 1.966542637s 1.972419483s 1.972734756s 1.988963457s 1.996890232s 2.001608143s 2.005959544s 2.006554083s 2.015333781s 2.021733153s 2.025470924s 2.032064243s 2.037694887s 2.044310508s 2.047598599s 2.050242764s 2.051557017s 2.058689686s 2.072423011s 2.08388073s 2.084270947s 2.09393674s 2.113647766s 2.115124466s 2.131409405s 2.134560208s 2.148596848s 2.151323158s 2.151933319s 2.159964378s 2.167179798s 2.170784314s 2.20534302s 2.214111599s 2.214262345s 2.251871226s 2.258149031s 2.264382746s 2.274825461s 2.277249339s 2.323359037s 2.339804975s 2.398192855s 2.407281754s 2.417110826s 2.429313893s 2.439468003s 2.458425307s 2.478685647s 2.484541035s 2.490082669s 2.492416191s 2.501591927s 2.51743462s 2.528529048s 2.530877064s 2.556625582s 2.566189223s 2.566411653s 2.582693689s 2.586121693s 2.588646258s 2.589684983s 2.603438916s 2.619577481s 2.619941613s 2.622570796s 2.625286267s 2.6485018s 2.675284884s 2.676022309s 2.694605095s 2.697415308s 2.701571725s 2.706041272s 2.710974118s 2.714988741s 2.724879761s 2.726390975s 2.740447546s 2.763431895s 2.768296956s 2.798730535s 2.816555932s 2.825296162s 2.867520753s 2.886537641s 2.889423576s 2.91821031s 2.943564696s 2.960600128s 3.079058083s 3.358426898s 3.52055484s 3.609146092s 3.63110599s 3.636327043s 3.665404125s 3.750071377s 3.792101081s 3.852562182s 3.927405281s 4.001749852s 4.035261039s 4.604013359s 5.042074772s 5.130176216s 5.140712779s 5.156136156s 5.17226799s 5.270398437s 5.323626875s 5.449372805s]
Feb 15 12:57:09.172: INFO: 50 %ile: 2.044310508s
Feb 15 12:57:09.172: INFO: 90 %ile: 3.52055484s
Feb 15 12:57:09.172: INFO: 99 %ile: 5.323626875s
Feb 15 12:57:09.172: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 12:57:09.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-6150" for this suite.
Feb 15 12:57:57.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:57:57.299: INFO: namespace svc-latency-6150 deletion completed in 48.11545637s

• [SLOW TEST:86.346 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
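The latency measurement above works by starting one replication controller and then creating 200 short-lived Services that select its pod, timing how long each Service takes to acquire endpoints. A rough sketch of the two objects involved (the selector label and container image are assumptions; the real suite generates these programmatically with random suffixes):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: svc-latency-rc
spec:
  replicas: 1
  selector:
    name: svc-latency-rc
  template:
    metadata:
      labels:
        name: svc-latency-rc        # assumed label; must match the Service selector below
    spec:
      containers:
      - name: svc-latency-rc
        image: k8s.gcr.io/pause:3.1 # assumed; any long-running image works for endpoint tracking
---
apiVersion: v1
kind: Service
metadata:
  name: latency-svc-example         # the run above creates ~200 of these
spec:
  selector:
    name: svc-latency-rc
  ports:
  - protocol: TCP
    port: 80
```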
[sig-storage] Secrets
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 12:57:57.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-98fee4cb-1548-430d-8869-f6a7ed24be85
STEP: Creating a pod to test consume secrets
Feb 15 12:57:57.472: INFO: Waiting up to 5m0s for pod "pod-secrets-55d571aa-e94a-419e-bfd3-b6bfb1e33e09" in namespace "secrets-881" to be "success or failure"
Feb 15 12:57:57.484: INFO: Pod "pod-secrets-55d571aa-e94a-419e-bfd3-b6bfb1e33e09": Phase="Pending", Reason="", readiness=false. Elapsed: 11.99703ms
Feb 15 12:57:59.491: INFO: Pod "pod-secrets-55d571aa-e94a-419e-bfd3-b6bfb1e33e09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01955691s
Feb 15 12:58:01.499: INFO: Pod "pod-secrets-55d571aa-e94a-419e-bfd3-b6bfb1e33e09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027055569s
Feb 15 12:58:03.508: INFO: Pod "pod-secrets-55d571aa-e94a-419e-bfd3-b6bfb1e33e09": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035889759s
Feb 15 12:58:05.515: INFO: Pod "pod-secrets-55d571aa-e94a-419e-bfd3-b6bfb1e33e09": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042988595s
Feb 15 12:58:07.537: INFO: Pod "pod-secrets-55d571aa-e94a-419e-bfd3-b6bfb1e33e09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06562577s
STEP: Saw pod success
Feb 15 12:58:07.538: INFO: Pod "pod-secrets-55d571aa-e94a-419e-bfd3-b6bfb1e33e09" satisfied condition "success or failure"
Feb 15 12:58:07.544: INFO: Trying to get logs from node iruya-node pod pod-secrets-55d571aa-e94a-419e-bfd3-b6bfb1e33e09 container secret-volume-test:
STEP: delete the pod
Feb 15 12:58:07.633: INFO: Waiting for pod pod-secrets-55d571aa-e94a-419e-bfd3-b6bfb1e33e09 to disappear
Feb 15 12:58:07.637: INFO: Pod pod-secrets-55d571aa-e94a-419e-bfd3-b6bfb1e33e09 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 12:58:07.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-881" for this suite.
Feb 15 12:58:13.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 12:58:13.774: INFO: namespace secrets-881 deletion completed in 6.132731937s

• [SLOW TEST:16.474 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
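This spec mounts the same Secret into one pod twice, through two separate volumes. A minimal sketch of such a pod (the secret name, image, and mount paths are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                   # assumed; the e2e suite uses its own mounttest image
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:                           # the same secret backs both volumes
  - name: secret-volume-1
    secret:
      secretName: secret-test-example
  - name: secret-volume-2
    secret:
      secretName: secret-test-example
```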
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 12:58:13.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-6e4f7742-2de9-40f3-8e53-fd5903b517b9
STEP: Creating secret with name s-test-opt-upd-ab095264-b9b4-414a-8b1e-34c35b513bd5
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-6e4f7742-2de9-40f3-8e53-fd5903b517b9
STEP: Updating secret s-test-opt-upd-ab095264-b9b4-414a-8b1e-34c35b513bd5
STEP: Creating secret with name s-test-opt-create-dcda480d-d535-4468-9266-e979eeee9495
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 12:59:43.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4816" for this suite.
Feb 15 13:00:05.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:00:05.914: INFO: namespace projected-4816 deletion completed in 22.208780656s

• [SLOW TEST:112.139 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
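This spec builds a projected volume from Secrets marked `optional: true`, then deletes one, updates another, and creates a third while the pod is running, waiting for the kubelet to reflect each change in the mounted volume. A sketch of the shape of that pod (names and image are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example    # hypothetical name
spec:
  containers:
  - name: secret-volume-test
    image: busybox                       # assumed image
    command: ["sh", "-c", "sleep 3600"]  # stays up so volume updates can be observed
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-del-example      # deleted mid-test; optional keeps the pod healthy
          optional: true
      - secret:
          name: s-test-opt-upd-example      # updated mid-test; kubelet syncs the new content
          optional: true
      - secret:
          name: s-test-opt-create-example   # does not exist at pod start; appears once created
          optional: true
```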
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 13:00:05.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 15 13:00:06.933: INFO: Number of nodes with available pods: 0
Feb 15 13:00:06.933: INFO: Node iruya-node is running more than one daemon pod
Feb 15 13:00:08.236: INFO: Number of nodes with available pods: 0
Feb 15 13:00:08.236: INFO: Node iruya-node is running more than one daemon pod
Feb 15 13:00:08.947: INFO: Number of nodes with available pods: 0
Feb 15 13:00:08.947: INFO: Node iruya-node is running more than one daemon pod
Feb 15 13:00:09.946: INFO: Number of nodes with available pods: 0
Feb 15 13:00:09.946: INFO: Node iruya-node is running more than one daemon pod
Feb 15 13:00:10.970: INFO: Number of nodes with available pods: 0
Feb 15 13:00:10.970: INFO: Node iruya-node is running more than one daemon pod
Feb 15 13:00:11.965: INFO: Number of nodes with available pods: 0
Feb 15 13:00:11.965: INFO: Node iruya-node is running more than one daemon pod
Feb 15 13:00:13.172: INFO: Number of nodes with available pods: 0
Feb 15 13:00:13.172: INFO: Node iruya-node is running more than one daemon pod
Feb 15 13:00:13.955: INFO: Number of nodes with available pods: 0
Feb 15 13:00:13.956: INFO: Node iruya-node is running more than one daemon pod
Feb 15 13:00:14.946: INFO: Number of nodes with available pods: 1
Feb 15 13:00:14.946: INFO: Node iruya-node is running more than one daemon pod
Feb 15 13:00:15.950: INFO: Number of nodes with available pods: 1
Feb 15 13:00:15.950: INFO: Node iruya-node is running more than one daemon pod
Feb 15 13:00:16.954: INFO: Number of nodes with available pods: 2
Feb 15 13:00:16.954: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 15 13:00:17.028: INFO: Number of nodes with available pods: 2
Feb 15 13:00:17.028: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1264, will wait for the garbage collector to delete the pods
Feb 15 13:00:18.190: INFO: Deleting DaemonSet.extensions daemon-set took: 10.806025ms
Feb 15 13:00:18.491: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.783578ms
Feb 15 13:00:25.610: INFO: Number of nodes with available pods: 0
Feb 15 13:00:25.610: INFO: Number of running nodes: 0, number of available pods: 0
Feb 15 13:00:25.622: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1264/daemonsets","resourceVersion":"24445443"},"items":null}
Feb 15 13:00:25.625: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1264/pods","resourceVersion":"24445444"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 13:00:25.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1264" for this suite.
Feb 15 13:00:33.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:00:33.810: INFO: namespace daemonsets-1264 deletion completed in 8.166571972s

• [SLOW TEST:27.895 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
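The DaemonSet used here is deliberately simple: one pod per node, which the test then forces into a Failed phase to verify the controller recreates it. A minimal sketch of such a DaemonSet (the label key is an assumption; nginx:1.14-alpine is the image the suite's RollingUpdate spec reports later in this run):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set    # assumed label key
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```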
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 13:00:33.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 15 13:00:33.965: INFO: Waiting up to 5m0s for pod "pod-7d1cd31e-6a8c-4031-8170-9845e06d79d9" in namespace "emptydir-8186" to be "success or failure"
Feb 15 13:00:33.975: INFO: Pod "pod-7d1cd31e-6a8c-4031-8170-9845e06d79d9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.339131ms
Feb 15 13:00:35.983: INFO: Pod "pod-7d1cd31e-6a8c-4031-8170-9845e06d79d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01741278s
Feb 15 13:00:37.990: INFO: Pod "pod-7d1cd31e-6a8c-4031-8170-9845e06d79d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024754672s
Feb 15 13:00:39.999: INFO: Pod "pod-7d1cd31e-6a8c-4031-8170-9845e06d79d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033185857s
Feb 15 13:00:42.006: INFO: Pod "pod-7d1cd31e-6a8c-4031-8170-9845e06d79d9": Phase="Running", Reason="", readiness=true. Elapsed: 8.041014906s
Feb 15 13:00:44.014: INFO: Pod "pod-7d1cd31e-6a8c-4031-8170-9845e06d79d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049012624s
STEP: Saw pod success
Feb 15 13:00:44.015: INFO: Pod "pod-7d1cd31e-6a8c-4031-8170-9845e06d79d9" satisfied condition "success or failure"
Feb 15 13:00:44.019: INFO: Trying to get logs from node iruya-node pod pod-7d1cd31e-6a8c-4031-8170-9845e06d79d9 container test-container:
STEP: delete the pod
Feb 15 13:00:44.165: INFO: Waiting for pod pod-7d1cd31e-6a8c-4031-8170-9845e06d79d9 to disappear
Feb 15 13:00:44.179: INFO: Pod pod-7d1cd31e-6a8c-4031-8170-9845e06d79d9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 13:00:44.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8186" for this suite.
Feb 15 13:00:50.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:00:50.358: INFO: namespace emptydir-8186 deletion completed in 6.158735s

• [SLOW TEST:16.548 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
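The "(non-root,0777,tmpfs)" case requests a memory-backed emptyDir volume, runs as a non-root user, and checks that a file can be created in it with mode 0777. A minimal sketch (the UID, image, and command are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example   # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # assumed non-root UID
  containers:
  - name: test-container
    image: busybox             # assumed; the e2e suite uses its own mounttest image
    command: ["sh", "-c", "touch /test-volume/test-file && chmod 0777 /test-volume/test-file && stat -c '%a' /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
```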
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition
  creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 13:00:50.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 15 13:00:50.467: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 13:00:51.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7410" for this suite.
Feb 15 13:00:57.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:00:57.966: INFO: namespace custom-resource-definition-7410 deletion completed in 6.366961251s

• [SLOW TEST:7.607 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
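This spec simply registers a CustomResourceDefinition against the apiextensions API and deletes it again. A minimal v1beta1 definition of the kind involved, matching the API generation current for this v1.15 cluster (the group, kind, and names are generic documentation-style examples, not values from this run):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true                      # exposed via the REST API
    storage: true                     # the version persisted in etcd
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```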
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:08.328: INFO: Wrong image for pod: daemon-set-vqtlt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:09.347: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:09.347: INFO: Wrong image for pod: daemon-set-vqtlt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:10.343: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:10.343: INFO: Wrong image for pod: daemon-set-vqtlt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:11.624: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:11.624: INFO: Wrong image for pod: daemon-set-vqtlt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:12.341: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:12.341: INFO: Wrong image for pod: daemon-set-vqtlt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:13.569: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:13.569: INFO: Wrong image for pod: daemon-set-vqtlt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:14.341: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:14.341: INFO: Wrong image for pod: daemon-set-vqtlt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:15.346: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:15.346: INFO: Wrong image for pod: daemon-set-vqtlt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:15.346: INFO: Pod daemon-set-vqtlt is not available Feb 15 13:01:16.345: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:16.345: INFO: Wrong image for pod: daemon-set-vqtlt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:16.345: INFO: Pod daemon-set-vqtlt is not available Feb 15 13:01:17.347: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:17.347: INFO: Wrong image for pod: daemon-set-vqtlt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:17.347: INFO: Pod daemon-set-vqtlt is not available Feb 15 13:01:18.343: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Feb 15 13:01:18.343: INFO: Pod daemon-set-n98tl is not available Feb 15 13:01:19.347: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:19.347: INFO: Pod daemon-set-n98tl is not available Feb 15 13:01:20.353: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:20.354: INFO: Pod daemon-set-n98tl is not available Feb 15 13:01:21.347: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:21.347: INFO: Pod daemon-set-n98tl is not available Feb 15 13:01:22.345: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:22.345: INFO: Pod daemon-set-n98tl is not available Feb 15 13:01:23.459: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:23.459: INFO: Pod daemon-set-n98tl is not available Feb 15 13:01:24.399: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:24.399: INFO: Pod daemon-set-n98tl is not available Feb 15 13:01:25.343: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:25.343: INFO: Pod daemon-set-n98tl is not available Feb 15 13:01:26.346: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:27.393: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:28.343: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:29.365: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:30.343: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:31.341: INFO: Wrong image for pod: daemon-set-mcvmd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 15 13:01:31.341: INFO: Pod daemon-set-mcvmd is not available Feb 15 13:01:32.341: INFO: Pod daemon-set-k24tx is not available STEP: Check that daemon pods are still running on every node of the cluster. 
Feb 15 13:01:32.350: INFO: Number of nodes with available pods: 1 Feb 15 13:01:32.350: INFO: Node iruya-node is running more than one daemon pod Feb 15 13:01:33.363: INFO: Number of nodes with available pods: 1 Feb 15 13:01:33.364: INFO: Node iruya-node is running more than one daemon pod Feb 15 13:01:34.367: INFO: Number of nodes with available pods: 1 Feb 15 13:01:34.367: INFO: Node iruya-node is running more than one daemon pod Feb 15 13:01:35.361: INFO: Number of nodes with available pods: 1 Feb 15 13:01:35.361: INFO: Node iruya-node is running more than one daemon pod Feb 15 13:01:36.367: INFO: Number of nodes with available pods: 1 Feb 15 13:01:36.367: INFO: Node iruya-node is running more than one daemon pod Feb 15 13:01:37.371: INFO: Number of nodes with available pods: 1 Feb 15 13:01:37.371: INFO: Node iruya-node is running more than one daemon pod Feb 15 13:01:38.370: INFO: Number of nodes with available pods: 1 Feb 15 13:01:38.371: INFO: Node iruya-node is running more than one daemon pod Feb 15 13:01:39.366: INFO: Number of nodes with available pods: 2 Feb 15 13:01:39.366: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1628, will wait for the garbage collector to delete the pods Feb 15 13:01:39.531: INFO: Deleting DaemonSet.extensions daemon-set took: 73.475149ms Feb 15 13:01:39.932: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.512291ms Feb 15 13:01:47.947: INFO: Number of nodes with available pods: 0 Feb 15 13:01:47.947: INFO: Number of running nodes: 0, number of available pods: 0 Feb 15 13:01:47.985: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1628/daemonsets","resourceVersion":"24445704"},"items":null} Feb 15 13:01:47.990: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1628/pods","resourceVersion":"24445704"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:01:48.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1628" for this suite. 
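For orientation, the DaemonSet this test drives is constructed in Go by the e2e framework rather than loaded from a manifest, but an equivalent YAML sketch looks like the following. The image, namespace, and DaemonSet name are taken from the log above; the label key and container name are assumptions.

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
      namespace: daemonsets-1628
    spec:
      selector:
        matchLabels:
          daemonset-name: daemon-set    # assumed label key; must match the pod template
      updateStrategy:
        type: RollingUpdate             # the update strategy under test
      template:
        metadata:
          labels:
            daemonset-name: daemon-set
        spec:
          containers:
          - name: app                   # assumed container name
            image: docker.io/library/nginx:1.14-alpine

Updating spec.template.spec.containers[0].image to gcr.io/kubernetes-e2e-test-images/redis:1.0 is what triggers the node-by-node pod replacement traced by the "Wrong image for pod" lines above.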
Feb 15 13:01:54.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:01:54.311: INFO: namespace daemonsets-1628 deletion completed in 6.302771774s • [SLOW TEST:56.345 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:01:54.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 15 13:01:54.427: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a0a89b1-0fed-4bf2-86fa-abe58aa2c394" in namespace "downward-api-5382" to be "success or failure" Feb 15 13:01:54.435: INFO: Pod "downwardapi-volume-3a0a89b1-0fed-4bf2-86fa-abe58aa2c394": Phase="Pending", Reason="", readiness=false. Elapsed: 7.364769ms Feb 15 13:01:56.446: INFO: Pod "downwardapi-volume-3a0a89b1-0fed-4bf2-86fa-abe58aa2c394": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018662626s Feb 15 13:01:58.455: INFO: Pod "downwardapi-volume-3a0a89b1-0fed-4bf2-86fa-abe58aa2c394": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028003234s Feb 15 13:02:00.467: INFO: Pod "downwardapi-volume-3a0a89b1-0fed-4bf2-86fa-abe58aa2c394": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03942037s Feb 15 13:02:02.488: INFO: Pod "downwardapi-volume-3a0a89b1-0fed-4bf2-86fa-abe58aa2c394": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061313073s Feb 15 13:02:04.502: INFO: Pod "downwardapi-volume-3a0a89b1-0fed-4bf2-86fa-abe58aa2c394": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.075269893s STEP: Saw pod success Feb 15 13:02:04.503: INFO: Pod "downwardapi-volume-3a0a89b1-0fed-4bf2-86fa-abe58aa2c394" satisfied condition "success or failure" Feb 15 13:02:04.507: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3a0a89b1-0fed-4bf2-86fa-abe58aa2c394 container client-container: STEP: delete the pod Feb 15 13:02:04.638: INFO: Waiting for pod downwardapi-volume-3a0a89b1-0fed-4bf2-86fa-abe58aa2c394 to disappear Feb 15 13:02:04.643: INFO: Pod downwardapi-volume-3a0a89b1-0fed-4bf2-86fa-abe58aa2c394 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:02:04.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5382" for this suite. Feb 15 13:02:10.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:02:10.891: INFO: namespace downward-api-5382 deletion completed in 6.240161365s • [SLOW TEST:16.579 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:02:10.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 15 13:02:29.384: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 15 13:02:29.402: INFO: Pod pod-with-prestop-http-hook still exists Feb 15 13:02:31.403: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 15 13:02:31.413: INFO: Pod pod-with-prestop-http-hook still exists Feb 15 13:02:33.403: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 15 13:02:33.411: INFO: Pod pod-with-prestop-http-hook still exists Feb 15 13:02:35.403: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 15 13:02:35.421: INFO: Pod pod-with-prestop-http-hook still exists Feb 15 13:02:37.403: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 15 13:02:37.415: INFO: Pod pod-with-prestop-http-hook still exists Feb 15 13:02:39.403: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 15 13:02:39.411: INFO: Pod pod-with-prestop-http-hook still exists Feb 15 13:02:41.403: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 15 13:02:41.413: INFO: Pod pod-with-prestop-http-hook still exists Feb 15 13:02:43.404: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 15 13:02:43.413: INFO: Pod pod-with-prestop-http-hook still exists Feb 15 13:02:45.403: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 15 13:02:45.411: INFO: Pod pod-with-prestop-http-hook still exists Feb 15 13:02:47.403: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 15 13:02:47.409: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:02:47.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4432" for this suite. 
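The pod-with-prestop-http-hook pod deleted above follows the standard lifecycle-hook shape; a minimal sketch is below. The pod and container names match the log, but the image, path, port, and target host are assumptions, since the log does not echo the spec.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-prestop-http-hook
    spec:
      containers:
      - name: pod-with-prestop-http-hook
        image: docker.io/library/nginx:1.14-alpine   # assumed; any long-running image works
        lifecycle:
          preStop:
            httpGet:                   # kubelet issues this GET before sending SIGTERM
              path: /echo?msg=prestop  # assumed endpoint on the handler pod
              port: 8080               # assumed port
              host: 10.44.0.1          # assumed IP of the handler pod created in BeforeEach

The long "still exists" tail in the log is normal: the pod is only removed once the hook has fired and the container has terminated within its grace period.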
Feb 15 13:03:11.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:03:11.638: INFO: namespace container-lifecycle-hook-4432 deletion completed in 24.18846196s • [SLOW TEST:60.746 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:03:11.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1179 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Feb 15 13:03:11.905: INFO: Found 0 stateful pods, waiting for 3 Feb 15 13:03:22.325: INFO: Found 2 stateful pods, waiting for 3 Feb 15 13:03:31.917: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 15 13:03:31.917: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 15 13:03:31.917: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 15 13:03:41.924: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 15 13:03:41.924: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 15 13:03:41.924: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Feb 15 13:03:41.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1179 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 13:03:44.696: INFO: stderr: "I0215 13:03:44.287741 35 log.go:172] (0xc00072e370) (0xc00069e8c0) Create stream\nI0215 13:03:44.288218 35 log.go:172] (0xc00072e370) (0xc00069e8c0) Stream added, broadcasting: 1\nI0215 13:03:44.297908 35 log.go:172] (0xc00072e370) Reply frame received for 1\nI0215 13:03:44.298146 35 log.go:172] (0xc00072e370) (0xc0006840a0) Create stream\nI0215 13:03:44.298205 35 log.go:172] (0xc00072e370) (0xc0006840a0) Stream added, broadcasting: 3\nI0215 13:03:44.300109 35 log.go:172] (0xc00072e370) Reply frame received for 3\nI0215 13:03:44.300180 35 log.go:172] 
(0xc00072e370) (0xc000388000) Create stream\nI0215 13:03:44.300200 35 log.go:172] (0xc00072e370) (0xc000388000) Stream added, broadcasting: 5\nI0215 13:03:44.301644 35 log.go:172] (0xc00072e370) Reply frame received for 5\nI0215 13:03:44.459234 35 log.go:172] (0xc00072e370) Data frame received for 5\nI0215 13:03:44.459269 35 log.go:172] (0xc000388000) (5) Data frame handling\nI0215 13:03:44.459291 35 log.go:172] (0xc000388000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0215 13:03:44.582917 35 log.go:172] (0xc00072e370) Data frame received for 3\nI0215 13:03:44.582955 35 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0215 13:03:44.582975 35 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0215 13:03:44.681906 35 log.go:172] (0xc00072e370) (0xc0006840a0) Stream removed, broadcasting: 3\nI0215 13:03:44.682447 35 log.go:172] (0xc00072e370) Data frame received for 1\nI0215 13:03:44.682739 35 log.go:172] (0xc00072e370) (0xc000388000) Stream removed, broadcasting: 5\nI0215 13:03:44.682883 35 log.go:172] (0xc00069e8c0) (1) Data frame handling\nI0215 13:03:44.682970 35 log.go:172] (0xc00069e8c0) (1) Data frame sent\nI0215 13:03:44.683046 35 log.go:172] (0xc00072e370) (0xc00069e8c0) Stream removed, broadcasting: 1\nI0215 13:03:44.683109 35 log.go:172] (0xc00072e370) Go away received\nI0215 13:03:44.684110 35 log.go:172] (0xc00072e370) (0xc00069e8c0) Stream removed, broadcasting: 1\nI0215 13:03:44.684134 35 log.go:172] (0xc00072e370) (0xc0006840a0) Stream removed, broadcasting: 3\nI0215 13:03:44.684148 35 log.go:172] (0xc00072e370) (0xc000388000) Stream removed, broadcasting: 5\n" Feb 15 13:03:44.696: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 13:03:44.696: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 15 13:03:54.761: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 15 13:04:04.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1179 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:04:05.325: INFO: stderr: "I0215 13:04:05.072997 59 log.go:172] (0xc0008900b0) (0xc0007428c0) Create stream\nI0215 13:04:05.073201 59 log.go:172] (0xc0008900b0) (0xc0007428c0) Stream added, broadcasting: 1\nI0215 13:04:05.076639 59 log.go:172] (0xc0008900b0) Reply frame received for 1\nI0215 13:04:05.076684 59 log.go:172] (0xc0008900b0) (0xc0007b8000) Create stream\nI0215 13:04:05.076704 59 log.go:172] (0xc0008900b0) (0xc0007b8000) Stream added, broadcasting: 3\nI0215 13:04:05.079846 59 log.go:172] (0xc0008900b0) Reply frame received for 3\nI0215 13:04:05.079909 59 log.go:172] (0xc0008900b0) (0xc0002b8000) Create stream\nI0215 13:04:05.079926 59 log.go:172] (0xc0008900b0) (0xc0002b8000) Stream added, broadcasting: 5\nI0215 13:04:05.081587 59 log.go:172] (0xc0008900b0) Reply frame received for 5\nI0215 13:04:05.184969 59 log.go:172] (0xc0008900b0) Data frame received for 5\nI0215 13:04:05.185694 59 log.go:172] (0xc0002b8000) (5) Data frame handling\nI0215 13:04:05.185778 59 log.go:172] (0xc0002b8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0215 13:04:05.186307 59 log.go:172] (0xc0008900b0) Data frame received for 3\nI0215 13:04:05.186407 59 log.go:172] 
(0xc0007b8000) (3) Data frame handling\nI0215 13:04:05.186480 59 log.go:172] (0xc0007b8000) (3) Data frame sent\nI0215 13:04:05.303154 59 log.go:172] (0xc0008900b0) (0xc0007b8000) Stream removed, broadcasting: 3\nI0215 13:04:05.303502 59 log.go:172] (0xc0008900b0) Data frame received for 1\nI0215 13:04:05.303547 59 log.go:172] (0xc0007428c0) (1) Data frame handling\nI0215 13:04:05.303623 59 log.go:172] (0xc0007428c0) (1) Data frame sent\nI0215 13:04:05.303647 59 log.go:172] (0xc0008900b0) (0xc0002b8000) Stream removed, broadcasting: 5\nI0215 13:04:05.303818 59 log.go:172] (0xc0008900b0) (0xc0007428c0) Stream removed, broadcasting: 1\nI0215 13:04:05.303897 59 log.go:172] (0xc0008900b0) Go away received\nI0215 13:04:05.305497 59 log.go:172] (0xc0008900b0) (0xc0007428c0) Stream removed, broadcasting: 1\nI0215 13:04:05.305522 59 log.go:172] (0xc0008900b0) (0xc0007b8000) Stream removed, broadcasting: 3\nI0215 13:04:05.305537 59 log.go:172] (0xc0008900b0) (0xc0002b8000) Stream removed, broadcasting: 5\n" Feb 15 13:04:05.325: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 13:04:05.325: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 15 13:04:15.364: INFO: Waiting for StatefulSet statefulset-1179/ss2 to complete update Feb 15 13:04:15.365: INFO: Waiting for Pod statefulset-1179/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 13:04:15.365: INFO: Waiting for Pod statefulset-1179/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 13:04:25.398: INFO: Waiting for StatefulSet statefulset-1179/ss2 to complete update Feb 15 13:04:25.398: INFO: Waiting for Pod statefulset-1179/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 13:04:25.398: INFO: Waiting for Pod statefulset-1179/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 13:04:35.384: INFO: Waiting for StatefulSet statefulset-1179/ss2 to complete update Feb 15 13:04:35.384: INFO: Waiting for Pod statefulset-1179/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 13:04:35.384: INFO: Waiting for Pod statefulset-1179/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 13:04:45.423: INFO: Waiting for StatefulSet statefulset-1179/ss2 to complete update Feb 15 13:04:45.423: INFO: Waiting for Pod statefulset-1179/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 13:04:55.381: INFO: Waiting for StatefulSet statefulset-1179/ss2 to complete update Feb 15 13:04:55.381: INFO: Waiting for Pod statefulset-1179/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 15 13:05:05.395: INFO: Waiting for StatefulSet statefulset-1179/ss2 to complete update STEP: Rolling back to a previous revision Feb 15 13:05:15.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1179 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 13:05:15.975: INFO: stderr: "I0215 13:05:15.593729 74 log.go:172] (0xc000a8e420) (0xc000664a00) Create stream\nI0215 13:05:15.593884 74 log.go:172] (0xc000a8e420) (0xc000664a00) Stream added, broadcasting: 1\nI0215 13:05:15.597260 74 log.go:172] (0xc000a8e420) Reply frame received for 1\nI0215 13:05:15.597310 74 log.go:172] (0xc000a8e420) (0xc0008f2000) Create stream\nI0215 13:05:15.597330 74 log.go:172] (0xc000a8e420) (0xc0008f2000) Stream added, 
broadcasting: 3\nI0215 13:05:15.598909 74 log.go:172] (0xc000a8e420) Reply frame received for 3\nI0215 13:05:15.598986 74 log.go:172] (0xc000a8e420) (0xc0008f20a0) Create stream\nI0215 13:05:15.598999 74 log.go:172] (0xc000a8e420) (0xc0008f20a0) Stream added, broadcasting: 5\nI0215 13:05:15.600102 74 log.go:172] (0xc000a8e420) Reply frame received for 5\nI0215 13:05:15.783547 74 log.go:172] (0xc000a8e420) Data frame received for 5\nI0215 13:05:15.783612 74 log.go:172] (0xc0008f20a0) (5) Data frame handling\nI0215 13:05:15.783642 74 log.go:172] (0xc0008f20a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0215 13:05:15.880380 74 log.go:172] (0xc000a8e420) Data frame received for 3\nI0215 13:05:15.880484 74 log.go:172] (0xc0008f2000) (3) Data frame handling\nI0215 13:05:15.880510 74 log.go:172] (0xc0008f2000) (3) Data frame sent\nI0215 13:05:15.967031 74 log.go:172] (0xc000a8e420) (0xc0008f2000) Stream removed, broadcasting: 3\nI0215 13:05:15.967995 74 log.go:172] (0xc000a8e420) (0xc0008f20a0) Stream removed, broadcasting: 5\nI0215 13:05:15.968098 74 log.go:172] (0xc000a8e420) Data frame received for 1\nI0215 13:05:15.968149 74 log.go:172] (0xc000664a00) (1) Data frame handling\nI0215 13:05:15.968217 74 log.go:172] (0xc000664a00) (1) Data frame sent\nI0215 13:05:15.968269 74 log.go:172] (0xc000a8e420) (0xc000664a00) Stream removed, broadcasting: 1\nI0215 13:05:15.968309 74 log.go:172] (0xc000a8e420) Go away received\nI0215 13:05:15.969198 74 log.go:172] (0xc000a8e420) (0xc000664a00) Stream removed, broadcasting: 1\nI0215 13:05:15.969224 74 log.go:172] (0xc000a8e420) (0xc0008f2000) Stream removed, broadcasting: 3\nI0215 13:05:15.969235 74 log.go:172] (0xc000a8e420) (0xc0008f20a0) Stream removed, broadcasting: 5\n" Feb 15 13:05:15.976: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 13:05:15.976: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 13:05:26.038: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 15 13:05:36.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1179 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:05:36.407: INFO: stderr: "I0215 13:05:36.243153 94 log.go:172] (0xc000528b00) (0xc000592aa0) Create stream\nI0215 13:05:36.243372 94 log.go:172] (0xc000528b00) (0xc000592aa0) Stream added, broadcasting: 1\nI0215 13:05:36.246216 94 log.go:172] (0xc000528b00) Reply frame received for 1\nI0215 13:05:36.246248 94 log.go:172] (0xc000528b00) (0xc000592b40) Create stream\nI0215 13:05:36.246260 94 log.go:172] (0xc000528b00) (0xc000592b40) Stream added, broadcasting: 3\nI0215 13:05:36.247292 94 log.go:172] (0xc000528b00) Reply frame received for 3\nI0215 13:05:36.247332 94 log.go:172] (0xc000528b00) (0xc00091c000) Create stream\nI0215 13:05:36.247353 94 log.go:172] (0xc000528b00) (0xc00091c000) Stream added, broadcasting: 5\nI0215 13:05:36.248199 94 log.go:172] (0xc000528b00) Reply frame received for 5\nI0215 13:05:36.332880 94 log.go:172] (0xc000528b00) Data frame received for 5\nI0215 13:05:36.332935 94 log.go:172] (0xc00091c000) (5) Data frame handling\nI0215 13:05:36.332948 94 log.go:172] (0xc00091c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0215 13:05:36.332960 94 log.go:172] (0xc000528b00) Data frame received for 3\nI0215 13:05:36.332966 94 log.go:172] (0xc000592b40) (3) Data 
frame handling\nI0215 13:05:36.332972 94 log.go:172] (0xc000592b40) (3) Data frame sent\nI0215 13:05:36.398513 94 log.go:172] (0xc000528b00) Data frame received for 1\nI0215 13:05:36.398750 94 log.go:172] (0xc000528b00) (0xc000592b40) Stream removed, broadcasting: 3\nI0215 13:05:36.398826 94 log.go:172] (0xc000592aa0) (1) Data frame handling\nI0215 13:05:36.398880 94 log.go:172] (0xc000592aa0) (1) Data frame sent\nI0215 13:05:36.398946 94 log.go:172] (0xc000528b00) (0xc000592aa0) Stream removed, broadcasting: 1\nI0215 13:05:36.400557 94 log.go:172] (0xc000528b00) (0xc00091c000) Stream removed, broadcasting: 5\nI0215 13:05:36.400663 94 log.go:172] (0xc000528b00) Go away received\nI0215 13:05:36.400713 94 log.go:172] (0xc000528b00) (0xc000592aa0) Stream removed, broadcasting: 1\nI0215 13:05:36.400737 94 log.go:172] (0xc000528b00) (0xc000592b40) Stream removed, broadcasting: 3\nI0215 13:05:36.400747 94 log.go:172] (0xc000528b00) (0xc00091c000) Stream removed, broadcasting: 5\n" Feb 15 13:05:36.407: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 13:05:36.407: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 15 13:05:47.750: INFO: Waiting for StatefulSet statefulset-1179/ss2 to complete update Feb 15 13:05:47.751: INFO: Waiting for Pod statefulset-1179/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 13:05:47.751: INFO: Waiting for Pod statefulset-1179/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 13:05:47.751: INFO: Waiting for Pod statefulset-1179/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 13:05:58.145: INFO: Waiting for StatefulSet statefulset-1179/ss2 to complete update Feb 15 13:05:58.145: INFO: Waiting for Pod statefulset-1179/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 13:05:58.145: INFO: Waiting for Pod statefulset-1179/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 13:06:07.769: INFO: Waiting for StatefulSet statefulset-1179/ss2 to complete update Feb 15 13:06:07.769: INFO: Waiting for Pod statefulset-1179/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 13:06:07.769: INFO: Waiting for Pod statefulset-1179/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 13:06:17.774: INFO: Waiting for StatefulSet statefulset-1179/ss2 to complete update Feb 15 13:06:17.774: INFO: Waiting for Pod statefulset-1179/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 13:06:27.764: INFO: Waiting for StatefulSet statefulset-1179/ss2 to complete update Feb 15 13:06:27.764: INFO: Waiting for Pod statefulset-1179/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 15 13:06:37.782: INFO: Waiting for StatefulSet statefulset-1179/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 15 13:06:47.764: INFO: Deleting all statefulset in ns statefulset-1179 Feb 15 13:06:47.768: INFO: Scaling statefulset ss2 to 0 Feb 15 13:07:17.815: INFO: Waiting for statefulset status.replicas updated to 0 Feb 15 13:07:17.825: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:07:17.863: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1179" for this suite. Feb 15 13:07:25.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:07:26.105: INFO: namespace statefulset-1179 deletion completed in 8.223402918s • [SLOW TEST:254.466 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:07:26.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Feb 15 13:07:35.716: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7584 pod-service-account-92419530-0a9a-427d-a83b-e6e05df5381f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Feb 15 13:07:36.250: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7584 pod-service-account-92419530-0a9a-427d-a83b-e6e05df5381f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Feb 15 13:07:36.943: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7584 pod-service-account-92419530-0a9a-427d-a83b-e6e05df5381f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:07:37.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7584" for this suite. 
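The three kubectl exec probes above work because, with a default service account, the kubelet auto-mounts the token secret into every container; a minimal pod of the shape being probed might look like the following (the pod name and image are assumptions, the mount paths are the ones read in the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-service-account-test   # real test pods carry a UUID suffix
    spec:
      serviceAccountName: default
      containers:
      - name: test
        image: docker.io/library/busybox:1.29   # assumed image
        command: ["sleep", "3600"]              # keep the pod alive for exec
    # The auto-mounted files then sit at:
    #   /var/run/secrets/kubernetes.io/serviceaccount/token
    #   /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    #   /var/run/secrets/kubernetes.io/serviceaccount/namespace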
Feb 15 13:07:43.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:07:43.423: INFO: namespace svcaccounts-7584 deletion completed in 6.098931565s • [SLOW TEST:17.317 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:07:43.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-a6a74747-43ed-4106-94d1-fcfd637eaefa STEP: Creating a pod to test consume secrets Feb 15 13:07:43.620: INFO: Waiting up to 5m0s for pod "pod-secrets-9562e9cb-77d4-4892-beea-fca7d087ee87" in namespace "secrets-13" to be "success or failure" Feb 15 13:07:43.623: INFO: Pod "pod-secrets-9562e9cb-77d4-4892-beea-fca7d087ee87": Phase="Pending", Reason="", readiness=false. Elapsed: 3.270662ms Feb 15 13:07:45.634: INFO: Pod "pod-secrets-9562e9cb-77d4-4892-beea-fca7d087ee87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014424995s Feb 15 13:07:47.644: INFO: Pod "pod-secrets-9562e9cb-77d4-4892-beea-fca7d087ee87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02378379s Feb 15 13:07:49.661: INFO: Pod "pod-secrets-9562e9cb-77d4-4892-beea-fca7d087ee87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040704645s Feb 15 13:07:51.722: INFO: Pod "pod-secrets-9562e9cb-77d4-4892-beea-fca7d087ee87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102018801s Feb 15 13:07:53.731: INFO: Pod "pod-secrets-9562e9cb-77d4-4892-beea-fca7d087ee87": Phase="Pending", Reason="", readiness=false. Elapsed: 10.110754955s Feb 15 13:07:55.739: INFO: Pod "pod-secrets-9562e9cb-77d4-4892-beea-fca7d087ee87": Phase="Pending", Reason="", readiness=false. Elapsed: 12.119377918s Feb 15 13:07:57.750: INFO: Pod "pod-secrets-9562e9cb-77d4-4892-beea-fca7d087ee87": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.130055046s STEP: Saw pod success Feb 15 13:07:57.750: INFO: Pod "pod-secrets-9562e9cb-77d4-4892-beea-fca7d087ee87" satisfied condition "success or failure" Feb 15 13:07:57.754: INFO: Trying to get logs from node iruya-node pod pod-secrets-9562e9cb-77d4-4892-beea-fca7d087ee87 container secret-volume-test: STEP: delete the pod Feb 15 13:07:58.041: INFO: Waiting for pod pod-secrets-9562e9cb-77d4-4892-beea-fca7d087ee87 to disappear Feb 15 13:07:58.052: INFO: Pod pod-secrets-9562e9cb-77d4-4892-beea-fca7d087ee87 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:07:58.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-13" for this suite. Feb 15 13:08:04.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:08:04.250: INFO: namespace secrets-13 deletion completed in 6.186622617s STEP: Destroying namespace "secret-namespace-7619" for this suite. Feb 15 13:08:10.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:08:10.431: INFO: namespace secret-namespace-7619 deletion completed in 6.180206817s • [SLOW TEST:27.007 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:08:10.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 15 13:08:10.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5999' Feb 15 13:08:10.970: INFO: stderr: "" Feb 15 13:08:10.970: INFO: stdout: "replicationcontroller/redis-master created\n" Feb 15 13:08:10.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5999' Feb 15 13:08:11.545: INFO: stderr: "" Feb 15 13:08:11.545: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
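The two manifests piped to kubectl create -f above are not echoed in the log. A reconstruction consistent with the describe output further down (the names, labels, selector, image, and ports are all taken from that output; everything else is assumed):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: redis-master
    spec:
      replicas: 1
      selector:
        app: redis
        role: master
      template:
        metadata:
          labels:
            app: redis
            role: master
        spec:
          containers:
          - name: redis-master
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0
            ports:
            - name: redis-server
              containerPort: 6379
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-master
    spec:
      selector:
        app: redis
        role: master
      ports:
      - port: 6379
        targetPort: redis-server   # resolves to the named container port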
Feb 15 13:08:12.564: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:08:12.564: INFO: Found 0 / 1 Feb 15 13:08:13.559: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:08:13.559: INFO: Found 0 / 1 Feb 15 13:08:14.563: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:08:14.563: INFO: Found 0 / 1 Feb 15 13:08:15.557: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:08:15.557: INFO: Found 0 / 1 Feb 15 13:08:16.563: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:08:16.563: INFO: Found 0 / 1 Feb 15 13:08:17.561: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:08:17.561: INFO: Found 0 / 1 Feb 15 13:08:18.566: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:08:18.566: INFO: Found 0 / 1 Feb 15 13:08:19.552: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:08:19.552: INFO: Found 0 / 1 Feb 15 13:08:20.562: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:08:20.562: INFO: Found 1 / 1 Feb 15 13:08:20.562: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 15 13:08:20.571: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:08:20.571: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 15 13:08:20.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-4ng72 --namespace=kubectl-5999' Feb 15 13:08:20.728: INFO: stderr: "" Feb 15 13:08:20.728: INFO: stdout: "Name: redis-master-4ng72\nNamespace: kubectl-5999\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Sat, 15 Feb 2020 13:08:11 +0000\nLabels: app=redis\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://2d246122f214d7966c68394150ab28edd555d6be824f90653155ff212498b7bf\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 15 Feb 2020 13:08:19 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-vqt9b (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-vqt9b:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-vqt9b\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 9s default-scheduler Successfully assigned kubectl-5999/redis-master-4ng72 to iruya-node\n Normal Pulled 5s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-node Created container redis-master\n Normal Started 1s kubelet, iruya-node Started container redis-master\n" Feb 15 13:08:20.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-5999' Feb 15 13:08:20.892: INFO: stderr: "" Feb 15 13:08:20.892: INFO: stdout: "Name: redis-master\nNamespace: kubectl-5999\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 
Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 10s replication-controller Created pod: redis-master-4ng72\n" Feb 15 13:08:20.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-5999' Feb 15 13:08:21.043: INFO: stderr: "" Feb 15 13:08:21.043: INFO: stdout: "Name: redis-master\nNamespace: kubectl-5999\nLabels: app=redis\n role=master\nAnnotations: <none>\nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.99.145.23\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: <none>\n" Feb 15 13:08:21.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Feb 15 13:08:21.199: INFO: stderr: "" Feb 15 13:08:21.199: INFO: stdout: "Name: iruya-node\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: <none>\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Sat, 15 Feb 2020 13:07:32 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 15 Feb 2020 13:07:32 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 15 Feb 2020 13:07:32 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 15 Feb 2020 13:07:32 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 195d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 126d\n kubectl-5999 redis-master-4ng72 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Feb 15 13:08:21.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5999' Feb 15 13:08:21.301: INFO: stderr: "" Feb 15 13:08:21.301: INFO: stdout: "Name: kubectl-5999\nLabels: e2e-framework=kubectl\n e2e-run=eb19dab3-29cc-43ae-8042-7ddfc54b072f\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:08:21.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5999" for this suite. 
Feb 15 13:08:43.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:08:43.432: INFO: namespace kubectl-5999 deletion completed in 22.127876709s • [SLOW TEST:33.000 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:08:43.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-a0903351-004a-4359-8f1d-2d49152ea758 STEP: Creating secret with name secret-projected-all-test-volume-b0c0d227-6bca-41f4-b325-ead1ca1d4e19 STEP: Creating a pod to test Check all projections for projected volume plugin Feb 15 13:08:43.675: INFO: Waiting up to 5m0s for pod "projected-volume-31a4ed2d-bf47-457c-a641-ee5a8ae58342" in namespace "projected-5926" to be "success or failure" Feb 15 13:08:43.712: INFO: Pod "projected-volume-31a4ed2d-bf47-457c-a641-ee5a8ae58342": Phase="Pending", Reason="", readiness=false. Elapsed: 36.689401ms Feb 15 13:08:45.722: INFO: Pod "projected-volume-31a4ed2d-bf47-457c-a641-ee5a8ae58342": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046951849s Feb 15 13:08:47.734: INFO: Pod "projected-volume-31a4ed2d-bf47-457c-a641-ee5a8ae58342": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058595502s Feb 15 13:08:49.744: INFO: Pod "projected-volume-31a4ed2d-bf47-457c-a641-ee5a8ae58342": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069206855s Feb 15 13:08:51.759: INFO: Pod "projected-volume-31a4ed2d-bf47-457c-a641-ee5a8ae58342": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08458548s Feb 15 13:08:53.779: INFO: Pod "projected-volume-31a4ed2d-bf47-457c-a641-ee5a8ae58342": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.103782038s STEP: Saw pod success Feb 15 13:08:53.779: INFO: Pod "projected-volume-31a4ed2d-bf47-457c-a641-ee5a8ae58342" satisfied condition "success or failure" Feb 15 13:08:53.799: INFO: Trying to get logs from node iruya-node pod projected-volume-31a4ed2d-bf47-457c-a641-ee5a8ae58342 container projected-all-volume-test: STEP: delete the pod Feb 15 13:08:53.873: INFO: Waiting for pod projected-volume-31a4ed2d-bf47-457c-a641-ee5a8ae58342 to disappear Feb 15 13:08:53.906: INFO: Pod projected-volume-31a4ed2d-bf47-457c-a641-ee5a8ae58342 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:08:53.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5926" for this suite. Feb 15 13:09:00.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:09:00.203: INFO: namespace projected-5926 deletion completed in 6.292344073s • [SLOW TEST:16.771 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:09:00.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 15 13:09:00.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-296a33f2-a83d-4480-8001-5fcafe258e88" in namespace "downward-api-2333" to be "success or failure" Feb 15 13:09:00.449: INFO: Pod "downwardapi-volume-296a33f2-a83d-4480-8001-5fcafe258e88": Phase="Pending", Reason="", readiness=false. Elapsed: 22.996045ms Feb 15 13:09:02.462: INFO: Pod "downwardapi-volume-296a33f2-a83d-4480-8001-5fcafe258e88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036328604s Feb 15 13:09:04.479: INFO: Pod "downwardapi-volume-296a33f2-a83d-4480-8001-5fcafe258e88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052813305s Feb 15 13:09:06.491: INFO: Pod "downwardapi-volume-296a33f2-a83d-4480-8001-5fcafe258e88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065334511s Feb 15 13:09:08.977: INFO: Pod "downwardapi-volume-296a33f2-a83d-4480-8001-5fcafe258e88": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.551161568s Feb 15 13:09:10.986: INFO: Pod "downwardapi-volume-296a33f2-a83d-4480-8001-5fcafe258e88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.560439272s STEP: Saw pod success Feb 15 13:09:10.986: INFO: Pod "downwardapi-volume-296a33f2-a83d-4480-8001-5fcafe258e88" satisfied condition "success or failure" Feb 15 13:09:10.995: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-296a33f2-a83d-4480-8001-5fcafe258e88 container client-container: STEP: delete the pod Feb 15 13:09:11.069: INFO: Waiting for pod downwardapi-volume-296a33f2-a83d-4480-8001-5fcafe258e88 to disappear Feb 15 13:09:11.093: INFO: Pod downwardapi-volume-296a33f2-a83d-4480-8001-5fcafe258e88 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:09:11.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2333" for this suite. Feb 15 13:09:17.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:09:17.291: INFO: namespace downward-api-2333 deletion completed in 6.191661209s • [SLOW TEST:17.087 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:09:17.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 15 13:09:29.091: INFO: Successfully updated pod "annotationupdate855b1505-aafb-4a3f-8e46-4ba2401dc435" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:09:31.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1751" for this suite. 
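The three Downward API volume tests in this run (default memory limit, default cpu limit, and live annotation updates) all exercise the same volume type; a single hedged sketch covers both mechanisms. The container name client-container matches the log; the pod name, image, and annotation value are assumptions.

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-example   # real test pods carry UUID names
      annotations:
        build: "one"                     # assumed; the annotation test mutates this in place
    spec:
      containers:
      - name: client-container
        image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations   # kubelet rewrites this file when annotations change
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu              # with no limit set, this defaults to node allocatable

The default-limit tests pass precisely because of that fallback: when limits.cpu or limits.memory is unset, the projected file contains the node's allocatable value rather than an error.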
Feb 15 13:09:53.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:09:53.289: INFO: namespace downward-api-1751 deletion completed in 22.107970127s • [SLOW TEST:35.998 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:09:53.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-8606 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8606 STEP: Deleting pre-stop pod Feb 15 13:10:18.459: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:10:18.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8606" for this suite. 
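The JSON the server pod reports above ({"prestop": 1}) is the handshake this test checks: a tester pod whose preStop hook calls back into the server before dying. The exact hook the framework installs is not shown in the log; a hedged sketch of the idea, with the image, command, and URL all assumed:

    apiVersion: v1
    kind: Pod
    metadata:
      name: tester
      namespace: prestop-8606
    spec:
      containers:
      - name: tester
        image: docker.io/library/busybox:1.29   # assumed image
        command: ["sleep", "600"]
        lifecycle:
          preStop:
            exec:
              # Assumed callback: on deletion, notify the server pod, which is
              # what increments the "prestop" counter seen in the JSON above.
              command: ["wget", "-O-", "http://SERVER_POD_IP:8080/write"]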
Feb 15 13:10:58.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:10:58.913: INFO: namespace prestop-8606 deletion completed in 40.426484751s • [SLOW TEST:65.624 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:10:58.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-dc2f8ddd-5cfc-4e65-950c-afe8eb3cf9da STEP: Creating a pod to test consume secrets Feb 15 13:10:59.039: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-38ba99b8-cda5-4259-8a43-fb3cf74ca3bd" in namespace "projected-9267" to be "success or failure" Feb 15 13:10:59.054: INFO: Pod "pod-projected-secrets-38ba99b8-cda5-4259-8a43-fb3cf74ca3bd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.479851ms Feb 15 13:11:01.061: INFO: Pod "pod-projected-secrets-38ba99b8-cda5-4259-8a43-fb3cf74ca3bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021522432s Feb 15 13:11:03.082: INFO: Pod "pod-projected-secrets-38ba99b8-cda5-4259-8a43-fb3cf74ca3bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042524671s Feb 15 13:11:05.096: INFO: Pod "pod-projected-secrets-38ba99b8-cda5-4259-8a43-fb3cf74ca3bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056539278s Feb 15 13:11:07.112: INFO: Pod "pod-projected-secrets-38ba99b8-cda5-4259-8a43-fb3cf74ca3bd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072095723s Feb 15 13:11:09.430: INFO: Pod "pod-projected-secrets-38ba99b8-cda5-4259-8a43-fb3cf74ca3bd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.389872366s Feb 15 13:11:11.436: INFO: Pod "pod-projected-secrets-38ba99b8-cda5-4259-8a43-fb3cf74ca3bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.396484967s STEP: Saw pod success Feb 15 13:11:11.436: INFO: Pod "pod-projected-secrets-38ba99b8-cda5-4259-8a43-fb3cf74ca3bd" satisfied condition "success or failure" Feb 15 13:11:11.439: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-38ba99b8-cda5-4259-8a43-fb3cf74ca3bd container secret-volume-test: STEP: delete the pod Feb 15 13:11:11.576: INFO: Waiting for pod pod-projected-secrets-38ba99b8-cda5-4259-8a43-fb3cf74ca3bd to disappear Feb 15 13:11:11.583: INFO: Pod pod-projected-secrets-38ba99b8-cda5-4259-8a43-fb3cf74ca3bd no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:11:11.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9267" for this suite. Feb 15 13:11:17.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:11:17.759: INFO: namespace projected-9267 deletion completed in 6.16303329s • [SLOW TEST:18.846 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:11:17.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:11:25.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7680" for this suite. 
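The Kubelet test above simply runs a busybox pod with a fixed command and asserts that the command's stdout shows up in the container log. Reproduced by hand, with the pod name as an assumption:

kubectl run logs-demo --image=busybox --restart=Never -- sh -c 'echo "Hello from busybox"'
# Once the container has run to completion, its stdout is served by the kubelet:
kubectl logs logs-demo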
Feb 15 13:12:27.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:12:28.119: INFO: namespace kubelet-test-7680 deletion completed in 1m2.153881972s • [SLOW TEST:70.360 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:12:28.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:13:00.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8176" for this suite. Feb 15 13:13:06.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:13:06.556: INFO: namespace namespaces-8176 deletion completed in 6.135782515s STEP: Destroying namespace "nsdeletetest-6448" for this suite. Feb 15 13:13:06.559: INFO: Namespace nsdeletetest-6448 was already deleted STEP: Destroying namespace "nsdeletetest-2010" for this suite. 
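The Namespaces test above confirms that deleting a namespace garbage-collects the pods inside it, and that a recreated namespace of the same name starts out empty. The equivalent manual check, with names assumed:

kubectl create namespace nsdelete-demo
kubectl run sleeper --image=busybox --restart=Never -n nsdelete-demo -- sleep 3600
kubectl delete namespace nsdelete-demo    # blocks until the namespace is fully removed
kubectl create namespace nsdelete-demo
kubectl get pods -n nsdelete-demo         # expected: No resources found.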
Feb 15 13:13:12.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:13:12.696: INFO: namespace nsdeletetest-2010 deletion completed in 6.137197564s • [SLOW TEST:44.576 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:13:12.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-e852c411-e496-4a92-a4e3-d6c8b70e99a0 in namespace container-probe-6642 Feb 15 13:13:20.839: INFO: Started pod liveness-e852c411-e496-4a92-a4e3-d6c8b70e99a0 in namespace container-probe-6642 STEP: checking the pod's current state and verifying that restartCount is present Feb 15 13:13:20.849: INFO: Initial restart count of pod liveness-e852c411-e496-4a92-a4e3-d6c8b70e99a0 is 0 Feb 15 13:13:38.959: INFO: Restart count of pod container-probe-6642/liveness-e852c411-e496-4a92-a4e3-d6c8b70e99a0 is now 1 (18.110128273s elapsed) Feb 15 13:13:59.061: INFO: Restart count of pod container-probe-6642/liveness-e852c411-e496-4a92-a4e3-d6c8b70e99a0 is now 2 (38.212316735s elapsed) Feb 15 13:14:20.141: INFO: Restart count of pod container-probe-6642/liveness-e852c411-e496-4a92-a4e3-d6c8b70e99a0 is now 3 (59.292682324s elapsed) Feb 15 13:14:38.311: INFO: Restart count of pod container-probe-6642/liveness-e852c411-e496-4a92-a4e3-d6c8b70e99a0 is now 4 (1m17.462718924s elapsed) Feb 15 13:15:46.153: INFO: Restart count of pod container-probe-6642/liveness-e852c411-e496-4a92-a4e3-d6c8b70e99a0 is now 5 (2m25.304842547s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:15:46.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6642" for this suite. 
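The probe test above watches status.containerStatuses[].restartCount climb from 0 to 5 as the kubelet keeps restarting an unhealthy container; the count must increase monotonically and is never reset while the pod exists. A pod whose liveness probe always fails shows the same behaviour; the image and timings are assumptions, the suite uses its own liveness fixture:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo              # hypothetical name
spec:
  containers:
  - name: liveness
    image: busybox                 # assumed image
    command: ["sh", "-c", "sleep 3600"]
    livenessProbe:
      exec:
        command: ["/bin/false"]    # always fails, so the kubelet keeps restarting
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
# The restart count only ever goes up (with increasing backoff between restarts):
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'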
Feb 15 13:15:52.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:15:52.350: INFO: namespace container-probe-6642 deletion completed in 6.122313988s • [SLOW TEST:159.653 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:15:52.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-1590ca73-77c7-4e03-a547-a30fdd5a74f4 in namespace container-probe-6526 Feb 15 13:16:02.459: INFO: Started pod busybox-1590ca73-77c7-4e03-a547-a30fdd5a74f4 in namespace container-probe-6526 STEP: checking the pod's current state and verifying that restartCount is present Feb 15 13:16:02.464: INFO: Initial restart count of pod busybox-1590ca73-77c7-4e03-a547-a30fdd5a74f4 is 0 Feb 15 13:16:52.881: INFO: Restart count of pod container-probe-6526/busybox-1590ca73-77c7-4e03-a547-a30fdd5a74f4 is now 1 (50.41737674s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:16:52.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6526" for this suite. 
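The test above is the classic file-based liveness check: the container creates /tmp/health, the exec probe cats it, and once the file is removed the probe fails and the kubelet restarts the container, which is the restart count of 1 seen after roughly 50 seconds. A minimal reproduction, with image and timings assumed:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo         # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox                 # assumed image
    # Healthy for 30s, then the probe's target file disappears:
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF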
Feb 15 13:16:59.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:16:59.155: INFO: namespace container-probe-6526 deletion completed in 6.227853661s • [SLOW TEST:66.805 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:16:59.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Feb 15 13:16:59.222: INFO: Waiting up to 5m0s for pod "var-expansion-9c8189cf-81b6-4531-b8db-236f23c8536c" in namespace "var-expansion-7434" to be "success or failure" Feb 15 13:16:59.225: INFO: Pod "var-expansion-9c8189cf-81b6-4531-b8db-236f23c8536c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.68332ms Feb 15 13:17:01.232: INFO: Pod "var-expansion-9c8189cf-81b6-4531-b8db-236f23c8536c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0092269s Feb 15 13:17:03.240: INFO: Pod "var-expansion-9c8189cf-81b6-4531-b8db-236f23c8536c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017876475s Feb 15 13:17:05.247: INFO: Pod "var-expansion-9c8189cf-81b6-4531-b8db-236f23c8536c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025074218s Feb 15 13:17:07.255: INFO: Pod "var-expansion-9c8189cf-81b6-4531-b8db-236f23c8536c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.032292376s STEP: Saw pod success Feb 15 13:17:07.255: INFO: Pod "var-expansion-9c8189cf-81b6-4531-b8db-236f23c8536c" satisfied condition "success or failure" Feb 15 13:17:07.259: INFO: Trying to get logs from node iruya-node pod var-expansion-9c8189cf-81b6-4531-b8db-236f23c8536c container dapi-container: STEP: delete the pod Feb 15 13:17:07.312: INFO: Waiting for pod var-expansion-9c8189cf-81b6-4531-b8db-236f23c8536c to disappear Feb 15 13:17:07.322: INFO: Pod var-expansion-9c8189cf-81b6-4531-b8db-236f23c8536c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:17:07.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7434" for this suite. 
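The Variable Expansion test composes environment variables with the $(VAR) syntax, which the kubelet resolves at container start from variables defined earlier in the same env list. A sketch, with names and values illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                 # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"   # expands to prefix-foo-value-suffix
EOF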
Feb 15 13:17:13.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:17:13.490: INFO: namespace var-expansion-7434 deletion completed in 6.159222539s • [SLOW TEST:14.334 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:17:13.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Feb 15 13:17:13.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5347' Feb 15 13:17:16.629: INFO: stderr: "" Feb 15 13:17:16.630: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 15 13:17:16.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5347' Feb 15 13:17:16.785: INFO: stderr: "" Feb 15 13:17:16.786: INFO: stdout: "update-demo-nautilus-lplmt update-demo-nautilus-z4zk4 " Feb 15 13:17:16.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lplmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5347' Feb 15 13:17:16.943: INFO: stderr: "" Feb 15 13:17:16.943: INFO: stdout: "" Feb 15 13:17:16.943: INFO: update-demo-nautilus-lplmt is created but not running Feb 15 13:17:21.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5347' Feb 15 13:17:22.085: INFO: stderr: "" Feb 15 13:17:22.085: INFO: stdout: "update-demo-nautilus-lplmt update-demo-nautilus-z4zk4 " Feb 15 13:17:22.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lplmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5347' Feb 15 13:17:23.166: INFO: stderr: "" Feb 15 13:17:23.166: INFO: stdout: "" Feb 15 13:17:23.166: INFO: update-demo-nautilus-lplmt is created but not running Feb 15 13:17:28.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5347' Feb 15 13:17:28.373: INFO: stderr: "" Feb 15 13:17:28.373: INFO: stdout: "update-demo-nautilus-lplmt update-demo-nautilus-z4zk4 " Feb 15 13:17:28.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lplmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5347' Feb 15 13:17:28.500: INFO: stderr: "" Feb 15 13:17:28.500: INFO: stdout: "true" Feb 15 13:17:28.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lplmt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5347' Feb 15 13:17:28.614: INFO: stderr: "" Feb 15 13:17:28.614: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 13:17:28.614: INFO: validating pod update-demo-nautilus-lplmt Feb 15 13:17:28.627: INFO: got data: { "image": "nautilus.jpg" } Feb 15 13:17:28.627: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 13:17:28.627: INFO: update-demo-nautilus-lplmt is verified up and running Feb 15 13:17:28.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4zk4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5347' Feb 15 13:17:28.735: INFO: stderr: "" Feb 15 13:17:28.735: INFO: stdout: "true" Feb 15 13:17:28.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4zk4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5347' Feb 15 13:17:28.824: INFO: stderr: "" Feb 15 13:17:28.824: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 13:17:28.824: INFO: validating pod update-demo-nautilus-z4zk4 Feb 15 13:17:28.867: INFO: got data: { "image": "nautilus.jpg" } Feb 15 13:17:28.867: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 13:17:28.867: INFO: update-demo-nautilus-z4zk4 is verified up and running STEP: scaling down the replication controller Feb 15 13:17:28.870: INFO: scanned /root for discovery docs: Feb 15 13:17:28.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5347' Feb 15 13:17:30.069: INFO: stderr: "" Feb 15 13:17:30.069: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 15 13:17:30.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5347' Feb 15 13:17:30.286: INFO: stderr: "" Feb 15 13:17:30.287: INFO: stdout: "update-demo-nautilus-lplmt update-demo-nautilus-z4zk4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 15 13:17:35.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5347' Feb 15 13:17:35.539: INFO: stderr: "" Feb 15 13:17:35.539: INFO: stdout: "update-demo-nautilus-lplmt update-demo-nautilus-z4zk4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 15 13:17:40.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5347' Feb 15 13:17:40.679: INFO: stderr: "" Feb 15 13:17:40.680: INFO: stdout: "update-demo-nautilus-z4zk4 " Feb 15 13:17:40.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4zk4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5347' Feb 15 13:17:40.814: INFO: stderr: "" Feb 15 13:17:40.814: INFO: stdout: "true" Feb 15 13:17:40.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4zk4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5347' Feb 15 13:17:40.929: INFO: stderr: "" Feb 15 13:17:40.929: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 13:17:40.930: INFO: validating pod update-demo-nautilus-z4zk4 Feb 15 13:17:40.941: INFO: got data: { "image": "nautilus.jpg" } Feb 15 13:17:40.941: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 13:17:40.941: INFO: update-demo-nautilus-z4zk4 is verified up and running STEP: scaling up the replication controller Feb 15 13:17:40.944: INFO: scanned /root for discovery docs: Feb 15 13:17:40.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5347' Feb 15 13:17:42.208: INFO: stderr: "" Feb 15 13:17:42.208: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 15 13:17:42.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5347' Feb 15 13:17:42.347: INFO: stderr: "" Feb 15 13:17:42.347: INFO: stdout: "update-demo-nautilus-f8ld7 update-demo-nautilus-z4zk4 " Feb 15 13:17:42.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8ld7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5347' Feb 15 13:17:42.728: INFO: stderr: "" Feb 15 13:17:42.728: INFO: stdout: "" Feb 15 13:17:42.728: INFO: update-demo-nautilus-f8ld7 is created but not running Feb 15 13:17:47.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5347' Feb 15 13:17:47.960: INFO: stderr: "" Feb 15 13:17:47.960: INFO: stdout: "update-demo-nautilus-f8ld7 update-demo-nautilus-z4zk4 " Feb 15 13:17:47.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8ld7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5347' Feb 15 13:17:48.116: INFO: stderr: "" Feb 15 13:17:48.116: INFO: stdout: "true" Feb 15 13:17:48.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8ld7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5347' Feb 15 13:17:48.194: INFO: stderr: "" Feb 15 13:17:48.194: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 13:17:48.194: INFO: validating pod update-demo-nautilus-f8ld7 Feb 15 13:17:48.202: INFO: got data: { "image": "nautilus.jpg" } Feb 15 13:17:48.202: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 13:17:48.202: INFO: update-demo-nautilus-f8ld7 is verified up and running Feb 15 13:17:48.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4zk4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5347' Feb 15 13:17:48.333: INFO: stderr: "" Feb 15 13:17:48.333: INFO: stdout: "true" Feb 15 13:17:48.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4zk4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5347' Feb 15 13:17:48.451: INFO: stderr: "" Feb 15 13:17:48.451: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 13:17:48.452: INFO: validating pod update-demo-nautilus-z4zk4 Feb 15 13:17:48.465: INFO: got data: { "image": "nautilus.jpg" } Feb 15 13:17:48.465: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 13:17:48.465: INFO: update-demo-nautilus-z4zk4 is verified up and running STEP: using delete to clean up resources Feb 15 13:17:48.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5347' Feb 15 13:17:48.646: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 15 13:17:48.647: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 15 13:17:48.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5347' Feb 15 13:17:48.729: INFO: stderr: "No resources found.\n" Feb 15 13:17:48.729: INFO: stdout: "" Feb 15 13:17:48.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5347 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 15 13:17:48.819: INFO: stderr: "" Feb 15 13:17:48.820: INFO: stdout: "update-demo-nautilus-f8ld7\nupdate-demo-nautilus-z4zk4\n" Feb 15 13:17:49.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5347' Feb 15 13:17:49.511: INFO: stderr: "No resources found.\n" Feb 15 13:17:49.512: INFO: stdout: "" Feb 15 13:17:49.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5347 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 15 13:17:49.631: INFO: stderr: "" Feb 15 13:17:49.631: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:17:49.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5347" for this suite. Feb 15 13:18:11.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:18:11.810: INFO: namespace kubectl-5347 deletion completed in 22.173115239s • [SLOW TEST:58.320 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:18:11.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-6xzg STEP: Creating a pod to test atomic-volume-subpath Feb 15 13:18:11.893: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6xzg" in namespace 
"subpath-2437" to be "success or failure" Feb 15 13:18:11.898: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.545027ms Feb 15 13:18:13.914: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021019265s Feb 15 13:18:15.922: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028745389s Feb 15 13:18:17.931: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037740284s Feb 15 13:18:19.942: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Running", Reason="", readiness=true. Elapsed: 8.049351238s Feb 15 13:18:21.951: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Running", Reason="", readiness=true. Elapsed: 10.058056886s Feb 15 13:18:23.964: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Running", Reason="", readiness=true. Elapsed: 12.07091138s Feb 15 13:18:25.972: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Running", Reason="", readiness=true. Elapsed: 14.078898328s Feb 15 13:18:27.980: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Running", Reason="", readiness=true. Elapsed: 16.08662377s Feb 15 13:18:29.987: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Running", Reason="", readiness=true. Elapsed: 18.094357748s Feb 15 13:18:31.997: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Running", Reason="", readiness=true. Elapsed: 20.103857392s Feb 15 13:18:34.050: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Running", Reason="", readiness=true. Elapsed: 22.157502444s Feb 15 13:18:36.059: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Running", Reason="", readiness=true. Elapsed: 24.166207446s Feb 15 13:18:38.068: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Running", Reason="", readiness=true. Elapsed: 26.174969212s Feb 15 13:18:40.082: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Running", Reason="", readiness=true. Elapsed: 28.18864067s Feb 15 13:18:42.089: INFO: Pod "pod-subpath-test-configmap-6xzg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.196345348s STEP: Saw pod success Feb 15 13:18:42.090: INFO: Pod "pod-subpath-test-configmap-6xzg" satisfied condition "success or failure" Feb 15 13:18:42.095: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-6xzg container test-container-subpath-configmap-6xzg: STEP: delete the pod Feb 15 13:18:42.212: INFO: Waiting for pod pod-subpath-test-configmap-6xzg to disappear Feb 15 13:18:42.237: INFO: Pod pod-subpath-test-configmap-6xzg no longer exists STEP: Deleting pod pod-subpath-test-configmap-6xzg Feb 15 13:18:42.238: INFO: Deleting pod "pod-subpath-test-configmap-6xzg" in namespace "subpath-2437" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:18:42.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2437" for this suite. 
Feb 15 13:18:48.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:18:48.450: INFO: namespace subpath-2437 deletion completed in 6.199986307s • [SLOW TEST:36.640 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:18:48.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-p82m STEP: Creating a pod to test atomic-volume-subpath Feb 15 13:18:48.587: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p82m" in namespace "subpath-1759" to be "success or failure" Feb 15 13:18:48.592: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Pending", Reason="", readiness=false. Elapsed: 5.45215ms Feb 15 13:18:50.607: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019620319s Feb 15 13:18:52.623: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035730271s Feb 15 13:18:54.633: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046284996s Feb 15 13:18:56.649: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Running", Reason="", readiness=true. Elapsed: 8.061704392s Feb 15 13:18:58.657: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Running", Reason="", readiness=true. Elapsed: 10.070502566s Feb 15 13:19:00.665: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Running", Reason="", readiness=true. Elapsed: 12.078465707s Feb 15 13:19:02.672: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Running", Reason="", readiness=true. Elapsed: 14.085071133s Feb 15 13:19:04.730: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Running", Reason="", readiness=true. Elapsed: 16.143449919s Feb 15 13:19:06.743: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Running", Reason="", readiness=true. Elapsed: 18.156200237s Feb 15 13:19:08.756: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Running", Reason="", readiness=true. Elapsed: 20.169465085s Feb 15 13:19:10.766: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.178743233s Feb 15 13:19:12.773: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Running", Reason="", readiness=true. Elapsed: 24.18607536s Feb 15 13:19:14.822: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Running", Reason="", readiness=true. Elapsed: 26.234918359s Feb 15 13:19:16.834: INFO: Pod "pod-subpath-test-configmap-p82m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.246933913s STEP: Saw pod success Feb 15 13:19:16.834: INFO: Pod "pod-subpath-test-configmap-p82m" satisfied condition "success or failure" Feb 15 13:19:16.838: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-p82m container test-container-subpath-configmap-p82m: STEP: delete the pod Feb 15 13:19:16.912: INFO: Waiting for pod pod-subpath-test-configmap-p82m to disappear Feb 15 13:19:16.920: INFO: Pod pod-subpath-test-configmap-p82m no longer exists STEP: Deleting pod pod-subpath-test-configmap-p82m Feb 15 13:19:16.920: INFO: Deleting pod "pod-subpath-test-configmap-p82m" in namespace "subpath-1759" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:19:16.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1759" for this suite. Feb 15 13:19:22.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:19:23.160: INFO: namespace subpath-1759 deletion completed in 6.231039353s • [SLOW TEST:34.708 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:19:23.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-65650c0d-5e3c-4e59-ba03-b28b89c4bd5f STEP: Creating a pod to test consume secrets Feb 15 13:19:23.316: INFO: Waiting up to 5m0s for pod "pod-secrets-d6d2f567-7535-454a-9102-36b8ab433021" in namespace "secrets-4939" to be "success or failure" Feb 15 13:19:23.338: INFO: Pod "pod-secrets-d6d2f567-7535-454a-9102-36b8ab433021": Phase="Pending", Reason="", readiness=false. Elapsed: 21.721296ms Feb 15 13:19:25.359: INFO: Pod "pod-secrets-d6d2f567-7535-454a-9102-36b8ab433021": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042706815s Feb 15 13:19:27.370: INFO: Pod "pod-secrets-d6d2f567-7535-454a-9102-36b8ab433021": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.053259835s Feb 15 13:19:29.387: INFO: Pod "pod-secrets-d6d2f567-7535-454a-9102-36b8ab433021": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070865145s Feb 15 13:19:31.398: INFO: Pod "pod-secrets-d6d2f567-7535-454a-9102-36b8ab433021": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081136801s STEP: Saw pod success Feb 15 13:19:31.398: INFO: Pod "pod-secrets-d6d2f567-7535-454a-9102-36b8ab433021" satisfied condition "success or failure" Feb 15 13:19:31.402: INFO: Trying to get logs from node iruya-node pod pod-secrets-d6d2f567-7535-454a-9102-36b8ab433021 container secret-env-test: STEP: delete the pod Feb 15 13:19:31.459: INFO: Waiting for pod pod-secrets-d6d2f567-7535-454a-9102-36b8ab433021 to disappear Feb 15 13:19:31.481: INFO: Pod pod-secrets-d6d2f567-7535-454a-9102-36b8ab433021 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:19:31.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4939" for this suite. Feb 15 13:19:37.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:19:37.740: INFO: namespace secrets-4939 deletion completed in 6.251375558s • [SLOW TEST:14.579 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:19:37.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0215 13:20:08.531749 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
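For reference, the Secrets test that concluded above injects a secret key through env[].valueFrom.secretKeyRef rather than through a volume. By hand, with names assumed:

kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox                 # assumed image
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-demo
          key: data-1
EOF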
Feb 15 13:20:08.532: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:20:08.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8849" for this suite. Feb 15 13:20:16.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:20:17.673: INFO: namespace gc-8849 deletion completed in 9.127908183s • [SLOW TEST:39.934 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:20:17.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 15 13:20:17.827: INFO: Waiting up to 5m0s for pod "downward-api-e5d0aa16-26db-46db-9074-e7977a9cb0f7" in namespace "downward-api-2922" to be "success or failure" Feb 15 13:20:17.839: INFO: Pod "downward-api-e5d0aa16-26db-46db-9074-e7977a9cb0f7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.03032ms Feb 15 13:20:19.851: INFO: Pod "downward-api-e5d0aa16-26db-46db-9074-e7977a9cb0f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02339668s Feb 15 13:20:21.867: INFO: Pod "downward-api-e5d0aa16-26db-46db-9074-e7977a9cb0f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03999823s Feb 15 13:20:23.898: INFO: Pod "downward-api-e5d0aa16-26db-46db-9074-e7977a9cb0f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070607028s Feb 15 13:20:25.906: INFO: Pod "downward-api-e5d0aa16-26db-46db-9074-e7977a9cb0f7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.078806221s STEP: Saw pod success Feb 15 13:20:25.906: INFO: Pod "downward-api-e5d0aa16-26db-46db-9074-e7977a9cb0f7" satisfied condition "success or failure" Feb 15 13:20:25.910: INFO: Trying to get logs from node iruya-node pod downward-api-e5d0aa16-26db-46db-9074-e7977a9cb0f7 container dapi-container: STEP: delete the pod Feb 15 13:20:25.995: INFO: Waiting for pod downward-api-e5d0aa16-26db-46db-9074-e7977a9cb0f7 to disappear Feb 15 13:20:26.006: INFO: Pod downward-api-e5d0aa16-26db-46db-9074-e7977a9cb0f7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:20:26.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2922" for this suite. Feb 15 13:20:32.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:20:32.178: INFO: namespace downward-api-2922 deletion completed in 6.14134991s • [SLOW TEST:14.504 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:20:32.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 15 13:20:32.290: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab4e103b-ab3c-4ad0-b021-1f591f16ff2c" in namespace "downward-api-8229" to be "success or failure" Feb 15 13:20:32.305: INFO: Pod "downwardapi-volume-ab4e103b-ab3c-4ad0-b021-1f591f16ff2c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.241575ms Feb 15 13:20:34.316: INFO: Pod "downwardapi-volume-ab4e103b-ab3c-4ad0-b021-1f591f16ff2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025568044s Feb 15 13:20:36.359: INFO: Pod "downwardapi-volume-ab4e103b-ab3c-4ad0-b021-1f591f16ff2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068303615s Feb 15 13:20:38.384: INFO: Pod "downwardapi-volume-ab4e103b-ab3c-4ad0-b021-1f591f16ff2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093961588s Feb 15 13:20:40.393: INFO: Pod "downwardapi-volume-ab4e103b-ab3c-4ad0-b021-1f591f16ff2c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.102729487s STEP: Saw pod success Feb 15 13:20:40.393: INFO: Pod "downwardapi-volume-ab4e103b-ab3c-4ad0-b021-1f591f16ff2c" satisfied condition "success or failure" Feb 15 13:20:40.398: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ab4e103b-ab3c-4ad0-b021-1f591f16ff2c container client-container: STEP: delete the pod Feb 15 13:20:40.538: INFO: Waiting for pod downwardapi-volume-ab4e103b-ab3c-4ad0-b021-1f591f16ff2c to disappear Feb 15 13:20:40.548: INFO: Pod downwardapi-volume-ab4e103b-ab3c-4ad0-b021-1f591f16ff2c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:20:40.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8229" for this suite. Feb 15 13:20:46.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:20:46.809: INFO: namespace downward-api-8229 deletion completed in 6.202705564s • [SLOW TEST:14.631 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:20:46.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-a7ed3316-b2cf-4a72-b20f-6b967e5ceae9 STEP: Creating a pod to test consume configMaps Feb 15 13:20:46.955: INFO: Waiting up to 5m0s for pod "pod-configmaps-c98c1adc-baf9-4b1b-a66b-c6c8db907b65" in namespace "configmap-1105" to be "success or failure" Feb 15 13:20:46.963: INFO: Pod "pod-configmaps-c98c1adc-baf9-4b1b-a66b-c6c8db907b65": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034806ms Feb 15 13:20:48.972: INFO: Pod "pod-configmaps-c98c1adc-baf9-4b1b-a66b-c6c8db907b65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016568664s Feb 15 13:20:50.982: INFO: Pod "pod-configmaps-c98c1adc-baf9-4b1b-a66b-c6c8db907b65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027050065s Feb 15 13:20:53.111: INFO: Pod "pod-configmaps-c98c1adc-baf9-4b1b-a66b-c6c8db907b65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155681886s Feb 15 13:20:55.119: INFO: Pod "pod-configmaps-c98c1adc-baf9-4b1b-a66b-c6c8db907b65": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.163485029s STEP: Saw pod success Feb 15 13:20:55.119: INFO: Pod "pod-configmaps-c98c1adc-baf9-4b1b-a66b-c6c8db907b65" satisfied condition "success or failure" Feb 15 13:20:55.122: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c98c1adc-baf9-4b1b-a66b-c6c8db907b65 container configmap-volume-test: STEP: delete the pod Feb 15 13:20:55.165: INFO: Waiting for pod pod-configmaps-c98c1adc-baf9-4b1b-a66b-c6c8db907b65 to disappear Feb 15 13:20:55.170: INFO: Pod pod-configmaps-c98c1adc-baf9-4b1b-a66b-c6c8db907b65 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:20:55.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1105" for this suite. Feb 15 13:21:01.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:21:01.363: INFO: namespace configmap-1105 deletion completed in 6.18872566s • [SLOW TEST:14.554 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:21:01.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 15 13:21:08.335: INFO: 10 pods remaining Feb 15 13:21:08.335: INFO: 10 pods has nil DeletionTimestamp Feb 15 13:21:08.335: INFO: Feb 15 13:21:09.129: INFO: 0 pods remaining Feb 15 13:21:09.129: INFO: 0 pods has nil DeletionTimestamp Feb 15 13:21:09.129: INFO: STEP: Gathering metrics W0215 13:21:09.674144 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
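Both garbage-collector tests in this stretch turn on DeleteOptions.propagationPolicy: Orphan leaves the dependents (the ReplicaSet) behind when the owner is deleted, while Foreground keeps the owner (the rc here) around until all of its pods are gone. Against a local kubectl proxy the policy can be set explicitly on the API call; the resource name and namespace below are assumptions:

kubectl proxy --port=8001 &
curl -X DELETE 'http://localhost:8001/apis/apps/v1/namespaces/default/deployments/demo' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'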
Feb 15 13:21:09.674: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 13:21:09.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1807" for this suite.
Feb 15 13:21:21.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 13:21:22.094: INFO: namespace gc-1807 deletion completed in 12.415531681s
• [SLOW TEST:20.730 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 13:21:22.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8876
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-8876
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8876
Feb 15 13:21:22.228: INFO: Found 0 stateful pods, waiting for 1
Feb 15 13:21:32.237: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 15 13:21:32.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 --
/bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 13:21:33.022: INFO: stderr: "I0215 13:21:32.501842 878 log.go:172] (0xc0009360b0) (0xc000a6a640) Create stream\nI0215 13:21:32.502265 878 log.go:172] (0xc0009360b0) (0xc000a6a640) Stream added, broadcasting: 1\nI0215 13:21:32.517632 878 log.go:172] (0xc0009360b0) Reply frame received for 1\nI0215 13:21:32.517876 878 log.go:172] (0xc0009360b0) (0xc000a74000) Create stream\nI0215 13:21:32.517946 878 log.go:172] (0xc0009360b0) (0xc000a74000) Stream added, broadcasting: 3\nI0215 13:21:32.519778 878 log.go:172] (0xc0009360b0) Reply frame received for 3\nI0215 13:21:32.519801 878 log.go:172] (0xc0009360b0) (0xc000a6a6e0) Create stream\nI0215 13:21:32.519811 878 log.go:172] (0xc0009360b0) (0xc000a6a6e0) Stream added, broadcasting: 5\nI0215 13:21:32.520939 878 log.go:172] (0xc0009360b0) Reply frame received for 5\nI0215 13:21:32.795276 878 log.go:172] (0xc0009360b0) Data frame received for 5\nI0215 13:21:32.795337 878 log.go:172] (0xc000a6a6e0) (5) Data frame handling\nI0215 13:21:32.795364 878 log.go:172] (0xc000a6a6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0215 13:21:32.859409 878 log.go:172] (0xc0009360b0) Data frame received for 3\nI0215 13:21:32.859467 878 log.go:172] (0xc000a74000) (3) Data frame handling\nI0215 13:21:32.859509 878 log.go:172] (0xc000a74000) (3) Data frame sent\nI0215 13:21:33.009153 878 log.go:172] (0xc0009360b0) Data frame received for 1\nI0215 13:21:33.009394 878 log.go:172] (0xc0009360b0) (0xc000a74000) Stream removed, broadcasting: 3\nI0215 13:21:33.009612 878 log.go:172] (0xc000a6a640) (1) Data frame handling\nI0215 13:21:33.009722 878 log.go:172] (0xc000a6a640) (1) Data frame sent\nI0215 13:21:33.009850 878 log.go:172] (0xc0009360b0) (0xc000a6a6e0) Stream removed, broadcasting: 5\nI0215 13:21:33.009918 878 log.go:172] (0xc0009360b0) (0xc000a6a640) Stream removed, broadcasting: 1\nI0215 13:21:33.010015 878 log.go:172] (0xc0009360b0) Go away received\nI0215 13:21:33.012114 878 log.go:172] (0xc0009360b0) (0xc000a6a640) Stream removed, broadcasting: 1\nI0215 13:21:33.012246 878 log.go:172] (0xc0009360b0) (0xc000a74000) Stream removed, broadcasting: 3\nI0215 13:21:33.012264 878 log.go:172] (0xc0009360b0) (0xc000a6a6e0) Stream removed, broadcasting: 5\n" Feb 15 13:21:33.023: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 13:21:33.023: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 13:21:33.033: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 15 13:21:43.041: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 15 13:21:43.041: INFO: Waiting for statefulset status.replicas updated to 0 Feb 15 13:21:43.067: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 13:21:43.067: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC }] Feb 15 13:21:43.067: INFO: Feb 15 13:21:43.067: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 15 13:21:44.081: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 8.985561066s Feb 15 13:21:45.094: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971288771s Feb 15 13:21:46.103: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.958691627s Feb 15 13:21:47.134: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.94892525s Feb 15 13:21:48.871: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.917738656s Feb 15 13:21:50.468: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.180507926s Feb 15 13:21:51.614: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.584249021s Feb 15 13:21:52.644: INFO: Verifying statefulset ss doesn't scale past 3 for another 437.826784ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8876 Feb 15 13:21:53.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:21:54.636: INFO: stderr: "I0215 13:21:54.005918 899 log.go:172] (0xc000116d10) (0xc00050e640) Create stream\nI0215 13:21:54.006290 899 log.go:172] (0xc000116d10) (0xc00050e640) Stream added, broadcasting: 1\nI0215 13:21:54.017523 899 log.go:172] (0xc000116d10) Reply frame received for 1\nI0215 13:21:54.017652 899 log.go:172] (0xc000116d10) (0xc000532320) Create stream\nI0215 13:21:54.017676 899 log.go:172] (0xc000116d10) (0xc000532320) Stream added, broadcasting: 3\nI0215 13:21:54.019459 899 log.go:172] (0xc000116d10) Reply frame received for 3\nI0215 13:21:54.019557 899 log.go:172] (0xc000116d10) (0xc0005323c0) Create stream\nI0215 13:21:54.019590 899 log.go:172] (0xc000116d10) (0xc0005323c0) Stream added, broadcasting: 5\nI0215 13:21:54.021681 899 log.go:172] (0xc000116d10) Reply frame received for 5\nI0215 13:21:54.334836 899 log.go:172] (0xc000116d10) Data frame received for 3\nI0215 13:21:54.334978 899 log.go:172] (0xc000532320) (3) Data frame handling\nI0215 13:21:54.335067 899 log.go:172] (0xc000532320) (3) Data frame sent\nI0215 13:21:54.335111 899 log.go:172] (0xc000116d10) Data frame received for 5\nI0215 13:21:54.335134 899 log.go:172] (0xc0005323c0) (5) Data frame handling\nI0215 13:21:54.335164 899 log.go:172] (0xc0005323c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0215 13:21:54.628355 899 log.go:172] (0xc000116d10) (0xc000532320) Stream removed, broadcasting: 3\nI0215 13:21:54.628727 899 log.go:172] (0xc000116d10) (0xc0005323c0) Stream removed, broadcasting: 5\nI0215 13:21:54.628767 899 log.go:172] (0xc000116d10) Data frame received for 1\nI0215 13:21:54.628801 899 log.go:172] (0xc00050e640) (1) Data frame handling\nI0215 13:21:54.628818 899 log.go:172] (0xc00050e640) (1) Data frame sent\nI0215 13:21:54.628841 899 log.go:172] (0xc000116d10) (0xc00050e640) Stream removed, broadcasting: 1\nI0215 13:21:54.628862 899 log.go:172] (0xc000116d10) Go away received\nI0215 13:21:54.629689 899 log.go:172] (0xc000116d10) (0xc00050e640) Stream removed, broadcasting: 1\nI0215 13:21:54.629708 899 log.go:172] (0xc000116d10) (0xc000532320) Stream removed, broadcasting: 3\nI0215 13:21:54.629723 899 log.go:172] (0xc000116d10) (0xc0005323c0) Stream removed, broadcasting: 5\n" Feb 15 13:21:54.636: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 13:21:54.636: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> 
'/usr/share/nginx/html/index.html' Feb 15 13:21:54.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:21:55.004: INFO: stderr: "I0215 13:21:54.830943 915 log.go:172] (0xc0006d0160) (0xc0002c0280) Create stream\nI0215 13:21:54.831113 915 log.go:172] (0xc0006d0160) (0xc0002c0280) Stream added, broadcasting: 1\nI0215 13:21:54.834284 915 log.go:172] (0xc0006d0160) Reply frame received for 1\nI0215 13:21:54.834369 915 log.go:172] (0xc0006d0160) (0xc00032a000) Create stream\nI0215 13:21:54.834387 915 log.go:172] (0xc0006d0160) (0xc00032a000) Stream added, broadcasting: 3\nI0215 13:21:54.835502 915 log.go:172] (0xc0006d0160) Reply frame received for 3\nI0215 13:21:54.835529 915 log.go:172] (0xc0006d0160) (0xc0003a4000) Create stream\nI0215 13:21:54.835540 915 log.go:172] (0xc0006d0160) (0xc0003a4000) Stream added, broadcasting: 5\nI0215 13:21:54.836453 915 log.go:172] (0xc0006d0160) Reply frame received for 5\nI0215 13:21:54.925360 915 log.go:172] (0xc0006d0160) Data frame received for 3\nI0215 13:21:54.925405 915 log.go:172] (0xc00032a000) (3) Data frame handling\nI0215 13:21:54.925418 915 log.go:172] (0xc00032a000) (3) Data frame sent\nI0215 13:21:54.925427 915 log.go:172] (0xc0006d0160) Data frame received for 5\nI0215 13:21:54.925433 915 log.go:172] (0xc0003a4000) (5) Data frame handling\nI0215 13:21:54.925439 915 log.go:172] (0xc0003a4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0215 13:21:54.926184 915 log.go:172] (0xc0006d0160) Data frame received for 5\nI0215 13:21:54.926234 915 log.go:172] (0xc0003a4000) (5) Data frame handling\nI0215 13:21:54.926248 915 log.go:172] (0xc0003a4000) (5) Data frame sent\n+ true\nI0215 13:21:54.997337 915 log.go:172] (0xc0006d0160) (0xc00032a000) Stream removed, broadcasting: 3\nI0215 13:21:54.997408 915 log.go:172] (0xc0006d0160) Data frame received for 1\nI0215 13:21:54.997416 915 log.go:172] (0xc0002c0280) (1) Data frame handling\nI0215 13:21:54.997430 915 log.go:172] (0xc0002c0280) (1) Data frame sent\nI0215 13:21:54.997460 915 log.go:172] (0xc0006d0160) (0xc0002c0280) Stream removed, broadcasting: 1\nI0215 13:21:54.997731 915 log.go:172] (0xc0006d0160) (0xc0003a4000) Stream removed, broadcasting: 5\nI0215 13:21:54.997762 915 log.go:172] (0xc0006d0160) (0xc0002c0280) Stream removed, broadcasting: 1\nI0215 13:21:54.997772 915 log.go:172] (0xc0006d0160) (0xc00032a000) Stream removed, broadcasting: 3\nI0215 13:21:54.997780 915 log.go:172] (0xc0006d0160) (0xc0003a4000) Stream removed, broadcasting: 5\n" Feb 15 13:21:55.005: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 13:21:55.005: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 15 13:21:55.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:21:55.476: INFO: stderr: "I0215 13:21:55.221056 931 log.go:172] (0xc0009fa2c0) (0xc0008806e0) Create stream\nI0215 13:21:55.221320 931 log.go:172] (0xc0009fa2c0) (0xc0008806e0) Stream added, broadcasting: 1\nI0215 13:21:55.229532 931 log.go:172] (0xc0009fa2c0) Reply frame received for 1\nI0215 13:21:55.229609 931 log.go:172] (0xc0009fa2c0) (0xc000880780) Create stream\nI0215 
13:21:55.229622 931 log.go:172] (0xc0009fa2c0) (0xc000880780) Stream added, broadcasting: 3\nI0215 13:21:55.230996 931 log.go:172] (0xc0009fa2c0) Reply frame received for 3\nI0215 13:21:55.231026 931 log.go:172] (0xc0009fa2c0) (0xc0005f6280) Create stream\nI0215 13:21:55.231042 931 log.go:172] (0xc0009fa2c0) (0xc0005f6280) Stream added, broadcasting: 5\nI0215 13:21:55.232053 931 log.go:172] (0xc0009fa2c0) Reply frame received for 5\nI0215 13:21:55.354660 931 log.go:172] (0xc0009fa2c0) Data frame received for 5\nI0215 13:21:55.354742 931 log.go:172] (0xc0005f6280) (5) Data frame handling\nI0215 13:21:55.354764 931 log.go:172] (0xc0005f6280) (5) Data frame sent\nI0215 13:21:55.354771 931 log.go:172] (0xc0009fa2c0) Data frame received for 5\nI0215 13:21:55.354779 931 log.go:172] (0xc0005f6280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0215 13:21:55.354826 931 log.go:172] (0xc0005f6280) (5) Data frame sent\nI0215 13:21:55.354839 931 log.go:172] (0xc0009fa2c0) Data frame received for 3\nI0215 13:21:55.354843 931 log.go:172] (0xc000880780) (3) Data frame handling\nI0215 13:21:55.354855 931 log.go:172] (0xc000880780) (3) Data frame sent\nI0215 13:21:55.458652 931 log.go:172] (0xc0009fa2c0) Data frame received for 1\nI0215 13:21:55.458845 931 log.go:172] (0xc0008806e0) (1) Data frame handling\nI0215 13:21:55.458914 931 log.go:172] (0xc0008806e0) (1) Data frame sent\nI0215 13:21:55.463544 931 log.go:172] (0xc0009fa2c0) (0xc0008806e0) Stream removed, broadcasting: 1\nI0215 13:21:55.465651 931 log.go:172] (0xc0009fa2c0) (0xc000880780) Stream removed, broadcasting: 3\nI0215 13:21:55.465896 931 log.go:172] (0xc0009fa2c0) (0xc0005f6280) Stream removed, broadcasting: 5\nI0215 13:21:55.466036 931 log.go:172] (0xc0009fa2c0) Go away received\nI0215 13:21:55.466137 931 log.go:172] (0xc0009fa2c0) (0xc0008806e0) Stream removed, broadcasting: 1\nI0215 13:21:55.466247 931 log.go:172] (0xc0009fa2c0) (0xc000880780) Stream removed, broadcasting: 3\nI0215 13:21:55.466332 931 log.go:172] (0xc0009fa2c0) (0xc0005f6280) Stream removed, broadcasting: 5\n" Feb 15 13:21:55.477: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 15 13:21:55.477: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 15 13:21:55.487: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 15 13:21:55.487: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 15 13:21:55.487: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 15 13:21:55.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 13:21:55.970: INFO: stderr: "I0215 13:21:55.689596 951 log.go:172] (0xc0009cc420) (0xc0008c06e0) Create stream\nI0215 13:21:55.689734 951 log.go:172] (0xc0009cc420) (0xc0008c06e0) Stream added, broadcasting: 1\nI0215 13:21:55.698983 951 log.go:172] (0xc0009cc420) Reply frame received for 1\nI0215 13:21:55.699064 951 log.go:172] (0xc0009cc420) (0xc0005ba140) Create stream\nI0215 13:21:55.699078 951 log.go:172] (0xc0009cc420) (0xc0005ba140) Stream added, broadcasting: 3\nI0215 13:21:55.700546 951 log.go:172] (0xc0009cc420) Reply frame 
received for 3\nI0215 13:21:55.700608 951 log.go:172] (0xc0009cc420) (0xc000918000) Create stream\nI0215 13:21:55.700629 951 log.go:172] (0xc0009cc420) (0xc000918000) Stream added, broadcasting: 5\nI0215 13:21:55.702465 951 log.go:172] (0xc0009cc420) Reply frame received for 5\nI0215 13:21:55.817186 951 log.go:172] (0xc0009cc420) Data frame received for 3\nI0215 13:21:55.817259 951 log.go:172] (0xc0005ba140) (3) Data frame handling\nI0215 13:21:55.817275 951 log.go:172] (0xc0005ba140) (3) Data frame sent\nI0215 13:21:55.817296 951 log.go:172] (0xc0009cc420) Data frame received for 5\nI0215 13:21:55.817303 951 log.go:172] (0xc000918000) (5) Data frame handling\nI0215 13:21:55.817311 951 log.go:172] (0xc000918000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0215 13:21:55.956576 951 log.go:172] (0xc0009cc420) Data frame received for 1\nI0215 13:21:55.956838 951 log.go:172] (0xc0009cc420) (0xc0005ba140) Stream removed, broadcasting: 3\nI0215 13:21:55.956927 951 log.go:172] (0xc0008c06e0) (1) Data frame handling\nI0215 13:21:55.956953 951 log.go:172] (0xc0008c06e0) (1) Data frame sent\nI0215 13:21:55.957105 951 log.go:172] (0xc0009cc420) (0xc000918000) Stream removed, broadcasting: 5\nI0215 13:21:55.957145 951 log.go:172] (0xc0009cc420) (0xc0008c06e0) Stream removed, broadcasting: 1\nI0215 13:21:55.957151 951 log.go:172] (0xc0009cc420) Go away received\nI0215 13:21:55.958320 951 log.go:172] (0xc0009cc420) (0xc0008c06e0) Stream removed, broadcasting: 1\nI0215 13:21:55.958343 951 log.go:172] (0xc0009cc420) (0xc0005ba140) Stream removed, broadcasting: 3\nI0215 13:21:55.958353 951 log.go:172] (0xc0009cc420) (0xc000918000) Stream removed, broadcasting: 5\n" Feb 15 13:21:55.970: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 13:21:55.970: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 13:21:55.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 13:21:56.512: INFO: stderr: "I0215 13:21:56.231491 973 log.go:172] (0xc0008ca420) (0xc0009ae640) Create stream\nI0215 13:21:56.232021 973 log.go:172] (0xc0008ca420) (0xc0009ae640) Stream added, broadcasting: 1\nI0215 13:21:56.237128 973 log.go:172] (0xc0008ca420) Reply frame received for 1\nI0215 13:21:56.237202 973 log.go:172] (0xc0008ca420) (0xc0009ae6e0) Create stream\nI0215 13:21:56.237224 973 log.go:172] (0xc0008ca420) (0xc0009ae6e0) Stream added, broadcasting: 3\nI0215 13:21:56.239388 973 log.go:172] (0xc0008ca420) Reply frame received for 3\nI0215 13:21:56.239476 973 log.go:172] (0xc0008ca420) (0xc000a14000) Create stream\nI0215 13:21:56.239556 973 log.go:172] (0xc0008ca420) (0xc000a14000) Stream added, broadcasting: 5\nI0215 13:21:56.242112 973 log.go:172] (0xc0008ca420) Reply frame received for 5\nI0215 13:21:56.366044 973 log.go:172] (0xc0008ca420) Data frame received for 5\nI0215 13:21:56.366667 973 log.go:172] (0xc000a14000) (5) Data frame handling\nI0215 13:21:56.366791 973 log.go:172] (0xc000a14000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0215 13:21:56.387972 973 log.go:172] (0xc0008ca420) Data frame received for 3\nI0215 13:21:56.388156 973 log.go:172] (0xc0009ae6e0) (3) Data frame handling\nI0215 13:21:56.388212 973 log.go:172] (0xc0009ae6e0) (3) Data frame sent\nI0215 13:21:56.486694 973 log.go:172] (0xc0008ca420) 
(0xc0009ae6e0) Stream removed, broadcasting: 3\nI0215 13:21:56.487860 973 log.go:172] (0xc0008ca420) Data frame received for 1\nI0215 13:21:56.487938 973 log.go:172] (0xc0009ae640) (1) Data frame handling\nI0215 13:21:56.488031 973 log.go:172] (0xc0009ae640) (1) Data frame sent\nI0215 13:21:56.488071 973 log.go:172] (0xc0008ca420) (0xc0009ae640) Stream removed, broadcasting: 1\nI0215 13:21:56.488169 973 log.go:172] (0xc0008ca420) (0xc000a14000) Stream removed, broadcasting: 5\nI0215 13:21:56.488926 973 log.go:172] (0xc0008ca420) Go away received\nI0215 13:21:56.491334 973 log.go:172] (0xc0008ca420) (0xc0009ae640) Stream removed, broadcasting: 1\nI0215 13:21:56.491545 973 log.go:172] (0xc0008ca420) (0xc0009ae6e0) Stream removed, broadcasting: 3\nI0215 13:21:56.491594 973 log.go:172] (0xc0008ca420) (0xc000a14000) Stream removed, broadcasting: 5\n" Feb 15 13:21:56.513: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 13:21:56.513: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 13:21:56.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 15 13:21:56.924: INFO: stderr: "I0215 13:21:56.653335 994 log.go:172] (0xc000142790) (0xc00064e3c0) Create stream\nI0215 13:21:56.653515 994 log.go:172] (0xc000142790) (0xc00064e3c0) Stream added, broadcasting: 1\nI0215 13:21:56.657816 994 log.go:172] (0xc000142790) Reply frame received for 1\nI0215 13:21:56.657840 994 log.go:172] (0xc000142790) (0xc00078c000) Create stream\nI0215 13:21:56.657845 994 log.go:172] (0xc000142790) (0xc00078c000) Stream added, broadcasting: 3\nI0215 13:21:56.659367 994 log.go:172] (0xc000142790) Reply frame received for 3\nI0215 13:21:56.659388 994 log.go:172] (0xc000142790) (0xc000364000) Create stream\nI0215 13:21:56.659393 994 log.go:172] (0xc000142790) (0xc000364000) Stream added, broadcasting: 5\nI0215 13:21:56.660510 994 log.go:172] (0xc000142790) Reply frame received for 5\nI0215 13:21:56.753955 994 log.go:172] (0xc000142790) Data frame received for 5\nI0215 13:21:56.754021 994 log.go:172] (0xc000364000) (5) Data frame handling\nI0215 13:21:56.754047 994 log.go:172] (0xc000364000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0215 13:21:56.791270 994 log.go:172] (0xc000142790) Data frame received for 3\nI0215 13:21:56.791329 994 log.go:172] (0xc00078c000) (3) Data frame handling\nI0215 13:21:56.791345 994 log.go:172] (0xc00078c000) (3) Data frame sent\nI0215 13:21:56.912129 994 log.go:172] (0xc000142790) Data frame received for 1\nI0215 13:21:56.912256 994 log.go:172] (0xc00064e3c0) (1) Data frame handling\nI0215 13:21:56.912312 994 log.go:172] (0xc00064e3c0) (1) Data frame sent\nI0215 13:21:56.912621 994 log.go:172] (0xc000142790) (0xc00064e3c0) Stream removed, broadcasting: 1\nI0215 13:21:56.913259 994 log.go:172] (0xc000142790) (0xc00078c000) Stream removed, broadcasting: 3\nI0215 13:21:56.913318 994 log.go:172] (0xc000142790) (0xc000364000) Stream removed, broadcasting: 5\nI0215 13:21:56.913330 994 log.go:172] (0xc000142790) Go away received\nI0215 13:21:56.913793 994 log.go:172] (0xc000142790) (0xc00064e3c0) Stream removed, broadcasting: 1\nI0215 13:21:56.913920 994 log.go:172] (0xc000142790) (0xc00078c000) Stream removed, broadcasting: 3\nI0215 13:21:56.913941 994 log.go:172] (0xc000142790) (0xc000364000) Stream removed, broadcasting: 
5\n" Feb 15 13:21:56.924: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 15 13:21:56.924: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 15 13:21:56.924: INFO: Waiting for statefulset status.replicas updated to 0 Feb 15 13:21:56.931: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 15 13:22:06.946: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 15 13:22:06.946: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 15 13:22:06.946: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 15 13:22:06.972: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 13:22:06.972: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC }] Feb 15 13:22:06.973: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC }] Feb 15 13:22:06.973: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC }] Feb 15 13:22:06.973: INFO: Feb 15 13:22:06.973: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 13:22:08.943: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 13:22:08.943: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC }] Feb 15 13:22:08.943: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC }] Feb 15 13:22:08.943: INFO: ss-2 iruya-node Running 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC }] Feb 15 13:22:08.943: INFO: Feb 15 13:22:08.943: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 13:22:11.032: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 13:22:11.033: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC }] Feb 15 13:22:11.033: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC }] Feb 15 13:22:11.033: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC }] Feb 15 13:22:11.033: INFO: Feb 15 13:22:11.033: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 13:22:12.052: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 13:22:12.052: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC }] Feb 15 13:22:12.052: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC }] Feb 15 13:22:12.052: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 
UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC }] Feb 15 13:22:12.052: INFO: Feb 15 13:22:12.052: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 13:22:13.170: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 13:22:13.170: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC }] Feb 15 13:22:13.171: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC }] Feb 15 13:22:13.171: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC }] Feb 15 13:22:13.171: INFO: Feb 15 13:22:13.171: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 15 13:22:14.185: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 13:22:14.185: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC }] Feb 15 13:22:14.185: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC }] Feb 15 13:22:14.185: INFO: Feb 15 13:22:14.185: INFO: StatefulSet ss has not reached scale 0, at 2 Feb 15 13:22:15.204: INFO: POD NODE PHASE GRACE CONDITIONS Feb 15 13:22:15.204: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC }]
Feb 15 13:22:15.204: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC }]
Feb 15 13:22:15.204: INFO:
Feb 15 13:22:15.204: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 15 13:22:16.215: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 15 13:22:16.216: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:22 +0000 UTC }]
Feb 15 13:22:16.216: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:21:43 +0000 UTC }]
Feb 15 13:22:16.216: INFO:
Feb 15 13:22:16.216: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8876
Feb 15 13:22:17.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 15 13:22:17.467: INFO: rc: 1
Feb 15 13:22:17.467: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000c1dad0 exit status 1 true [0xc000011cc0 0xc000011e88 0xc0006be1c8] [0xc000011cc0 0xc000011e88 0xc0006be1c8] [0xc000011e48 0xc000011fd8] [0xba6c50 0xba6c50] 0xc002738a80 }:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("nginx")
error: exit status 1
Feb 15 13:22:27.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 15 13:22:27.668: INFO: rc: 1
Feb 15 13:22:27.669: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c1dbc0 exit status 1 true [0xc0006be1e8 0xc0006be330 0xc0006be408] [0xc0006be1e8 0xc0006be330 0xc0006be408] [0xc0006be278 0xc0006be378] [0xba6c50 0xba6c50] 0xc002738de0 }:
Command stdout:
stderr: Error from
server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:22:37.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:22:37.817: INFO: rc: 1 Feb 15 13:22:37.817: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001eb2240 exit status 1 true [0xc0006c4d60 0xc0006c52b0 0xc0006c5598] [0xc0006c4d60 0xc0006c52b0 0xc0006c5598] [0xc0006c50e8 0xc0006c5580] [0xba6c50 0xba6c50] 0xc002568600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:22:47.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:22:47.977: INFO: rc: 1 Feb 15 13:22:47.977: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002252630 exit status 1 true [0xc002d22000 0xc002d22018 0xc002d22030] [0xc002d22000 0xc002d22018 0xc002d22030] [0xc002d22010 0xc002d22028] [0xba6c50 0xba6c50] 0xc001de6ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:22:57.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:22:58.145: INFO: rc: 1 Feb 15 13:22:58.145: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a00c0 exit status 1 true [0xc001cfc018 0xc001cfc060 0xc001cfc0a8] [0xc001cfc018 0xc001cfc060 0xc001cfc0a8] [0xc001cfc048 0xc001cfc090] [0xba6c50 0xba6c50] 0xc001668660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:23:08.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:23:08.331: INFO: rc: 1 Feb 15 13:23:08.331: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c1dcb0 exit status 1 true [0xc0006be448 0xc0006be4f8 0xc0006be670] [0xc0006be448 0xc0006be4f8 0xc0006be670] [0xc0006be498 0xc0006be5d0] [0xba6c50 0xba6c50] 0xc002739140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:23:18.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true' Feb 15 13:23:18.519: INFO: rc: 1 Feb 15 13:23:18.520: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001eb2300 exit status 1 true [0xc0006c55c8 0xc0006c5778 0xc0006c5988] [0xc0006c55c8 0xc0006c5778 0xc0006c5988] [0xc0006c56d0 0xc0006c5958] [0xba6c50 0xba6c50] 0xc002568f60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:23:28.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:23:28.686: INFO: rc: 1 Feb 15 13:23:28.686: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c1dd70 exit status 1 true [0xc0006be6c8 0xc0006be810 0xc0006be928] [0xc0006be6c8 0xc0006be810 0xc0006be928] [0xc0006be778 0xc0006be8f8] [0xba6c50 0xba6c50] 0xc0027396e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:23:38.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:23:38.853: INFO: rc: 1 Feb 15 13:23:38.853: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c1de30 exit status 1 true [0xc0006be958 0xc0006bea10 0xc0006bea88] [0xc0006be958 0xc0006bea10 0xc0006bea88] [0xc0006be9e8 0xc0006bea58] [0xba6c50 0xba6c50] 0xc002739a40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:23:48.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:23:48.977: INFO: rc: 1 Feb 15 13:23:48.977: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a0210 exit status 1 true [0xc001cfc0b0 0xc001cfc140 0xc001cfc198] [0xc001cfc0b0 0xc001cfc140 0xc001cfc198] [0xc001cfc138 0xc001cfc180] [0xba6c50 0xba6c50] 0xc001668ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:23:58.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:23:59.123: INFO: rc: 1 Feb 15 13:23:59.123: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001eb23f0 exit status 1 true [0xc0006c59f8 0xc0006c5be0 0xc0006c5e98] [0xc0006c59f8 0xc0006c5be0 0xc0006c5e98] [0xc0006c5b00 0xc0006c5d78] [0xba6c50 0xba6c50] 0xc002569560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:24:09.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:24:09.275: INFO: rc: 1 Feb 15 13:24:09.275: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a0300 exit status 1 true [0xc001cfc1a8 0xc001cfc1f0 0xc001cfc208] [0xc001cfc1a8 0xc001cfc1f0 0xc001cfc208] [0xc001cfc1d0 0xc001cfc200] [0xba6c50 0xba6c50] 0xc0016694a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:24:19.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:24:19.471: INFO: rc: 1 Feb 15 13:24:19.471: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a0090 exit status 1 true [0xc000011020 0xc0000112f8 0xc000011738] [0xc000011020 0xc0000112f8 0xc000011738] [0xc000011190 0xc0000116e0] [0xba6c50 0xba6c50] 0xc001668660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:24:29.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:24:29.652: INFO: rc: 1 Feb 15 13:24:29.652: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a0180 exit status 1 true [0xc0000117b8 0xc0000119a0 0xc000011dd0] [0xc0000117b8 0xc0000119a0 0xc000011dd0] [0xc0000118e8 0xc000011cc0] [0xba6c50 0xba6c50] 0xc001668ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:24:39.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:24:39.864: INFO: rc: 1 Feb 15 13:24:39.865: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a0270 exit status 1 true [0xc000011e48 0xc000011fd8 0xc001cfc018] [0xc000011e48 0xc000011fd8 0xc001cfc018] [0xc000011f40 0xc0001ac070] [0xba6c50 
0xba6c50] 0xc0016694a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:24:49.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:24:50.034: INFO: rc: 1 Feb 15 13:24:50.034: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a0360 exit status 1 true [0xc001cfc020 0xc001cfc080 0xc001cfc0b0] [0xc001cfc020 0xc001cfc080 0xc001cfc0b0] [0xc001cfc060 0xc001cfc0a8] [0xba6c50 0xba6c50] 0xc001669c80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:25:00.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:25:00.219: INFO: rc: 1 Feb 15 13:25:00.221: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022520f0 exit status 1 true [0xc002d22000 0xc002d22018 0xc002d22030] [0xc002d22000 0xc002d22018 0xc002d22030] [0xc002d22010 0xc002d22028] [0xba6c50 0xba6c50] 0xc001de6780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:25:10.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:25:10.389: INFO: rc: 1 Feb 15 13:25:10.390: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a0450 exit status 1 true [0xc001cfc0d8 0xc001cfc160 0xc001cfc1a8] [0xc001cfc0d8 0xc001cfc160 0xc001cfc1a8] [0xc001cfc140 0xc001cfc198] [0xba6c50 0xba6c50] 0xc002738180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:25:20.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:25:20.604: INFO: rc: 1 Feb 15 13:25:20.604: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c1c0c0 exit status 1 true [0xc0006be1c8 0xc0006be278 0xc0006be378] [0xc0006be1c8 0xc0006be278 0xc0006be378] [0xc0006be218 0xc0006be350] [0xba6c50 0xba6c50] 0xc002568540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 15 13:25:30.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:25:30.769: INFO: rc: 1 Feb 15 13:25:30.770: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002252210 exit status 1 true [0xc002d22038 0xc002d22060 0xc002d22078] [0xc002d22038 0xc002d22060 0xc002d22078] [0xc002d22058 0xc002d22070] [0xba6c50 0xba6c50] 0xc001de7380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
[the identical RunHostCmd retry, each attempt failing with 'Error from server (NotFound): pods "ss-0" not found', repeats every 10s from 13:25:40 through 13:27:12]
Feb 15 13:27:22.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8876 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 15 13:27:24.911: INFO: rc: 1 Feb 15 13:27:24.911: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Feb 15 13:27:24.912: INFO: Scaling statefulset ss to 0 Feb 15 13:27:24.927: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 15 13:27:24.932: INFO: Deleting all statefulset in ns statefulset-8876 Feb 15 13:27:24.934: INFO: Scaling statefulset ss to 0 Feb 15 13:27:24.946: INFO: Waiting for statefulset status.replicas updated to 0 Feb 15 13:27:24.949: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:27:24.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8876" for this suite.
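The two-minute loop above is the framework retrying a kubectl exec against a pod that was already deleted during scale-down. As a rough sketch of that pattern (stdlib only; the helper name and timeout are assumptions, not the e2e framework's actual RunHostCmd implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runHostCmdWithRetry keeps re-running a shell command inside a pod via
    // kubectl exec, sleeping 10s between failures, until success or timeout.
    func runHostCmdWithRetry(ns, pod, cmd string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for {
            out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
                "exec", "--namespace="+ns, pod, "--", "/bin/sh", "-c", cmd).CombinedOutput()
            if err == nil {
                return string(out), nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out: last error %v, output %q", err, out)
            }
            fmt.Printf("Waiting 10s to retry failed RunHostCmd: %v\n", err)
            time.Sleep(10 * time.Second)
        }
    }

    func main() {
        out, err := runHostCmdWithRetry("statefulset-8876", "ss-0",
            "mv -v /tmp/index.html /usr/share/nginx/html/ || true", 2*time.Minute)
        fmt.Println(out, err)
    }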
Feb 15 13:27:30.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:27:31.082: INFO: namespace statefulset-8876 deletion completed in 6.11146958s • [SLOW TEST:368.988 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:27:31.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-b4069594-d0df-4bc6-be09-2594c2474b4e STEP: Creating a pod to test consume secrets Feb 15 13:27:31.220: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6267c80b-00d3-4b24-8596-1c0b8c244a3c" in namespace "projected-9324" to be "success or failure" Feb 15 13:27:31.230: INFO: Pod "pod-projected-secrets-6267c80b-00d3-4b24-8596-1c0b8c244a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.176487ms Feb 15 13:27:33.244: INFO: Pod "pod-projected-secrets-6267c80b-00d3-4b24-8596-1c0b8c244a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024731631s Feb 15 13:27:35.257: INFO: Pod "pod-projected-secrets-6267c80b-00d3-4b24-8596-1c0b8c244a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037772624s Feb 15 13:27:37.270: INFO: Pod "pod-projected-secrets-6267c80b-00d3-4b24-8596-1c0b8c244a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05039285s Feb 15 13:27:39.278: INFO: Pod "pod-projected-secrets-6267c80b-00d3-4b24-8596-1c0b8c244a3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05866073s STEP: Saw pod success Feb 15 13:27:39.279: INFO: Pod "pod-projected-secrets-6267c80b-00d3-4b24-8596-1c0b8c244a3c" satisfied condition "success or failure" Feb 15 13:27:39.282: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-6267c80b-00d3-4b24-8596-1c0b8c244a3c container projected-secret-volume-test: STEP: delete the pod Feb 15 13:27:39.375: INFO: Waiting for pod pod-projected-secrets-6267c80b-00d3-4b24-8596-1c0b8c244a3c to disappear Feb 15 13:27:39.384: INFO: Pod pod-projected-secrets-6267c80b-00d3-4b24-8596-1c0b8c244a3c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:27:39.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9324" for this suite. 
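For reference, the pod this projected-secret test builds looks roughly like the following client-go struct literal. It is a sketch: the names, image, command and file mode are illustrative; the point is the projected volume with an explicit key-to-path mapping.

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func projectedSecretPod() *corev1.Pod {
        mode := int32(0400) // file mode for the projected key
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "projected-secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/projected/new-path-data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "projected-secret-volume", MountPath: "/etc/projected",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "projected-secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                                    // remap key "data-1" onto a different file name
                                    Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }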
Feb 15 13:27:45.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:27:45.730: INFO: namespace projected-9324 deletion completed in 6.26229408s • [SLOW TEST:14.648 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:27:45.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 15 13:27:45.897: INFO: Waiting up to 5m0s for pod "pod-9f27a611-6ec3-4886-a022-47b8c4167372" in namespace "emptydir-4188" to be "success or failure" Feb 15 13:27:45.908: INFO: Pod "pod-9f27a611-6ec3-4886-a022-47b8c4167372": Phase="Pending", Reason="", readiness=false. Elapsed: 10.740102ms Feb 15 13:27:47.919: INFO: Pod "pod-9f27a611-6ec3-4886-a022-47b8c4167372": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021595354s Feb 15 13:27:49.927: INFO: Pod "pod-9f27a611-6ec3-4886-a022-47b8c4167372": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030263648s Feb 15 13:27:51.944: INFO: Pod "pod-9f27a611-6ec3-4886-a022-47b8c4167372": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047234563s Feb 15 13:27:53.953: INFO: Pod "pod-9f27a611-6ec3-4886-a022-47b8c4167372": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056416581s STEP: Saw pod success Feb 15 13:27:53.954: INFO: Pod "pod-9f27a611-6ec3-4886-a022-47b8c4167372" satisfied condition "success or failure" Feb 15 13:27:53.958: INFO: Trying to get logs from node iruya-node pod pod-9f27a611-6ec3-4886-a022-47b8c4167372 container test-container: STEP: delete the pod Feb 15 13:27:54.012: INFO: Waiting for pod pod-9f27a611-6ec3-4886-a022-47b8c4167372 to disappear Feb 15 13:27:54.020: INFO: Pod pod-9f27a611-6ec3-4886-a022-47b8c4167372 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:27:54.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4188" for this suite. 
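The "0666 on tmpfs" case boils down to an EmptyDir volume with the memory medium. A minimal sketch of that pod shape (image and command are placeholders; the suite uses its own mounttest image to verify the 0666 mode):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func emptyDirTmpfsPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "Memory" makes the kubelet back the volume with tmpfs.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
            },
        }
    }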
Feb 15 13:28:00.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:28:00.293: INFO: namespace emptydir-4188 deletion completed in 6.25222402s • [SLOW TEST:14.562 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:28:00.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 15 13:28:00.362: INFO: Waiting up to 5m0s for pod "pod-b71bf6be-d1f7-40d0-b92b-b550e45722d3" in namespace "emptydir-7099" to be "success or failure" Feb 15 13:28:00.429: INFO: Pod "pod-b71bf6be-d1f7-40d0-b92b-b550e45722d3": Phase="Pending", Reason="", readiness=false. Elapsed: 67.167415ms Feb 15 13:28:02.439: INFO: Pod "pod-b71bf6be-d1f7-40d0-b92b-b550e45722d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076764165s Feb 15 13:28:04.452: INFO: Pod "pod-b71bf6be-d1f7-40d0-b92b-b550e45722d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090164694s Feb 15 13:28:06.465: INFO: Pod "pod-b71bf6be-d1f7-40d0-b92b-b550e45722d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102968251s Feb 15 13:28:08.482: INFO: Pod "pod-b71bf6be-d1f7-40d0-b92b-b550e45722d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.120063779s STEP: Saw pod success Feb 15 13:28:08.482: INFO: Pod "pod-b71bf6be-d1f7-40d0-b92b-b550e45722d3" satisfied condition "success or failure" Feb 15 13:28:08.492: INFO: Trying to get logs from node iruya-node pod pod-b71bf6be-d1f7-40d0-b92b-b550e45722d3 container test-container: STEP: delete the pod Feb 15 13:28:10.037: INFO: Waiting for pod pod-b71bf6be-d1f7-40d0-b92b-b550e45722d3 to disappear Feb 15 13:28:10.055: INFO: Pod pod-b71bf6be-d1f7-40d0-b92b-b550e45722d3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:28:10.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7099" for this suite. 
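The (non-root,0666,default) variant changes only two things relative to the tmpfs sketch above: the volume medium falls back to the node default and the container runs as a non-root UID. Assuming the same pod spec as before (the UID here is an arbitrary example, not the suite's fixed value):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    func nonRootDefaultMedium(spec *corev1.PodSpec) {
        uid := int64(1001) // any non-root UID
        spec.Containers[0].SecurityContext = &corev1.SecurityContext{RunAsUser: &uid}
        spec.Volumes[0].VolumeSource.EmptyDir = &corev1.EmptyDirVolumeSource{
            Medium: corev1.StorageMediumDefault, // "" - whatever backs the node's filesystem
        }
    }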
Feb 15 13:28:16.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:28:16.227: INFO: namespace emptydir-7099 deletion completed in 6.162887199s • [SLOW TEST:15.933 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:28:16.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-4b49bc0c-7334-4087-9e60-f0708becf047 in namespace container-probe-6855 Feb 15 13:28:26.352: INFO: Started pod test-webserver-4b49bc0c-7334-4087-9e60-f0708becf047 in namespace container-probe-6855 STEP: checking the pod's current state and verifying that restartCount is present Feb 15 13:28:26.356: INFO: Initial restart count of pod test-webserver-4b49bc0c-7334-4087-9e60-f0708becf047 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:32:27.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6855" for this suite. 
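The probe wiring under test is an HTTP GET liveness check that keeps returning 200, so the restart count observed over the four-minute window must stay 0. A hedged sketch: port, delay and image are assumptions, and the image must actually serve /healthz (as the suite's test-webserver does); note the Probe field is named Handler in the client-go vintage matching this suite, renamed ProbeHandler in newer releases.

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func webserverWithLiveness() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "test-webserver",
                    Image: "some-webserver-image", // must answer 200 on /healthz
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
                        },
                        InitialDelaySeconds: 15,
                        FailureThreshold:    3,
                    },
                }},
            },
        }
    }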
Feb 15 13:32:33.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:32:33.532: INFO: namespace container-probe-6855 deletion completed in 6.159416526s • [SLOW TEST:257.305 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:32:33.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-8e6c2bce-0441-423c-b7b0-2c74a7e65f29 STEP: Creating configMap with name cm-test-opt-upd-f091ea1d-8226-46f0-bc80-05f75be1ed2a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8e6c2bce-0441-423c-b7b0-2c74a7e65f29 STEP: Updating configmap cm-test-opt-upd-f091ea1d-8226-46f0-bc80-05f75be1ed2a STEP: Creating configMap with name cm-test-opt-create-b65e54a5-2d6e-4064-9d5b-e9abd3df2200 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:32:48.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9542" for this suite. 
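The "optional updates" test projects ConfigMaps marked Optional into a volume, then deletes one, updates another and creates a third, expecting the mounted files to track those changes. The essential projection shape, sketched with a hypothetical helper (the real fixture wires several of these into one pod):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    func optionalConfigMapSource(name string) corev1.VolumeProjection {
        optional := true // Optional lets the pod run even after the ConfigMap is deleted
        return corev1.VolumeProjection{
            ConfigMap: &corev1.ConfigMapProjection{
                LocalObjectReference: corev1.LocalObjectReference{Name: name},
                Optional:             &optional,
            },
        }
    }

Without Optional, deleting a referenced ConfigMap would leave the volume in an error state; with it, the kubelet simply drops the file on the next sync, which is exactly the behavior the "waiting to observe update in volume" step polls for.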
Feb 15 13:33:10.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:33:10.293: INFO: namespace projected-9542 deletion completed in 22.236057402s • [SLOW TEST:36.761 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:33:10.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 15 13:33:10.450: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 15 13:33:10.463: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 15 13:33:15.473: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 15 13:33:19.488: INFO: Creating deployment "test-rolling-update-deployment" Feb 15 13:33:19.532: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 15 13:33:19.543: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 15 13:33:21.555: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 15 13:33:21.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717370399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717370399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717370399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717370399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 13:33:23.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63717370399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717370399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717370399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717370399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 13:33:25.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717370399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717370399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717370399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717370399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 13:33:27.566: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 15 13:33:27.657: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-542,SelfLink:/apis/apps/v1/namespaces/deployment-542/deployments/test-rolling-update-deployment,UID:de6a466d-d4d5-493b-bdf2-9a7d12ab0810,ResourceVersion:24449829,Generation:1,CreationTimestamp:2020-02-15 13:33:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-15 13:33:19 +0000 UTC 2020-02-15 13:33:19 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-15 13:33:27 +0000 UTC 2020-02-15 13:33:19 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 15 13:33:27.662: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-542,SelfLink:/apis/apps/v1/namespaces/deployment-542/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:aa2080e3-791b-4e45-8973-858f82631c7d,ResourceVersion:24449819,Generation:1,CreationTimestamp:2020-02-15 13:33:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment de6a466d-d4d5-493b-bdf2-9a7d12ab0810 0xc00247b327 0xc00247b328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 15 13:33:27.662: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 15 13:33:27.662: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-542,SelfLink:/apis/apps/v1/namespaces/deployment-542/replicasets/test-rolling-update-controller,UID:78af6c39-e34f-44d3-b485-03f2c0db723f,ResourceVersion:24449828,Generation:2,CreationTimestamp:2020-02-15 13:33:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment de6a466d-d4d5-493b-bdf2-9a7d12ab0810 0xc00247b257 0xc00247b258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 15 
13:33:27.667: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-9qppb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-9qppb,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-542,SelfLink:/api/v1/namespaces/deployment-542/pods/test-rolling-update-deployment-79f6b9d75c-9qppb,UID:240383a9-236c-4b96-9b7c-30fa704a41ef,ResourceVersion:24449818,Generation:0,CreationTimestamp:2020-02-15 13:33:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c aa2080e3-791b-4e45-8973-858f82631c7d 0xc00247bc57 0xc00247bc58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gsbnv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gsbnv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-gsbnv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00247bcd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00247bcf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:33:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:33:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:33:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:33:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-15 13:33:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-15 13:33:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://3f19edd16483ee8f06922b96f3238f7d3ff3efddd308d7fa691ab6ab4e3fd081}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:33:27.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "deployment-542" for this suite. Feb 15 13:33:33.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:33:33.839: INFO: namespace deployment-542 deletion completed in 6.168448282s • [SLOW TEST:23.546 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:33:33.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-bl97 STEP: Creating a pod to test atomic-volume-subpath Feb 15 13:33:33.986: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-bl97" in namespace "subpath-32" to be "success or failure" Feb 15 13:33:34.006: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Pending", Reason="", readiness=false. Elapsed: 19.438666ms Feb 15 13:33:36.020: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033131117s Feb 15 13:33:38.027: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040019661s Feb 15 13:33:40.034: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047854502s Feb 15 13:33:42.045: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Running", Reason="", readiness=true. Elapsed: 8.05845769s Feb 15 13:33:44.056: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Running", Reason="", readiness=true. Elapsed: 10.069877106s Feb 15 13:33:46.064: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Running", Reason="", readiness=true. Elapsed: 12.077192267s Feb 15 13:33:48.072: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Running", Reason="", readiness=true. Elapsed: 14.08520047s Feb 15 13:33:50.081: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Running", Reason="", readiness=true. Elapsed: 16.094132028s Feb 15 13:33:52.101: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Running", Reason="", readiness=true. Elapsed: 18.114459558s Feb 15 13:33:54.109: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Running", Reason="", readiness=true. Elapsed: 20.122905076s Feb 15 13:33:56.126: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Running", Reason="", readiness=true. Elapsed: 22.139771214s Feb 15 13:33:58.137: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.150694614s Feb 15 13:34:00.147: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Running", Reason="", readiness=true. Elapsed: 26.160194514s Feb 15 13:34:02.158: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Running", Reason="", readiness=true. Elapsed: 28.171643884s Feb 15 13:34:04.166: INFO: Pod "pod-subpath-test-projected-bl97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.179147125s STEP: Saw pod success Feb 15 13:34:04.166: INFO: Pod "pod-subpath-test-projected-bl97" satisfied condition "success or failure" Feb 15 13:34:04.172: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-bl97 container test-container-subpath-projected-bl97: STEP: delete the pod Feb 15 13:34:04.273: INFO: Waiting for pod pod-subpath-test-projected-bl97 to disappear Feb 15 13:34:04.291: INFO: Pod pod-subpath-test-projected-bl97 no longer exists STEP: Deleting pod pod-subpath-test-projected-bl97 Feb 15 13:34:04.291: INFO: Deleting pod "pod-subpath-test-projected-bl97" in namespace "subpath-32" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:34:04.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-32" for this suite. Feb 15 13:34:10.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:34:10.594: INFO: namespace subpath-32 deletion completed in 6.184432867s • [SLOW TEST:36.754 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:34:10.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0215 13:34:14.954910 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 15 13:34:14.955: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:34:14.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9958" for this suite. Feb 15 13:34:20.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:34:21.142: INFO: namespace gc-9958 deletion completed in 6.177689132s • [SLOW TEST:10.547 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:34:21.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-ttfn STEP: Creating a pod to test atomic-volume-subpath Feb 15 13:34:21.356: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ttfn" in namespace "subpath-2139" to be "success or failure" Feb 15 13:34:21.365: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.292118ms Feb 15 13:34:23.375: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018027219s Feb 15 13:34:25.381: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024402296s Feb 15 13:34:27.390: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.033414925s Feb 15 13:34:29.397: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Running", Reason="", readiness=true. Elapsed: 8.040253107s Feb 15 13:34:31.408: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Running", Reason="", readiness=true. Elapsed: 10.051508008s Feb 15 13:34:33.416: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Running", Reason="", readiness=true. Elapsed: 12.059858208s Feb 15 13:34:35.430: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Running", Reason="", readiness=true. Elapsed: 14.073644356s Feb 15 13:34:37.440: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Running", Reason="", readiness=true. Elapsed: 16.08366994s Feb 15 13:34:39.453: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Running", Reason="", readiness=true. Elapsed: 18.096643368s Feb 15 13:34:41.463: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Running", Reason="", readiness=true. Elapsed: 20.106195348s Feb 15 13:34:43.528: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Running", Reason="", readiness=true. Elapsed: 22.171827253s Feb 15 13:34:45.539: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Running", Reason="", readiness=true. Elapsed: 24.18273776s Feb 15 13:34:47.859: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Running", Reason="", readiness=true. Elapsed: 26.50246878s Feb 15 13:34:49.868: INFO: Pod "pod-subpath-test-downwardapi-ttfn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.51171099s STEP: Saw pod success Feb 15 13:34:49.868: INFO: Pod "pod-subpath-test-downwardapi-ttfn" satisfied condition "success or failure" Feb 15 13:34:49.877: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-ttfn container test-container-subpath-downwardapi-ttfn: STEP: delete the pod Feb 15 13:34:49.951: INFO: Waiting for pod pod-subpath-test-downwardapi-ttfn to disappear Feb 15 13:34:50.001: INFO: Pod pod-subpath-test-downwardapi-ttfn no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-ttfn Feb 15 13:34:50.001: INFO: Deleting pod "pod-subpath-test-downwardapi-ttfn" in namespace "subpath-2139" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:34:50.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2139" for this suite. 
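Both Atomic writer subpath tests above (projected and downward API) exercise the same mechanism: VolumeMount.SubPath mounting one path out of an atomically updated volume. A trimmed sketch for the downward API flavor; the file path, fieldRef and mount point are illustrative:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    func subpathDownwardVolume() ([]corev1.Volume, []corev1.VolumeMount) {
        volumes := []corev1.Volume{{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "downward/podname",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    }},
                },
            },
        }}
        mounts := []corev1.VolumeMount{{
            Name:      "test-volume",
            MountPath: "/test-volume",
            SubPath:   "downward", // mount only this sub-directory of the volume
        }}
        return volumes, mounts
    }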
Feb 15 13:34:56.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:34:56.141: INFO: namespace subpath-2139 deletion completed in 6.12450336s • [SLOW TEST:34.998 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:34:56.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 15 13:34:56.218: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c8cfe0c-11fc-42a9-8a1a-9758f6754d1c" in namespace "projected-6348" to be "success or failure" Feb 15 13:34:56.225: INFO: Pod "downwardapi-volume-0c8cfe0c-11fc-42a9-8a1a-9758f6754d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.176467ms Feb 15 13:34:58.234: INFO: Pod "downwardapi-volume-0c8cfe0c-11fc-42a9-8a1a-9758f6754d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015318818s Feb 15 13:35:00.246: INFO: Pod "downwardapi-volume-0c8cfe0c-11fc-42a9-8a1a-9758f6754d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028070653s Feb 15 13:35:02.258: INFO: Pod "downwardapi-volume-0c8cfe0c-11fc-42a9-8a1a-9758f6754d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040020893s Feb 15 13:35:04.266: INFO: Pod "downwardapi-volume-0c8cfe0c-11fc-42a9-8a1a-9758f6754d1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048000377s STEP: Saw pod success Feb 15 13:35:04.266: INFO: Pod "downwardapi-volume-0c8cfe0c-11fc-42a9-8a1a-9758f6754d1c" satisfied condition "success or failure" Feb 15 13:35:04.270: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0c8cfe0c-11fc-42a9-8a1a-9758f6754d1c container client-container: STEP: delete the pod Feb 15 13:35:04.574: INFO: Waiting for pod downwardapi-volume-0c8cfe0c-11fc-42a9-8a1a-9758f6754d1c to disappear Feb 15 13:35:04.590: INFO: Pod downwardapi-volume-0c8cfe0c-11fc-42a9-8a1a-9758f6754d1c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:35:04.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6348" for this suite. 
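The cpu-limit test exposes a container resource limit through a downward API projection and reads it back from the mounted file. The projection item at the core of it (file name and container name are assumptions):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    func cpuLimitProjection() corev1.VolumeProjection {
        return corev1.VolumeProjection{
            DownwardAPI: &corev1.DownwardAPIProjection{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "cpu_limit",
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "limits.cpu",
                    },
                }},
            },
        }
    }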
Feb 15 13:35:10.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:35:10.775: INFO: namespace projected-6348 deletion completed in 6.177276977s • [SLOW TEST:14.634 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:35:10.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 15 13:35:10.891: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:35:28.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1171" for this suite. 
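What this test asserts reduces to ordering guarantees in the pod spec: init containers run serially, each to completion, before any app container starts, and with RestartPolicy Never the kubelet does not restart them. A minimal shape (images and commands are placeholders):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    func initContainerSpec() corev1.PodSpec {
        return corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            InitContainers: []corev1.Container{
                {Name: "init1", Image: "busybox", Command: []string{"true"}}, // runs first
                {Name: "init2", Image: "busybox", Command: []string{"true"}}, // then this
            },
            Containers: []corev1.Container{
                {Name: "run1", Image: "busybox", Command: []string{"sleep", "3600"}},
            },
        }
    }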
Feb 15 13:35:34.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:35:34.364: INFO: namespace init-container-1171 deletion completed in 6.173658605s • [SLOW TEST:23.589 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:35:34.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8221 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 15 13:35:34.486: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 15 13:36:08.798: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-8221 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 13:36:08.798: INFO: >>> kubeConfig: /root/.kube/config I0215 13:36:08.999039 8 log.go:172] (0xc001dcad10) (0xc00311e140) Create stream I0215 13:36:08.999129 8 log.go:172] (0xc001dcad10) (0xc00311e140) Stream added, broadcasting: 1 I0215 13:36:09.005914 8 log.go:172] (0xc001dcad10) Reply frame received for 1 I0215 13:36:09.005950 8 log.go:172] (0xc001dcad10) (0xc002083400) Create stream I0215 13:36:09.005958 8 log.go:172] (0xc001dcad10) (0xc002083400) Stream added, broadcasting: 3 I0215 13:36:09.009631 8 log.go:172] (0xc001dcad10) Reply frame received for 3 I0215 13:36:09.009666 8 log.go:172] (0xc001dcad10) (0xc0020834a0) Create stream I0215 13:36:09.009690 8 log.go:172] (0xc001dcad10) (0xc0020834a0) Stream added, broadcasting: 5 I0215 13:36:09.011105 8 log.go:172] (0xc001dcad10) Reply frame received for 5 I0215 13:36:09.286748 8 log.go:172] (0xc001dcad10) Data frame received for 3 I0215 13:36:09.286873 8 log.go:172] (0xc002083400) (3) Data frame handling I0215 13:36:09.286897 8 log.go:172] (0xc002083400) (3) Data frame sent I0215 13:36:09.425201 8 log.go:172] (0xc001dcad10) Data frame received for 1 I0215 13:36:09.425357 8 log.go:172] (0xc001dcad10) (0xc002083400) Stream removed, broadcasting: 3 I0215 13:36:09.425422 8 log.go:172] (0xc00311e140) (1) Data frame handling I0215 13:36:09.425448 8 log.go:172] (0xc00311e140) (1) Data frame sent I0215 13:36:09.425714 8 log.go:172] (0xc001dcad10) (0xc0020834a0) Stream removed, broadcasting: 5 I0215 13:36:09.425751 8 log.go:172] (0xc001dcad10) (0xc00311e140) Stream removed, 
broadcasting: 1 I0215 13:36:09.425773 8 log.go:172] (0xc001dcad10) Go away received I0215 13:36:09.426413 8 log.go:172] (0xc001dcad10) (0xc00311e140) Stream removed, broadcasting: 1 I0215 13:36:09.426441 8 log.go:172] (0xc001dcad10) (0xc002083400) Stream removed, broadcasting: 3 I0215 13:36:09.426448 8 log.go:172] (0xc001dcad10) (0xc0020834a0) Stream removed, broadcasting: 5 Feb 15 13:36:09.426: INFO: Waiting for endpoints: map[] Feb 15 13:36:09.460: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-8221 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 13:36:09.460: INFO: >>> kubeConfig: /root/.kube/config I0215 13:36:09.532855 8 log.go:172] (0xc00066f760) (0xc0018b9220) Create stream I0215 13:36:09.532940 8 log.go:172] (0xc00066f760) (0xc0018b9220) Stream added, broadcasting: 1 I0215 13:36:09.539695 8 log.go:172] (0xc00066f760) Reply frame received for 1 I0215 13:36:09.539730 8 log.go:172] (0xc00066f760) (0xc00311e1e0) Create stream I0215 13:36:09.539741 8 log.go:172] (0xc00066f760) (0xc00311e1e0) Stream added, broadcasting: 3 I0215 13:36:09.541812 8 log.go:172] (0xc00066f760) Reply frame received for 3 I0215 13:36:09.541855 8 log.go:172] (0xc00066f760) (0xc0018b92c0) Create stream I0215 13:36:09.541867 8 log.go:172] (0xc00066f760) (0xc0018b92c0) Stream added, broadcasting: 5 I0215 13:36:09.543984 8 log.go:172] (0xc00066f760) Reply frame received for 5 I0215 13:36:09.709032 8 log.go:172] (0xc00066f760) Data frame received for 3 I0215 13:36:09.709130 8 log.go:172] (0xc00311e1e0) (3) Data frame handling I0215 13:36:09.709187 8 log.go:172] (0xc00311e1e0) (3) Data frame sent I0215 13:36:09.888926 8 log.go:172] (0xc00066f760) Data frame received for 1 I0215 13:36:09.889237 8 log.go:172] (0xc00066f760) (0xc0018b92c0) Stream removed, broadcasting: 5 I0215 13:36:09.889302 8 log.go:172] (0xc0018b9220) (1) Data frame handling I0215 13:36:09.889322 8 log.go:172] (0xc0018b9220) (1) Data frame sent I0215 13:36:09.889373 8 log.go:172] (0xc00066f760) (0xc00311e1e0) Stream removed, broadcasting: 3 I0215 13:36:09.889479 8 log.go:172] (0xc00066f760) (0xc0018b9220) Stream removed, broadcasting: 1 I0215 13:36:09.889506 8 log.go:172] (0xc00066f760) Go away received I0215 13:36:09.890407 8 log.go:172] (0xc00066f760) (0xc0018b9220) Stream removed, broadcasting: 1 I0215 13:36:09.890434 8 log.go:172] (0xc00066f760) (0xc00311e1e0) Stream removed, broadcasting: 3 I0215 13:36:09.890444 8 log.go:172] (0xc00066f760) (0xc0018b92c0) Stream removed, broadcasting: 5 Feb 15 13:36:09.891: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:36:09.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8221" for this suite. 
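The ExecWithOptions/stream traffic above is plumbing around a single HTTP request: the host test pod asks the netexec "dial" endpoint on one pod to fetch the hostName of a second pod over plain HTTP. Reduced to Go (the IPs are the pod IPs from this particular run and resolve only inside the cluster):

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
    )

    func main() {
        // Ask pod 10.44.0.2 to dial pod 10.44.0.1 once and report what it got back.
        url := "http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        fmt.Println(string(body)) // e.g. a JSON list of responses containing the target's hostname
    }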
Feb 15 13:36:29.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:36:30.054: INFO: namespace pod-network-test-8221 deletion completed in 20.148675674s • [SLOW TEST:55.690 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:36:30.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:36:36.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4936" for this suite. Feb 15 13:36:42.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:36:42.739: INFO: namespace namespaces-4936 deletion completed in 6.140401395s STEP: Destroying namespace "nsdeletetest-46" for this suite. Feb 15 13:36:42.741: INFO: Namespace nsdeletetest-46 was already deleted STEP: Destroying namespace "nsdeletetest-4434" for this suite. 
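The steps above map directly onto plain kubectl. A minimal sketch of the same scenario, with illustrative names (not the test's generated ones):

  kubectl create namespace nsdelete-demo
  kubectl create service clusterip test-service --tcp=80:80 -n nsdelete-demo
  kubectl delete namespace nsdelete-demo
  # after deletion completes, a recreated namespace of the same name starts empty
  kubectl create namespace nsdelete-demo
  kubectl get services -n nsdelete-demo    # expect: No resources found.

Namespace deletion is asynchronous (note the multi-second "deletion completed" messages throughout this run), so the recreate step has to wait for the old namespace to actually disappear.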
Feb 15 13:36:48.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:36:48.871: INFO: namespace nsdeletetest-4434 deletion completed in 6.129726726s • [SLOW TEST:18.816 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:36:48.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:36:58.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4643" for this suite. 
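Adoption is visible in pod metadata: once a controller with a matching selector exists, the orphan pod gains an ownerReference pointing at it. A minimal sketch of the same Given/When/Then sequence (image and names are illustrative, not the test's exact manifest; kubectl apply creates the documents top-down, so the pod briefly exists as an orphan):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-adoption
    labels:
      name: pod-adoption
  spec:
    containers:
    - name: app
      image: nginx:1.14-alpine
  ---
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-adoption
  spec:
    replicas: 1
    selector:
      name: pod-adoption
    template:
      metadata:
        labels:
          name: pod-adoption
      spec:
        containers:
        - name: app
          image: nginx:1.14-alpine
  EOF
  # the formerly orphan pod now carries an ownerReference to the controller
  kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
  # expect: ReplicationController/pod-adoption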
Feb 15 13:37:20.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:37:20.311: INFO: namespace replication-controller-4643 deletion completed in 22.165994909s • [SLOW TEST:31.439 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:37:20.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Feb 15 13:37:20.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Feb 15 13:37:20.616: INFO: stderr: "" Feb 15 13:37:20.616: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:37:20.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4243" for this suite. 
Feb 15 13:37:26.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:37:27.427: INFO: namespace kubectl-4243 deletion completed in 6.798620144s • [SLOW TEST:7.116 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:37:27.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:38:23.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4295" for this suite. 
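The container names terminate-cmd-rpa, -rpof and -rpn presumably encode the three restart policies (Always, OnFailure, Never), and every assertion in this spec is over status fields that can be inspected directly. A sketch of the same checks against an arbitrary pod (name illustrative):

  kubectl get pod terminate-demo -o jsonpath='{.status.phase}'
  kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
  kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].state}'
  kubectl get pod terminate-demo -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'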
Feb 15 13:38:29.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:38:30.066: INFO: namespace container-runtime-4295 deletion completed in 6.234854275s • [SLOW TEST:62.639 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:38:30.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0215 13:38:44.707947 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 15 13:38:44.708: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:38:44.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9400" for this suite. 
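The half-and-half ownership setup works because the garbage collector consults every entry in metadata.ownerReferences before deleting a dependent. Listing the owners, and issuing the kind of foreground deletion the test performs, can be done by hand; the v1.15 kubectl used in this run predates --cascade=foreground, so the DeleteOptions call below goes through the REST API instead (names illustrative):

  # list every owner recorded on a pod
  kubectl get pod some-pod -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name}{"\n"}{end}'
  # foreground-delete one owner via the API
  kubectl proxy --port=8001 &
  curl -X DELETE -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
    http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/simpletest-rc-to-be-deleted

Pods that still have a second, live owner survive the cascade; only the ownerReference entry pointing at the deleted controller is removed.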
Feb 15 13:38:56.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:38:57.068: INFO: namespace gc-9400 deletion completed in 11.741365118s • [SLOW TEST:27.001 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:38:57.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 15 13:38:57.173: INFO: Waiting up to 5m0s for pod "pod-a764f755-5d32-4bdf-aad8-a98bd38793c4" in namespace "emptydir-3171" to be "success or failure" Feb 15 13:38:57.244: INFO: Pod "pod-a764f755-5d32-4bdf-aad8-a98bd38793c4": Phase="Pending", Reason="", readiness=false. Elapsed: 70.475873ms Feb 15 13:38:59.251: INFO: Pod "pod-a764f755-5d32-4bdf-aad8-a98bd38793c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07719731s Feb 15 13:39:01.256: INFO: Pod "pod-a764f755-5d32-4bdf-aad8-a98bd38793c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082199588s Feb 15 13:39:03.265: INFO: Pod "pod-a764f755-5d32-4bdf-aad8-a98bd38793c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091316475s Feb 15 13:39:05.272: INFO: Pod "pod-a764f755-5d32-4bdf-aad8-a98bd38793c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098602469s Feb 15 13:39:07.285: INFO: Pod "pod-a764f755-5d32-4bdf-aad8-a98bd38793c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111594561s STEP: Saw pod success Feb 15 13:39:07.286: INFO: Pod "pod-a764f755-5d32-4bdf-aad8-a98bd38793c4" satisfied condition "success or failure" Feb 15 13:39:07.299: INFO: Trying to get logs from node iruya-node pod pod-a764f755-5d32-4bdf-aad8-a98bd38793c4 container test-container: STEP: delete the pod Feb 15 13:39:07.382: INFO: Waiting for pod pod-a764f755-5d32-4bdf-aad8-a98bd38793c4 to disappear Feb 15 13:39:07.390: INFO: Pod pod-a764f755-5d32-4bdf-aad8-a98bd38793c4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:39:07.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3171" for this suite. 
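The tmpfs variant is requested with medium: Memory on the emptyDir. A pod that prints its own mount table shows both the filesystem type and the default 0777 directory mode that the test asserts; a minimal sketch, not the test's exact pod:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory
  EOF
  kubectl logs emptydir-tmpfs-demo    # expect a tmpfs mount and drwxrwxrwx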
Feb 15 13:39:13.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:39:13.582: INFO: namespace emptydir-3171 deletion completed in 6.185487155s • [SLOW TEST:16.514 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:39:13.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-0e0398fb-fe06-4a15-895a-337f74a43eea STEP: Creating the pod STEP: Updating configmap configmap-test-upd-0e0398fb-fe06-4a15-895a-337f74a43eea STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:39:23.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6881" for this suite. 
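The "waiting to observe update in volume" step exists because ConfigMap volumes are refreshed on the kubelet's periodic sync rather than instantly. A sketch of the same create/mount/update/observe loop (names illustrative; the boolean --dry-run | replace idiom matches the v1.15-era kubectl used in this run):

  kubectl create configmap demo-cm --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-volume-demo
  spec:
    containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
      volumeMounts:
      - name: config
        mountPath: /etc/config
    volumes:
    - name: config
      configMap:
        name: demo-cm
  EOF
  kubectl create configmap demo-cm --from-literal=data-1=value-2 --dry-run -o yaml | kubectl replace -f -
  kubectl logs -f cm-volume-demo    # the output flips to value-2 within a sync period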
Feb 15 13:39:46.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:39:46.181: INFO: namespace configmap-6881 deletion completed in 22.191872404s • [SLOW TEST:32.599 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:39:46.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 15 13:39:57.377: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:39:58.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6208" for this suite. 
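The release half is the inverse of adoption: relabeling a pod out of the selector makes the ReplicaSet drop its ownerReference (and spin up a replacement to keep the replica count). A two-line sketch, reusing the pod name from the log:

  kubectl label pod pod-adoption-release name=not-matching --overwrite
  kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'    # now empty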
Feb 15 13:40:20.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:40:20.680: INFO: namespace replicaset-6208 deletion completed in 22.264289439s • [SLOW TEST:34.499 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:40:20.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 15 13:40:20.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7590' Feb 15 13:40:22.667: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 15 13:40:22.668: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller Feb 15 13:40:22.799: INFO: scanned /root for discovery docs: Feb 15 13:40:22.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7590' Feb 15 13:40:42.973: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 15 13:40:42.974: INFO: stdout: "Created e2e-test-nginx-rc-226badd115edd05c4b4c7cf8ab553ee6\nScaling up e2e-test-nginx-rc-226badd115edd05c4b4c7cf8ab553ee6 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-226badd115edd05c4b4c7cf8ab553ee6 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-226badd115edd05c4b4c7cf8ab553ee6 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Feb 15 13:40:42.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-7590' Feb 15 13:40:43.169: INFO: stderr: "" Feb 15 13:40:43.169: INFO: stdout: "e2e-test-nginx-rc-226badd115edd05c4b4c7cf8ab553ee6-5qw99 " Feb 15 13:40:43.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-226badd115edd05c4b4c7cf8ab553ee6-5qw99 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7590' Feb 15 13:40:43.256: INFO: stderr: "" Feb 15 13:40:43.256: INFO: stdout: "true" Feb 15 13:40:43.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-226badd115edd05c4b4c7cf8ab553ee6-5qw99 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7590' Feb 15 13:40:43.327: INFO: stderr: "" Feb 15 13:40:43.327: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Feb 15 13:40:43.327: INFO: e2e-test-nginx-rc-226badd115edd05c4b4c7cf8ab553ee6-5qw99 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Feb 15 13:40:43.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7590' Feb 15 13:40:43.447: INFO: stderr: "" Feb 15 13:40:43.447: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:40:43.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7590" for this suite.
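Both commands above print deprecation warnings, and kubectl rolling-update only ever operated on replication controllers. The modern equivalent of this scenario drives the same image-level rollout through a Deployment (a sketch, not part of the test):

  kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
  kubectl set image deployment/e2e-test-nginx nginx=docker.io/library/nginx:1.14-alpine
  kubectl rollout status deployment/e2e-test-nginx

One behavioral difference: rolling-update to the same image still replaced the pods, whereas setting an unchanged image on a Deployment is a no-op; kubectl rollout restart (available from kubectl 1.15) is the way to force a fresh rollout.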
Feb 15 13:41:05.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:41:05.653: INFO: namespace kubectl-7590 deletion completed in 22.164030769s • [SLOW TEST:44.973 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:41:05.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-6e0de012-cf6b-4614-9ab3-384cebe18fd7 STEP: Creating a pod to test consume secrets Feb 15 13:41:05.751: INFO: Waiting up to 5m0s for pod "pod-secrets-12158412-be09-4890-af13-fb9a57d7e916" in namespace "secrets-878" to be "success or failure" Feb 15 13:41:05.757: INFO: Pod "pod-secrets-12158412-be09-4890-af13-fb9a57d7e916": Phase="Pending", Reason="", readiness=false. Elapsed: 5.86006ms Feb 15 13:41:07.767: INFO: Pod "pod-secrets-12158412-be09-4890-af13-fb9a57d7e916": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015831811s Feb 15 13:41:09.794: INFO: Pod "pod-secrets-12158412-be09-4890-af13-fb9a57d7e916": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042657031s Feb 15 13:41:11.806: INFO: Pod "pod-secrets-12158412-be09-4890-af13-fb9a57d7e916": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055043533s Feb 15 13:41:13.832: INFO: Pod "pod-secrets-12158412-be09-4890-af13-fb9a57d7e916": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0808515s STEP: Saw pod success Feb 15 13:41:13.832: INFO: Pod "pod-secrets-12158412-be09-4890-af13-fb9a57d7e916" satisfied condition "success or failure" Feb 15 13:41:13.848: INFO: Trying to get logs from node iruya-node pod pod-secrets-12158412-be09-4890-af13-fb9a57d7e916 container secret-volume-test: STEP: delete the pod Feb 15 13:41:14.055: INFO: Waiting for pod pod-secrets-12158412-be09-4890-af13-fb9a57d7e916 to disappear Feb 15 13:41:14.083: INFO: Pod pod-secrets-12158412-be09-4890-af13-fb9a57d7e916 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:41:14.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-878" for this suite. 
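"With mappings" refers to the items list on the secret volume source, which projects selected keys to custom paths instead of using the key names. A minimal sketch (key, path, and value are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-test-map
  stringData:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map
        items:
        - key: data-1
          path: new-path-data-1
  EOF
  kubectl logs pod-secrets-demo    # prints value-1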
Feb 15 13:41:20.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:41:20.261: INFO: namespace secrets-878 deletion completed in 6.160164168s • [SLOW TEST:14.608 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:41:20.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 15 13:41:28.954: INFO: Successfully updated pod "annotationupdate6ee7f3fe-db51-4de3-83f8-a8746784fc1b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:41:31.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1796" for this suite. 
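The "Successfully updated pod" line is the annotation change being applied; the kubelet then rewrites the projected downwardAPI file, which is what the test observes. A sketch of the same mechanism (mount path and names are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: annotationupdate-demo
    annotations:
      build: one
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: annotations
              fieldRef:
                fieldPath: metadata.annotations
  EOF
  kubectl annotate pod annotationupdate-demo build=two --overwrite
  kubectl logs -f annotationupdate-demo    # the annotations file is rewritten shortly after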
Feb 15 13:41:55.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:41:55.170: INFO: namespace projected-1796 deletion completed in 24.112234713s • [SLOW TEST:34.908 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:41:55.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:42:55.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8444" for this suite. 
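The minute-long gap between client creation and teardown is the observation window: a failing readiness probe must leave the pod Running but never Ready, and must never trigger a restart (restarts are the liveness probe's job). A sketch (pod name and image illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: readiness-fail-demo
  spec:
    containers:
    - name: test-webserver
      image: nginx:1.14-alpine
      readinessProbe:
        exec:
          command: ["/bin/false"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  # Ready stays False and restartCount stays 0 for as long as you watch
  kubectl get pod readiness-fail-demo -o jsonpath='{.status.conditions[?(@.type=="Ready")].status} {.status.containerStatuses[0].restartCount}'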
Feb 15 13:43:17.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:43:17.445: INFO: namespace container-probe-8444 deletion completed in 22.141734329s • [SLOW TEST:82.274 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:43:17.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 15 13:43:17.634: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6963,SelfLink:/api/v1/namespaces/watch-6963/configmaps/e2e-watch-test-label-changed,UID:3e36e5e0-986e-4bac-ae48-afcb839ff600,ResourceVersion:24451381,Generation:0,CreationTimestamp:2020-02-15 13:43:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 15 13:43:17.634: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6963,SelfLink:/api/v1/namespaces/watch-6963/configmaps/e2e-watch-test-label-changed,UID:3e36e5e0-986e-4bac-ae48-afcb839ff600,ResourceVersion:24451382,Generation:0,CreationTimestamp:2020-02-15 13:43:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 15 13:43:17.634: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6963,SelfLink:/api/v1/namespaces/watch-6963/configmaps/e2e-watch-test-label-changed,UID:3e36e5e0-986e-4bac-ae48-afcb839ff600,ResourceVersion:24451383,Generation:0,CreationTimestamp:2020-02-15 13:43:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 15 13:43:27.664: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6963,SelfLink:/api/v1/namespaces/watch-6963/configmaps/e2e-watch-test-label-changed,UID:3e36e5e0-986e-4bac-ae48-afcb839ff600,ResourceVersion:24451398,Generation:0,CreationTimestamp:2020-02-15 13:43:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 15 13:43:27.664: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6963,SelfLink:/api/v1/namespaces/watch-6963/configmaps/e2e-watch-test-label-changed,UID:3e36e5e0-986e-4bac-ae48-afcb839ff600,ResourceVersion:24451399,Generation:0,CreationTimestamp:2020-02-15 13:43:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 15 13:43:27.664: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6963,SelfLink:/api/v1/namespaces/watch-6963/configmaps/e2e-watch-test-label-changed,UID:3e36e5e0-986e-4bac-ae48-afcb839ff600,ResourceVersion:24451400,Generation:0,CreationTimestamp:2020-02-15 13:43:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:43:27.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6963" for this suite. 
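The same ADDED/MODIFIED/DELETED stream can be reproduced with a label-selected watch: events vanish while the label does not match, and the object re-enters the watch as an ADDED when the label is restored. A sketch using the label from the log:

  kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch &
  kubectl create configmap e2e-watch-test-label-changed
  kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored
  # DELETED from the watch's point of view, though the object still exists
  kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=no-longer-matching --overwrite
  # ADDED again once the label is restored
  kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored --overwrite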
Feb 15 13:43:33.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:43:33.838: INFO: namespace watch-6963 deletion completed in 6.166758325s • [SLOW TEST:16.392 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:43:33.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 15 13:43:33.986: INFO: Waiting up to 5m0s for pod "pod-063e6e73-2903-486e-9617-1a182fd90f6c" in namespace "emptydir-2305" to be "success or failure" Feb 15 13:43:34.040: INFO: Pod "pod-063e6e73-2903-486e-9617-1a182fd90f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 53.696151ms Feb 15 13:43:36.048: INFO: Pod "pod-063e6e73-2903-486e-9617-1a182fd90f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061653931s Feb 15 13:43:38.055: INFO: Pod "pod-063e6e73-2903-486e-9617-1a182fd90f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068429338s Feb 15 13:43:40.067: INFO: Pod "pod-063e6e73-2903-486e-9617-1a182fd90f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081248568s Feb 15 13:43:42.078: INFO: Pod "pod-063e6e73-2903-486e-9617-1a182fd90f6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091667157s STEP: Saw pod success Feb 15 13:43:42.078: INFO: Pod "pod-063e6e73-2903-486e-9617-1a182fd90f6c" satisfied condition "success or failure" Feb 15 13:43:42.081: INFO: Trying to get logs from node iruya-node pod pod-063e6e73-2903-486e-9617-1a182fd90f6c container test-container: STEP: delete the pod Feb 15 13:43:42.264: INFO: Waiting for pod pod-063e6e73-2903-486e-9617-1a182fd90f6c to disappear Feb 15 13:43:42.280: INFO: Pod pod-063e6e73-2903-486e-9617-1a182fd90f6c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:43:42.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2305" for this suite. 
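The (root,0644,default) tuple names the user the test writes as, the file mode it sets, and the emptyDir medium (default, i.e. node disk). The check itself is a permission read-back; a sketch, not the test's exact pod:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}
  EOF
  kubectl logs emptydir-mode-demo    # expect -rw-r--r--

The (non-root,...) variant that follows runs the same check under a non-root securityContext.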
Feb 15 13:43:48.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:43:48.636: INFO: namespace emptydir-2305 deletion completed in 6.348050033s • [SLOW TEST:14.798 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:43:48.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 15 13:43:48.755: INFO: Waiting up to 5m0s for pod "pod-83d426b9-82dc-43ac-8e81-5c17be6bcb2b" in namespace "emptydir-9670" to be "success or failure" Feb 15 13:43:48.765: INFO: Pod "pod-83d426b9-82dc-43ac-8e81-5c17be6bcb2b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.048544ms Feb 15 13:43:50.785: INFO: Pod "pod-83d426b9-82dc-43ac-8e81-5c17be6bcb2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029634416s Feb 15 13:43:52.800: INFO: Pod "pod-83d426b9-82dc-43ac-8e81-5c17be6bcb2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044537589s Feb 15 13:43:54.807: INFO: Pod "pod-83d426b9-82dc-43ac-8e81-5c17be6bcb2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051994887s Feb 15 13:43:56.815: INFO: Pod "pod-83d426b9-82dc-43ac-8e81-5c17be6bcb2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059083692s STEP: Saw pod success Feb 15 13:43:56.815: INFO: Pod "pod-83d426b9-82dc-43ac-8e81-5c17be6bcb2b" satisfied condition "success or failure" Feb 15 13:43:56.819: INFO: Trying to get logs from node iruya-node pod pod-83d426b9-82dc-43ac-8e81-5c17be6bcb2b container test-container: STEP: delete the pod Feb 15 13:43:56.897: INFO: Waiting for pod pod-83d426b9-82dc-43ac-8e81-5c17be6bcb2b to disappear Feb 15 13:43:56.906: INFO: Pod pod-83d426b9-82dc-43ac-8e81-5c17be6bcb2b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:43:56.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9670" for this suite. 
Feb 15 13:44:02.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:44:03.054: INFO: namespace emptydir-9670 deletion completed in 6.140895876s • [SLOW TEST:14.418 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:44:03.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 15 13:44:10.270: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:44:10.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6244" for this suite. 
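FallbackToLogsOnError only substitutes container logs when the container fails and the termination message file is empty; here the container succeeds and writes "OK" to the default /dev/termination-log, so the message comes from the file, exactly as the "Expected: &{OK}" assertion shows. A sketch (names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-demo
  spec:
    restartPolicy: Never
    containers:
    - name: termination-message-container
      image: busybox
      command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'    # OK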
Feb 15 13:44:16.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:44:16.693: INFO: namespace container-runtime-6244 deletion completed in 6.314590165s • [SLOW TEST:13.638 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:44:16.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 15 13:44:16.923: INFO: Waiting up to 5m0s for pod "pod-aea6f15c-0b83-4a71-9ab2-b4cb37fa4461" in namespace "emptydir-6207" to be "success or failure" Feb 15 13:44:16.944: INFO: Pod "pod-aea6f15c-0b83-4a71-9ab2-b4cb37fa4461": Phase="Pending", Reason="", readiness=false. Elapsed: 20.897406ms Feb 15 13:44:18.956: INFO: Pod "pod-aea6f15c-0b83-4a71-9ab2-b4cb37fa4461": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032372889s Feb 15 13:44:21.051: INFO: Pod "pod-aea6f15c-0b83-4a71-9ab2-b4cb37fa4461": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127997191s Feb 15 13:44:23.061: INFO: Pod "pod-aea6f15c-0b83-4a71-9ab2-b4cb37fa4461": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137121271s Feb 15 13:44:25.066: INFO: Pod "pod-aea6f15c-0b83-4a71-9ab2-b4cb37fa4461": Phase="Running", Reason="", readiness=true. Elapsed: 8.142575557s Feb 15 13:44:27.074: INFO: Pod "pod-aea6f15c-0b83-4a71-9ab2-b4cb37fa4461": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.15083188s STEP: Saw pod success Feb 15 13:44:27.075: INFO: Pod "pod-aea6f15c-0b83-4a71-9ab2-b4cb37fa4461" satisfied condition "success or failure" Feb 15 13:44:27.084: INFO: Trying to get logs from node iruya-node pod pod-aea6f15c-0b83-4a71-9ab2-b4cb37fa4461 container test-container: STEP: delete the pod Feb 15 13:44:27.479: INFO: Waiting for pod pod-aea6f15c-0b83-4a71-9ab2-b4cb37fa4461 to disappear Feb 15 13:44:27.485: INFO: Pod pod-aea6f15c-0b83-4a71-9ab2-b4cb37fa4461 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:44:27.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6207" for this suite. Feb 15 13:44:33.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:44:33.675: INFO: namespace emptydir-6207 deletion completed in 6.183289103s • [SLOW TEST:16.980 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:44:33.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:44:33.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8813" for this suite. 
Feb 15 13:44:39.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:44:40.125: INFO: namespace kubelet-test-8813 deletion completed in 6.197193493s • [SLOW TEST:6.448 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:44:40.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 15 13:44:40.308: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:44:53.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1007" for this suite. 
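With restartPolicy: Never, the first failing init container fails the whole pod and the app containers are never started, which is what the roughly 13-second window between pod creation and teardown covers. A sketch (names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-init-fail-demo
  spec:
    restartPolicy: Never
    initContainers:
    - name: init1
      image: busybox
      command: ["/bin/false"]
    containers:
    - name: run1
      image: busybox
      command: ["/bin/true"]
  EOF
  # phase goes to Failed; the init container records its non-zero exit code
  kubectl get pod pod-init-fail-demo -o jsonpath='{.status.phase} {.status.initContainerStatuses[0].state.terminated.exitCode}'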
Feb 15 13:45:01.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:45:01.447: INFO: namespace init-container-1007 deletion completed in 8.170171884s • [SLOW TEST:21.321 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:45:01.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Feb 15 13:45:01.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2676' Feb 15 13:45:02.013: INFO: stderr: "" Feb 15 13:45:02.013: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 15 13:45:02.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2676' Feb 15 13:45:02.261: INFO: stderr: "" Feb 15 13:45:02.262: INFO: stdout: "update-demo-nautilus-5ddl6 update-demo-nautilus-xdj9h " Feb 15 13:45:02.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ddl6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2676' Feb 15 13:45:02.512: INFO: stderr: "" Feb 15 13:45:02.513: INFO: stdout: "" Feb 15 13:45:02.513: INFO: update-demo-nautilus-5ddl6 is created but not running Feb 15 13:45:07.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2676' Feb 15 13:45:07.976: INFO: stderr: "" Feb 15 13:45:07.976: INFO: stdout: "update-demo-nautilus-5ddl6 update-demo-nautilus-xdj9h " Feb 15 13:45:07.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ddl6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2676' Feb 15 13:45:08.143: INFO: stderr: "" Feb 15 13:45:08.144: INFO: stdout: "" Feb 15 13:45:08.144: INFO: update-demo-nautilus-5ddl6 is created but not running Feb 15 13:45:13.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2676' Feb 15 13:45:13.306: INFO: stderr: "" Feb 15 13:45:13.307: INFO: stdout: "update-demo-nautilus-5ddl6 update-demo-nautilus-xdj9h " Feb 15 13:45:13.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ddl6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2676' Feb 15 13:45:13.482: INFO: stderr: "" Feb 15 13:45:13.483: INFO: stdout: "true" Feb 15 13:45:13.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ddl6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2676' Feb 15 13:45:13.658: INFO: stderr: "" Feb 15 13:45:13.658: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 13:45:13.658: INFO: validating pod update-demo-nautilus-5ddl6 Feb 15 13:45:13.707: INFO: got data: { "image": "nautilus.jpg" } Feb 15 13:45:13.708: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 13:45:13.708: INFO: update-demo-nautilus-5ddl6 is verified up and running Feb 15 13:45:13.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xdj9h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2676' Feb 15 13:45:13.802: INFO: stderr: "" Feb 15 13:45:13.802: INFO: stdout: "true" Feb 15 13:45:13.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xdj9h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2676' Feb 15 13:45:13.963: INFO: stderr: "" Feb 15 13:45:13.963: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 15 13:45:13.963: INFO: validating pod update-demo-nautilus-xdj9h Feb 15 13:45:13.973: INFO: got data: { "image": "nautilus.jpg" } Feb 15 13:45:13.974: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 15 13:45:13.974: INFO: update-demo-nautilus-xdj9h is verified up and running STEP: using delete to clean up resources Feb 15 13:45:13.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2676' Feb 15 13:45:14.117: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 15 13:45:14.117: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 15 13:45:14.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2676' Feb 15 13:45:14.237: INFO: stderr: "No resources found.\n" Feb 15 13:45:14.237: INFO: stdout: "" Feb 15 13:45:14.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2676 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 15 13:45:14.331: INFO: stderr: "" Feb 15 13:45:14.331: INFO: stdout: "update-demo-nautilus-5ddl6\nupdate-demo-nautilus-xdj9h\n" Feb 15 13:45:14.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2676' Feb 15 13:45:15.036: INFO: stderr: "No resources found.\n" Feb 15 13:45:15.036: INFO: stdout: "" Feb 15 13:45:15.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2676 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 15 13:45:15.153: INFO: stderr: "" Feb 15 13:45:15.153: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:45:15.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2676" for this suite. Feb 15 13:45:37.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:45:37.368: INFO: namespace kubectl-2676 deletion completed in 22.210125017s • [SLOW TEST:35.921 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:45:37.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3443 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 15 13:45:37.477: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 15 13:46:09.921: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-3443 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 13:46:09.922: INFO: >>> kubeConfig: /root/.kube/config I0215 13:46:09.984230 8 log.go:172] (0xc00066f130) (0xc000f5cfa0) Create stream I0215 13:46:09.984381 8 log.go:172] (0xc00066f130) (0xc000f5cfa0) Stream added, broadcasting: 1 I0215 13:46:09.995062 8 log.go:172] (0xc00066f130) Reply frame received for 1 I0215 13:46:09.995167 8 log.go:172] (0xc00066f130) (0xc00024ba40) Create stream I0215 13:46:09.995177 8 log.go:172] (0xc00066f130) (0xc00024ba40) Stream added, broadcasting: 3 I0215 13:46:09.996924 8 log.go:172] (0xc00066f130) Reply frame received for 3 I0215 13:46:09.996963 8 log.go:172] (0xc00066f130) (0xc001c3fea0) Create stream I0215 13:46:09.996973 8 log.go:172] (0xc00066f130) (0xc001c3fea0) Stream added, broadcasting: 5 I0215 13:46:10.000374 8 log.go:172] (0xc00066f130) Reply frame received for 5 I0215 13:46:10.296661 8 log.go:172] (0xc00066f130) Data frame received for 3 I0215 13:46:10.296809 8 log.go:172] (0xc00024ba40) (3) Data frame handling I0215 13:46:10.296838 8 log.go:172] (0xc00024ba40) (3) Data frame sent I0215 13:46:10.441015 8 log.go:172] (0xc00066f130) Data frame received for 1 I0215 13:46:10.441293 8 log.go:172] (0xc00066f130) (0xc00024ba40) Stream removed, broadcasting: 3 I0215 13:46:10.441600 8 log.go:172] (0xc000f5cfa0) (1) Data frame handling I0215 13:46:10.441639 8 log.go:172] (0xc000f5cfa0) (1) Data frame sent I0215 13:46:10.441696 8 log.go:172] (0xc00066f130) (0xc001c3fea0) Stream removed, broadcasting: 5 I0215 13:46:10.441745 8 log.go:172] (0xc00066f130) (0xc000f5cfa0) Stream removed, broadcasting: 1 I0215 13:46:10.441766 8 log.go:172] (0xc00066f130) Go away received I0215 13:46:10.442353 8 log.go:172] (0xc00066f130) (0xc000f5cfa0) Stream removed, broadcasting: 1 I0215 13:46:10.442379 8 log.go:172] (0xc00066f130) (0xc00024ba40) Stream removed, broadcasting: 3 I0215 13:46:10.442396 8 log.go:172] (0xc00066f130) (0xc001c3fea0) Stream removed, broadcasting: 5 Feb 15 13:46:10.442: INFO: Waiting for endpoints: map[] Feb 15 13:46:10.451: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-3443 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 13:46:10.451: INFO: >>> kubeConfig: /root/.kube/config I0215 13:46:10.539787 8 log.go:172] (0xc00205f290) (0xc002082000) Create stream I0215 13:46:10.540148 8 log.go:172] (0xc00205f290) (0xc002082000) Stream added, broadcasting: 1 I0215 13:46:10.552068 8 log.go:172] (0xc00205f290) Reply frame received for 1 I0215 13:46:10.552127 8 log.go:172] (0xc00205f290) (0xc000f5d040) Create stream I0215 13:46:10.552136 8 log.go:172] (0xc00205f290) (0xc000f5d040) Stream added, broadcasting: 3 I0215 13:46:10.554854 8 log.go:172] (0xc00205f290) Reply frame received for 3 I0215 13:46:10.554905 8 log.go:172] (0xc00205f290) (0xc00024bae0) Create stream I0215 13:46:10.554932 8 log.go:172] (0xc00205f290) (0xc00024bae0) Stream added, broadcasting: 5 I0215 13:46:10.558540 8 log.go:172] (0xc00205f290) Reply frame received for 5 I0215 13:46:10.896749 8 log.go:172] (0xc00205f290) Data frame received for 3 I0215 13:46:10.896927 8 log.go:172] (0xc000f5d040) (3) Data frame handling I0215 
13:46:10.896954 8 log.go:172] (0xc000f5d040) (3) Data frame sent I0215 13:46:11.027823 8 log.go:172] (0xc00205f290) (0xc000f5d040) Stream removed, broadcasting: 3 I0215 13:46:11.028127 8 log.go:172] (0xc00205f290) Data frame received for 1 I0215 13:46:11.028291 8 log.go:172] (0xc00205f290) (0xc00024bae0) Stream removed, broadcasting: 5 I0215 13:46:11.028351 8 log.go:172] (0xc002082000) (1) Data frame handling I0215 13:46:11.028373 8 log.go:172] (0xc002082000) (1) Data frame sent I0215 13:46:11.028383 8 log.go:172] (0xc00205f290) (0xc002082000) Stream removed, broadcasting: 1 I0215 13:46:11.028391 8 log.go:172] (0xc00205f290) Go away received I0215 13:46:11.028838 8 log.go:172] (0xc00205f290) (0xc002082000) Stream removed, broadcasting: 1 I0215 13:46:11.028860 8 log.go:172] (0xc00205f290) (0xc000f5d040) Stream removed, broadcasting: 3 I0215 13:46:11.028874 8 log.go:172] (0xc00205f290) (0xc00024bae0) Stream removed, broadcasting: 5 Feb 15 13:46:11.029: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:46:11.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3443" for this suite. Feb 15 13:46:31.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:46:31.172: INFO: namespace pod-network-test-3443 deletion completed in 20.131965668s • [SLOW TEST:53.804 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:46:31.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 15 13:46:31.296: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27890c15-8ed4-435c-92d2-1dc5bdbd3388" in namespace "downward-api-5613" to be "success or failure" Feb 15 13:46:31.312: INFO: Pod "downwardapi-volume-27890c15-8ed4-435c-92d2-1dc5bdbd3388": Phase="Pending", Reason="", readiness=false. Elapsed: 16.409951ms Feb 15 13:46:33.323: INFO: Pod "downwardapi-volume-27890c15-8ed4-435c-92d2-1dc5bdbd3388": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027386755s Feb 15 13:46:35.331: INFO: Pod "downwardapi-volume-27890c15-8ed4-435c-92d2-1dc5bdbd3388": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034525874s Feb 15 13:46:37.897: INFO: Pod "downwardapi-volume-27890c15-8ed4-435c-92d2-1dc5bdbd3388": Phase="Pending", Reason="", readiness=false. Elapsed: 6.600795523s Feb 15 13:46:39.914: INFO: Pod "downwardapi-volume-27890c15-8ed4-435c-92d2-1dc5bdbd3388": Phase="Pending", Reason="", readiness=false. Elapsed: 8.618328958s Feb 15 13:46:41.934: INFO: Pod "downwardapi-volume-27890c15-8ed4-435c-92d2-1dc5bdbd3388": Phase="Pending", Reason="", readiness=false. Elapsed: 10.638515503s Feb 15 13:46:43.959: INFO: Pod "downwardapi-volume-27890c15-8ed4-435c-92d2-1dc5bdbd3388": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.66309324s STEP: Saw pod success Feb 15 13:46:43.960: INFO: Pod "downwardapi-volume-27890c15-8ed4-435c-92d2-1dc5bdbd3388" satisfied condition "success or failure" Feb 15 13:46:43.977: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-27890c15-8ed4-435c-92d2-1dc5bdbd3388 container client-container: STEP: delete the pod Feb 15 13:46:44.064: INFO: Waiting for pod downwardapi-volume-27890c15-8ed4-435c-92d2-1dc5bdbd3388 to disappear Feb 15 13:46:44.117: INFO: Pod downwardapi-volume-27890c15-8ed4-435c-92d2-1dc5bdbd3388 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:46:44.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5613" for this suite. Feb 15 13:46:50.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:46:51.159: INFO: namespace downward-api-5613 deletion completed in 6.74053441s • [SLOW TEST:19.986 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:46:51.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 15 13:46:51.240: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:47:08.643: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2234" for this suite. Feb 15 13:47:30.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:47:30.812: INFO: namespace init-container-2234 deletion completed in 22.156061746s • [SLOW TEST:39.652 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:47:30.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-ae836ca8-ba45-41b0-ace9-8f7da22ace8a in namespace container-probe-1487 Feb 15 13:47:38.918: INFO: Started pod liveness-ae836ca8-ba45-41b0-ace9-8f7da22ace8a in namespace container-probe-1487 STEP: checking the pod's current state and verifying that restartCount is present Feb 15 13:47:38.921: INFO: Initial restart count of pod liveness-ae836ca8-ba45-41b0-ace9-8f7da22ace8a is 0 Feb 15 13:48:01.053: INFO: Restart count of pod container-probe-1487/liveness-ae836ca8-ba45-41b0-ace9-8f7da22ace8a is now 1 (22.131642946s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:48:01.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1487" for this suite. 
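A pod equivalent to the liveness-ae836ca8-... pod above can be sketched as follows. Assumptions: the k8s.gcr.io/liveness example image (which serves /healthz successfully for roughly ten seconds and then starts returning 500s) and the probe timings; the suite's actual spec may differ. The kubelet kills and restarts the container once the probe fails, which is exactly the restartCount bump logged above.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo      # illustrative name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness  # docs example image: /healthz fails after ~10s
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
# The restart shows up the same way the suite observes it:
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'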
Feb 15 13:48:07.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:48:07.323: INFO: namespace container-probe-1487 deletion completed in 6.160683618s • [SLOW TEST:36.510 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:48:07.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Feb 15 13:48:07.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-399' Feb 15 13:48:07.847: INFO: stderr: "" Feb 15 13:48:07.847: INFO: stdout: "pod/pause created\n" Feb 15 13:48:07.847: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 15 13:48:07.847: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-399" to be "running and ready" Feb 15 13:48:07.900: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 52.976927ms Feb 15 13:48:09.916: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068287366s Feb 15 13:48:11.924: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076605346s Feb 15 13:48:13.942: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094122479s Feb 15 13:48:15.955: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10767447s Feb 15 13:48:17.961: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.113989314s Feb 15 13:48:17.962: INFO: Pod "pause" satisfied condition "running and ready" Feb 15 13:48:17.962: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Feb 15 13:48:17.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-399' Feb 15 13:48:18.158: INFO: stderr: "" Feb 15 13:48:18.158: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 15 13:48:18.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-399' Feb 15 13:48:18.295: INFO: stderr: "" Feb 15 13:48:18.295: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 15 13:48:18.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-399' Feb 15 13:48:18.406: INFO: stderr: "" Feb 15 13:48:18.406: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 15 13:48:18.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-399' Feb 15 13:48:18.571: INFO: stderr: "" Feb 15 13:48:18.572: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Feb 15 13:48:18.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-399' Feb 15 13:48:18.687: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 15 13:48:18.688: INFO: stdout: "pod \"pause\" force deleted\n" Feb 15 13:48:18.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-399' Feb 15 13:48:18.851: INFO: stderr: "No resources found.\n" Feb 15 13:48:18.852: INFO: stdout: "" Feb 15 13:48:18.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-399 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 15 13:48:18.972: INFO: stderr: "" Feb 15 13:48:18.973: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:48:18.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-399" for this suite. 
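Stripped of the test harness, the label round trip above is just three kubectl calls (the pod name pause and namespace kubectl-399 come from this run; only the long --namespace/--kubeconfig flags are shortened):

kubectl label pods pause testing-label=testing-label-value -n kubectl-399   # add the label
kubectl get pod pause -L testing-label -n kubectl-399                       # -L shows it as an extra column
kubectl label pods pause testing-label- -n kubectl-399                      # trailing '-' removes the label

The trailing-dash syntax is why the second "pod/pause labeled" line in the output is an unlabel rather than a relabel.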
Feb 15 13:48:25.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:48:25.172: INFO: namespace kubectl-399 deletion completed in 6.185094991s • [SLOW TEST:17.849 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:48:25.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Feb 15 13:48:25.455: INFO: Waiting up to 5m0s for pod "client-containers-194e38c2-b45b-400b-bab9-78d7627bf436" in namespace "containers-4487" to be "success or failure" Feb 15 13:48:25.483: INFO: Pod "client-containers-194e38c2-b45b-400b-bab9-78d7627bf436": Phase="Pending", Reason="", readiness=false. Elapsed: 28.504456ms Feb 15 13:48:27.491: INFO: Pod "client-containers-194e38c2-b45b-400b-bab9-78d7627bf436": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036415628s Feb 15 13:48:29.497: INFO: Pod "client-containers-194e38c2-b45b-400b-bab9-78d7627bf436": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042517054s Feb 15 13:48:31.510: INFO: Pod "client-containers-194e38c2-b45b-400b-bab9-78d7627bf436": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055222785s Feb 15 13:48:33.531: INFO: Pod "client-containers-194e38c2-b45b-400b-bab9-78d7627bf436": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076463776s STEP: Saw pod success Feb 15 13:48:33.532: INFO: Pod "client-containers-194e38c2-b45b-400b-bab9-78d7627bf436" satisfied condition "success or failure" Feb 15 13:48:33.538: INFO: Trying to get logs from node iruya-node pod client-containers-194e38c2-b45b-400b-bab9-78d7627bf436 container test-container: STEP: delete the pod Feb 15 13:48:33.764: INFO: Waiting for pod client-containers-194e38c2-b45b-400b-bab9-78d7627bf436 to disappear Feb 15 13:48:33.778: INFO: Pod client-containers-194e38c2-b45b-400b-bab9-78d7627bf436 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:48:33.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4487" for this suite. 
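The override this spec relies on is the pod-level args field, which replaces the image's default CMD while leaving its ENTRYPOINT alone (command would replace the ENTRYPOINT). A minimal sketch; busybox and the echoed words are assumptions, as the suite uses its own e2e test image:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-demo               # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    args: ["echo", "overridden", "arguments"]  # replaces the image's default CMD
EOF
kubectl logs args-demo          # expect: overridden arguments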
Feb 15 13:48:39.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:48:40.007: INFO: namespace containers-4487 deletion completed in 6.186582179s • [SLOW TEST:14.835 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:48:40.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5095.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5095.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5095.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5095.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5095.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5095.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 15 13:48:54.160: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-5095.svc.cluster.local from pod dns-5095/dns-test-b6c1cb9f-1737-47ac-9a7e-b19a80670008: the server could not find the requested resource (get pods dns-test-b6c1cb9f-1737-47ac-9a7e-b19a80670008) Feb 15 13:48:54.169: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-5095/dns-test-b6c1cb9f-1737-47ac-9a7e-b19a80670008: the server could not find the requested resource (get pods dns-test-b6c1cb9f-1737-47ac-9a7e-b19a80670008) Feb 15 13:48:54.173: INFO: Unable to read jessie_udp@PodARecord from pod dns-5095/dns-test-b6c1cb9f-1737-47ac-9a7e-b19a80670008: the server could not find the requested resource (get pods dns-test-b6c1cb9f-1737-47ac-9a7e-b19a80670008) Feb 15 13:48:54.177: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5095/dns-test-b6c1cb9f-1737-47ac-9a7e-b19a80670008: the server could not find the requested resource (get pods dns-test-b6c1cb9f-1737-47ac-9a7e-b19a80670008) Feb 15 13:48:54.177: INFO: Lookups using dns-5095/dns-test-b6c1cb9f-1737-47ac-9a7e-b19a80670008 failed for: [jessie_hosts@dns-querier-1.dns-test-service.dns-5095.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Feb 15 13:48:59.250: INFO: DNS probes using dns-5095/dns-test-b6c1cb9f-1737-47ac-9a7e-b19a80670008 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:48:59.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5095" for this suite. 
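What makes those getent lookups succeed is the kubelet-managed /etc/hosts: when a pod sets hostname and subdomain, the kubelet writes "podIP hostname.subdomain.<ns>.svc.cluster.local hostname" into that pod's own /etc/hosts. A minimal sketch reusing the names from this run; the probe image and sleep command are assumptions:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hosts-demo              # illustrative name
  namespace: dns-5095
spec:
  hostname: dns-querier-1
  subdomain: dns-test-service   # pairs with a headless Service of the same name
  containers:
  - name: probe
    image: busybox
    command: ["sleep", "3600"]
EOF
# The kubelet-written entries are visible directly:
kubectl exec hosts-demo -n dns-5095 -- cat /etc/hosts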
Feb 15 13:49:05.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:49:05.554: INFO: namespace dns-5095 deletion completed in 6.18747135s • [SLOW TEST:25.546 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:49:05.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:49:13.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1733" for this suite. 
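The read-only guarantee checked here comes from the container's securityContext. A minimal sketch; the pod name, image, and the probe write are assumptions:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "touch /should-fail"]
    securityContext:
      readOnlyRootFilesystem: true   # mounts the container's root filesystem read-only
EOF
kubectl logs readonly-demo      # expect: touch: /should-fail: Read-only file system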
Feb 15 13:49:59.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:49:59.894: INFO: namespace kubelet-test-1733 deletion completed in 46.14231345s • [SLOW TEST:54.338 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:49:59.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-334d3960-9bac-4e86-97bd-5407a2213983 STEP: Creating a pod to test consume configMaps Feb 15 13:50:00.004: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fb506040-35b4-4d18-b26f-673797db5765" in namespace "projected-6914" to be "success or failure" Feb 15 13:50:00.009: INFO: Pod "pod-projected-configmaps-fb506040-35b4-4d18-b26f-673797db5765": Phase="Pending", Reason="", readiness=false. Elapsed: 5.388696ms Feb 15 13:50:02.022: INFO: Pod "pod-projected-configmaps-fb506040-35b4-4d18-b26f-673797db5765": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018179979s Feb 15 13:50:04.053: INFO: Pod "pod-projected-configmaps-fb506040-35b4-4d18-b26f-673797db5765": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0490492s Feb 15 13:50:06.060: INFO: Pod "pod-projected-configmaps-fb506040-35b4-4d18-b26f-673797db5765": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056354472s Feb 15 13:50:08.081: INFO: Pod "pod-projected-configmaps-fb506040-35b4-4d18-b26f-673797db5765": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077312425s Feb 15 13:50:10.092: INFO: Pod "pod-projected-configmaps-fb506040-35b4-4d18-b26f-673797db5765": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.088532254s STEP: Saw pod success Feb 15 13:50:10.093: INFO: Pod "pod-projected-configmaps-fb506040-35b4-4d18-b26f-673797db5765" satisfied condition "success or failure" Feb 15 13:50:10.096: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-fb506040-35b4-4d18-b26f-673797db5765 container projected-configmap-volume-test: STEP: delete the pod Feb 15 13:50:10.183: INFO: Waiting for pod pod-projected-configmaps-fb506040-35b4-4d18-b26f-673797db5765 to disappear Feb 15 13:50:10.190: INFO: Pod pod-projected-configmaps-fb506040-35b4-4d18-b26f-673797db5765 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:50:10.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6914" for this suite. Feb 15 13:50:16.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:50:16.374: INFO: namespace projected-6914 deletion completed in 6.178130343s • [SLOW TEST:16.479 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:50:16.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 15 13:50:16.504: INFO: Creating deployment "nginx-deployment" Feb 15 13:50:16.515: INFO: Waiting for observed generation 1 Feb 15 13:50:18.748: INFO: Waiting for all required pods to come up Feb 15 13:50:19.601: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 15 13:50:46.013: INFO: Waiting for deployment "nginx-deployment" to complete Feb 15 13:50:46.022: INFO: Updating deployment "nginx-deployment" with a non-existent image Feb 15 13:50:46.037: INFO: Updating deployment nginx-deployment Feb 15 13:50:46.037: INFO: Waiting for observed generation 2 Feb 15 13:50:48.580: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 15 13:50:49.165: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 15 13:50:49.950: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 15 13:50:51.316: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 15 13:50:51.316: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 15 13:50:51.320: INFO: Waiting for the second rollout's 
replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 15 13:50:51.333: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Feb 15 13:50:51.333: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Feb 15 13:50:51.814: INFO: Updating deployment nginx-deployment Feb 15 13:50:51.814: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Feb 15 13:50:53.139: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 15 13:50:53.734: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 15 13:51:01.088: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6791,SelfLink:/apis/apps/v1/namespaces/deployment-6791/deployments/nginx-deployment,UID:46475c13-1db1-4d04-9613-bc1ecdac1aaf,ResourceVersion:24452722,Generation:3,CreationTimestamp:2020-02-15 13:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-15 13:50:49 +0000 UTC 2020-02-15 13:50:16 +0000 UTC ReplicaSetUpdated ReplicaSet 
"nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-02-15 13:50:52 +0000 UTC 2020-02-15 13:50:52 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 15 13:51:02.499: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6791,SelfLink:/apis/apps/v1/namespaces/deployment-6791/replicasets/nginx-deployment-55fb7cb77f,UID:3b702264-8632-417d-8d4f-b80030098dfd,ResourceVersion:24452730,Generation:3,CreationTimestamp:2020-02-15 13:50:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 46475c13-1db1-4d04-9613-bc1ecdac1aaf 0xc001284227 0xc001284228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 15 13:51:02.499: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 15 13:51:02.499: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6791,SelfLink:/apis/apps/v1/namespaces/deployment-6791/replicasets/nginx-deployment-7b8c6f4498,UID:55b26db0-0cda-44c5-be42-6ca932ec1ea9,ResourceVersion:24452718,Generation:3,CreationTimestamp:2020-02-15 13:50:16 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 46475c13-1db1-4d04-9613-bc1ecdac1aaf 0xc0012842f7 0xc0012842f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 15 13:51:04.152: INFO: Pod "nginx-deployment-55fb7cb77f-49kng" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-49kng,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-55fb7cb77f-49kng,UID:589d003b-dcb1-4b38-86c0-358b88fb3b28,ResourceVersion:24452688,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b702264-8632-417d-8d4f-b80030098dfd 0xc002db1d37 0xc002db1d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002db1db0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002db1dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.153: INFO: Pod "nginx-deployment-55fb7cb77f-7m79t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7m79t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-55fb7cb77f-7m79t,UID:126f98f9-0fda-4f38-a500-4d7b8ce66a81,ResourceVersion:24452657,Generation:0,CreationTimestamp:2020-02-15 13:50:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b702264-8632-417d-8d4f-b80030098dfd 0xc002db1e57 0xc002db1e58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002db1ec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002db1ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-15 13:50:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.154: INFO: Pod "nginx-deployment-55fb7cb77f-844jb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-844jb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-55fb7cb77f-844jb,UID:1e32fd76-9ed3-4a83-a492-78992c8c5142,ResourceVersion:24452720,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b702264-8632-417d-8d4f-b80030098dfd 0xc002db1fb7 0xc002db1fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f94030} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f94050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-15 13:50:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.154: INFO: Pod "nginx-deployment-55fb7cb77f-brcwx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-brcwx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-55fb7cb77f-brcwx,UID:de10796a-4df9-4332-b7e2-711f4a067bcc,ResourceVersion:24452694,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b702264-8632-417d-8d4f-b80030098dfd 0xc001f94127 0xc001f94128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f941a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f941c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.154: INFO: Pod "nginx-deployment-55fb7cb77f-fswkp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fswkp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-55fb7cb77f-fswkp,UID:f0d81d78-353b-4fbc-bca3-159275953970,ResourceVersion:24452629,Generation:0,CreationTimestamp:2020-02-15 13:50:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b702264-8632-417d-8d4f-b80030098dfd 0xc001f94247 0xc001f94248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f942b0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001f942d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-15 13:50:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.155: INFO: Pod "nginx-deployment-55fb7cb77f-gg4h7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gg4h7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-55fb7cb77f-gg4h7,UID:d360fbb6-4b39-4a6c-a6d9-6ccaf4e3f15b,ResourceVersion:24452634,Generation:0,CreationTimestamp:2020-02-15 13:50:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b702264-8632-417d-8d4f-b80030098dfd 0xc001f943a7 0xc001f943a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f94420} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f94440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-15 13:50:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.155: INFO: Pod "nginx-deployment-55fb7cb77f-jxdg5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jxdg5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-55fb7cb77f-jxdg5,UID:89c6d5a5-fea2-4281-a1b8-a26b47317888,ResourceVersion:24452696,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b702264-8632-417d-8d4f-b80030098dfd 0xc001f94527 0xc001f94528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f945a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f945c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.155: INFO: Pod "nginx-deployment-55fb7cb77f-nfh4t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nfh4t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-55fb7cb77f-nfh4t,UID:cecb9c3e-ebae-4156-90f1-fce10c8d1e37,ResourceVersion:24452716,Generation:0,CreationTimestamp:2020-02-15 
13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b702264-8632-417d-8d4f-b80030098dfd 0xc001f94647 0xc001f94648}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f946b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f946d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.155: INFO: Pod "nginx-deployment-55fb7cb77f-q7zhl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q7zhl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-55fb7cb77f-q7zhl,UID:9f2d673e-2e94-4f88-9664-deed7534458b,ResourceVersion:24452658,Generation:0,CreationTimestamp:2020-02-15 13:50:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b702264-8632-417d-8d4f-b80030098dfd 0xc001f94757 0xc001f94758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f947d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f947f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-15 13:50:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.156: INFO: Pod "nginx-deployment-55fb7cb77f-rctzx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rctzx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-55fb7cb77f-rctzx,UID:aee6769e-57df-4c1b-b82f-e307f4d22626,ResourceVersion:24452723,Generation:0,CreationTimestamp:2020-02-15 13:50:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b702264-8632-417d-8d4f-b80030098dfd 0xc001f948c7 0xc001f948c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f94a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f94a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-15 13:50:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.156: INFO: Pod "nginx-deployment-55fb7cb77f-rskwz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rskwz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-55fb7cb77f-rskwz,UID:b6d0cbcc-575c-4418-b100-078e61275112,ResourceVersion:24452624,Generation:0,CreationTimestamp:2020-02-15 13:50:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b702264-8632-417d-8d4f-b80030098dfd 0xc001f94de7 0xc001f94de8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f94f80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f94fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-15 13:50:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.156: INFO: Pod "nginx-deployment-55fb7cb77f-tdjtf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tdjtf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-55fb7cb77f-tdjtf,UID:93b209b5-0565-4ea4-b53f-dad4eacc9367,ResourceVersion:24452692,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b702264-8632-417d-8d4f-b80030098dfd 0xc001f95377 0xc001f95378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f95580} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f955a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.157: INFO: Pod "nginx-deployment-55fb7cb77f-zsjdt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zsjdt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-55fb7cb77f-zsjdt,UID:4c6a1181-9575-495c-8e99-4d335db340a1,ResourceVersion:24452697,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b702264-8632-417d-8d4f-b80030098dfd 0xc001f95627 0xc001f95628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f956a0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001f956c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.157: INFO: Pod "nginx-deployment-7b8c6f4498-49nd9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-49nd9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-49nd9,UID:fec3a6c2-86ee-4185-b968-9d543b782ea5,ResourceVersion:24452712,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc001f95747 0xc001f95748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f957b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f957d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.157: INFO: Pod "nginx-deployment-7b8c6f4498-5mggb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5mggb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-5mggb,UID:82086d1d-d7bf-4d8e-81e2-d5d1b0846907,ResourceVersion:24452564,Generation:0,CreationTimestamp:2020-02-15 13:50:16 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc001f95857 0xc001f95858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f958d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f958f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-15 13:50:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 13:50:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4a77ef3d56cac9b4bdf3893d7c67ae1732da5b973eb83b492c041e3905739bb3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.158: INFO: Pod "nginx-deployment-7b8c6f4498-75d82" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-75d82,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-75d82,UID:147443be-6c79-4cf9-b3ee-205dc989a9cf,ResourceVersion:24452714,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc001f959e7 
0xc001f959e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f95a50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f95a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.158: INFO: Pod "nginx-deployment-7b8c6f4498-85xwg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-85xwg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-85xwg,UID:131a68fa-bc73-4b73-8204-4aeab3e24f6a,ResourceVersion:24452584,Generation:0,CreationTimestamp:2020-02-15 13:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc001f95af7 0xc001f95af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f95b60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f95b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-02-15 13:50:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 13:50:42 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d98cb6cca8f6a191db6f0d1321e984dad5dd778c08f99676d114f899ed609ed5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.158: INFO: Pod "nginx-deployment-7b8c6f4498-86b74" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-86b74,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-86b74,UID:f82955c5-4ce0-4a70-affd-f28be3f22aa3,ResourceVersion:24452693,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc001f95c57 0xc001f95c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f95cc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f95ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.158: INFO: Pod "nginx-deployment-7b8c6f4498-8jtgs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8jtgs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-8jtgs,UID:b9f0902c-89c1-4f87-9741-86ba7718c71f,ResourceVersion:24452695,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc001f95d67 0xc001f95d68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f95de0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001f95e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.159: INFO: Pod "nginx-deployment-7b8c6f4498-8rvkg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8rvkg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-8rvkg,UID:fc0bea0f-2837-433e-a6ef-00db3024323e,ResourceVersion:24452569,Generation:0,CreationTimestamp:2020-02-15 13:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc001f95e87 0xc001f95e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f95f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f95f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-15 13:50:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 13:50:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker://dc58650fd5552a38db244b1873cbcfc9e00dd03cdf2bb27e37092139253c079c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.159: INFO: Pod "nginx-deployment-7b8c6f4498-d8nh7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d8nh7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-d8nh7,UID:735e47e4-f560-46ce-a6e0-8816666eac63,ResourceVersion:24452732,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc001f95ff7 0xc001f95ff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a76070} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a76090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-15 13:50:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.160: INFO: Pod "nginx-deployment-7b8c6f4498-ghztl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ghztl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-ghztl,UID:d1dcf758-7eea-4eab-8c1b-506d0e659419,ResourceVersion:24452557,Generation:0,CreationTimestamp:2020-02-15 13:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc002a76157 0xc002a76158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a761d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a761f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-15 13:50:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 13:50:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b882d102bb7a172134812816c3ddf2575392272b3ec92647788c6e1b28bc30e1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.160: INFO: Pod "nginx-deployment-7b8c6f4498-gxkds" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gxkds,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-gxkds,UID:e2942cd5-abdf-4721-9509-9bdf4d384865,ResourceVersion:24452711,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc002a762d7 0xc002a762d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a76350} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a76370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.161: INFO: Pod "nginx-deployment-7b8c6f4498-kclp2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kclp2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-kclp2,UID:69fe856f-c5d9-4b90-9142-12dc542eb1ae,ResourceVersion:24452592,Generation:0,CreationTimestamp:2020-02-15 13:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc002a763f7 0xc002a763f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a76460} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a76480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-15 13:50:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 13:50:42 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://34f7e66a4439248464578c9918aade879f7c771b972bb7985c85ccca8525a20e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.161: INFO: Pod "nginx-deployment-7b8c6f4498-klk9c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-klk9c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-klk9c,UID:7a22a2d7-9cf3-4750-a4ff-99fccefcb3ee,ResourceVersion:24452726,Generation:0,CreationTimestamp:2020-02-15 13:50:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc002a76557 0xc002a76558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a765c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a765e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-15 13:50:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.161: INFO: Pod "nginx-deployment-7b8c6f4498-krdbf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-krdbf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-krdbf,UID:ea6deacf-291c-43a1-a849-a8bfefac52ab,ResourceVersion:24452568,Generation:0,CreationTimestamp:2020-02-15 13:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc002a766a7 0xc002a766a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a76710} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a76730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-15 13:50:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 13:50:40 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f46bca9b5565b49edbba3af970b03c221d3a8e5cc57453d7c1004c2a8ef8815f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.162: INFO: Pod "nginx-deployment-7b8c6f4498-lcpmd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lcpmd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-lcpmd,UID:efaa0d2e-9e02-4bf5-9265-6be53ec16862,ResourceVersion:24452551,Generation:0,CreationTimestamp:2020-02-15 13:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc002a76807 0xc002a76808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a76880} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a768a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-15 13:50:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 13:50:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f1b87d39694ee77b0aa9fe4203278723458af4c91107d269a5353a73eb04ebd5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.162: INFO: Pod "nginx-deployment-7b8c6f4498-lq2wj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lq2wj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-lq2wj,UID:fafb9e83-c9c5-4500-8e2a-d048cce37a56,ResourceVersion:24452575,Generation:0,CreationTimestamp:2020-02-15 13:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc002a76977 0xc002a76978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a769f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a76a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:16 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-15 13:50:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 13:50:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bd4047991497918b7ad9b326d68ed0fe203b879cc993e982628b4798688ba0b3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.162: INFO: Pod "nginx-deployment-7b8c6f4498-qhdm4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qhdm4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-qhdm4,UID:fc1ad713-42ef-41c4-908c-c4ec73719f46,ResourceVersion:24452691,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc002a76ae7 0xc002a76ae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a76b60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a76b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.163: INFO: Pod "nginx-deployment-7b8c6f4498-s66kk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s66kk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-s66kk,UID:9fe64222-66db-4c21-8a81-4e42f4aa4826,ResourceVersion:24452703,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc002a76c07 0xc002a76c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a76c80} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002a76ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.163: INFO: Pod "nginx-deployment-7b8c6f4498-zbnsw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zbnsw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-zbnsw,UID:30daf3b0-2e55-4679-9cb3-386f63e62cf6,ResourceVersion:24452738,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc002a76d27 0xc002a76d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a76d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a76db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-15 13:50:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.163: INFO: Pod "nginx-deployment-7b8c6f4498-zhmpd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zhmpd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-zhmpd,UID:22cd81e8-7977-40f0-ad82-6b892c14fbba,ResourceVersion:24452715,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc002a76e77 0xc002a76e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a76ef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a76f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 15 13:51:04.163: INFO: Pod "nginx-deployment-7b8c6f4498-znkp5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-znkp5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6791,SelfLink:/api/v1/namespaces/deployment-6791/pods/nginx-deployment-7b8c6f4498-znkp5,UID:a93d3f7e-1f0c-4ae1-b65e-72a0b55683b9,ResourceVersion:24452713,Generation:0,CreationTimestamp:2020-02-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 55b26db0-0cda-44c5-be42-6ca932ec1ea9 0xc002a76f97 0xc002a76f98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r4t6h 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r4t6h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r4t6h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a77000} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a77020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 13:50:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:51:04.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6791" for this suite. 
Feb 15 13:51:55.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:51:56.082: INFO: namespace deployment-6791 deletion completed in 50.486524266s • [SLOW TEST:99.708 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:51:56.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 15 13:52:12.864: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ed52fe66-edd5-473d-83fb-e8fbae45d8b1" Feb 15 13:52:12.865: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ed52fe66-edd5-473d-83fb-e8fbae45d8b1" in namespace "pods-3290" to be "terminated due to deadline exceeded" Feb 15 13:52:12.918: INFO: Pod "pod-update-activedeadlineseconds-ed52fe66-edd5-473d-83fb-e8fbae45d8b1": Phase="Running", Reason="", readiness=true. Elapsed: 53.69065ms Feb 15 13:52:14.928: INFO: Pod "pod-update-activedeadlineseconds-ed52fe66-edd5-473d-83fb-e8fbae45d8b1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.062967997s Feb 15 13:52:14.928: INFO: Pod "pod-update-activedeadlineseconds-ed52fe66-edd5-473d-83fb-e8fbae45d8b1" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:52:14.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3290" for this suite. 
Feb 15 13:52:20.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:52:21.098: INFO: namespace pods-3290 deletion completed in 6.16155297s • [SLOW TEST:25.015 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:52:21.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ba0c7195-12bf-46c7-ab30-207b9e948fc9 STEP: Creating a pod to test consume secrets Feb 15 13:52:21.206: INFO: Waiting up to 5m0s for pod "pod-secrets-e832be51-4b19-464d-999a-468a9a7782b7" in namespace "secrets-9498" to be "success or failure" Feb 15 13:52:21.211: INFO: Pod "pod-secrets-e832be51-4b19-464d-999a-468a9a7782b7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.250239ms Feb 15 13:52:23.222: INFO: Pod "pod-secrets-e832be51-4b19-464d-999a-468a9a7782b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015840285s Feb 15 13:52:25.231: INFO: Pod "pod-secrets-e832be51-4b19-464d-999a-468a9a7782b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025204085s Feb 15 13:52:27.266: INFO: Pod "pod-secrets-e832be51-4b19-464d-999a-468a9a7782b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060040795s Feb 15 13:52:29.275: INFO: Pod "pod-secrets-e832be51-4b19-464d-999a-468a9a7782b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069246079s STEP: Saw pod success Feb 15 13:52:29.275: INFO: Pod "pod-secrets-e832be51-4b19-464d-999a-468a9a7782b7" satisfied condition "success or failure" Feb 15 13:52:29.278: INFO: Trying to get logs from node iruya-node pod pod-secrets-e832be51-4b19-464d-999a-468a9a7782b7 container secret-volume-test: STEP: delete the pod Feb 15 13:52:29.316: INFO: Waiting for pod pod-secrets-e832be51-4b19-464d-999a-468a9a7782b7 to disappear Feb 15 13:52:29.328: INFO: Pod pod-secrets-e832be51-4b19-464d-999a-468a9a7782b7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:52:29.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9498" for this suite. 
Feb 15 13:52:35.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:52:35.487: INFO: namespace secrets-9498 deletion completed in 6.123803059s • [SLOW TEST:14.389 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:52:35.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:52:41.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7402" for this suite. 
Feb 15 13:52:49.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:52:49.336: INFO: namespace watch-7402 deletion completed in 8.233118832s • [SLOW TEST:13.849 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:52:49.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 15 13:52:59.648: INFO: Waiting up to 5m0s for pod "client-envvars-16dbff87-184d-4977-ad11-8578bf665117" in namespace "pods-5804" to be "success or failure" Feb 15 13:52:59.750: INFO: Pod "client-envvars-16dbff87-184d-4977-ad11-8578bf665117": Phase="Pending", Reason="", readiness=false. Elapsed: 101.678719ms Feb 15 13:53:01.761: INFO: Pod "client-envvars-16dbff87-184d-4977-ad11-8578bf665117": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112551061s Feb 15 13:53:03.788: INFO: Pod "client-envvars-16dbff87-184d-4977-ad11-8578bf665117": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138906117s Feb 15 13:53:05.805: INFO: Pod "client-envvars-16dbff87-184d-4977-ad11-8578bf665117": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156321274s Feb 15 13:53:07.812: INFO: Pod "client-envvars-16dbff87-184d-4977-ad11-8578bf665117": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.163469698s STEP: Saw pod success Feb 15 13:53:07.812: INFO: Pod "client-envvars-16dbff87-184d-4977-ad11-8578bf665117" satisfied condition "success or failure" Feb 15 13:53:07.816: INFO: Trying to get logs from node iruya-node pod client-envvars-16dbff87-184d-4977-ad11-8578bf665117 container env3cont: STEP: delete the pod Feb 15 13:53:08.041: INFO: Waiting for pod client-envvars-16dbff87-184d-4977-ad11-8578bf665117 to disappear Feb 15 13:53:08.046: INFO: Pod client-envvars-16dbff87-184d-4977-ad11-8578bf665117 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:53:08.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5804" for this suite. 
Feb 15 13:54:00.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:54:00.277: INFO: namespace pods-5804 deletion completed in 52.226795399s • [SLOW TEST:70.941 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:54:00.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-502f34f5-44b5-4c8d-b8bc-75efbfb5e5d2 STEP: Creating a pod to test consume configMaps Feb 15 13:54:00.459: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-56e10db0-1f6d-4609-b83f-2400a6841c88" in namespace "projected-9156" to be "success or failure" Feb 15 13:54:00.480: INFO: Pod "pod-projected-configmaps-56e10db0-1f6d-4609-b83f-2400a6841c88": Phase="Pending", Reason="", readiness=false. Elapsed: 19.673402ms Feb 15 13:54:02.495: INFO: Pod "pod-projected-configmaps-56e10db0-1f6d-4609-b83f-2400a6841c88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034403854s Feb 15 13:54:04.510: INFO: Pod "pod-projected-configmaps-56e10db0-1f6d-4609-b83f-2400a6841c88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049571497s Feb 15 13:54:06.523: INFO: Pod "pod-projected-configmaps-56e10db0-1f6d-4609-b83f-2400a6841c88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06304818s Feb 15 13:54:08.545: INFO: Pod "pod-projected-configmaps-56e10db0-1f6d-4609-b83f-2400a6841c88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.084688188s STEP: Saw pod success Feb 15 13:54:08.546: INFO: Pod "pod-projected-configmaps-56e10db0-1f6d-4609-b83f-2400a6841c88" satisfied condition "success or failure" Feb 15 13:54:08.562: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-56e10db0-1f6d-4609-b83f-2400a6841c88 container projected-configmap-volume-test: STEP: delete the pod Feb 15 13:54:08.708: INFO: Waiting for pod pod-projected-configmaps-56e10db0-1f6d-4609-b83f-2400a6841c88 to disappear Feb 15 13:54:08.721: INFO: Pod pod-projected-configmaps-56e10db0-1f6d-4609-b83f-2400a6841c88 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:54:08.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9156" for this suite. 
Feb 15 13:54:14.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:54:15.084: INFO: namespace projected-9156 deletion completed in 6.354518207s • [SLOW TEST:14.806 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:54:15.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1910.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1910.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1910.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1910.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 15 13:54:29.272: INFO: File wheezy_udp@dns-test-service-3.dns-1910.svc.cluster.local from pod dns-1910/dns-test-f3d360be-af6f-4c4a-8ad3-43d517180eb5 contains '' instead of 'foo.example.com.' Feb 15 13:54:29.279: INFO: File jessie_udp@dns-test-service-3.dns-1910.svc.cluster.local from pod dns-1910/dns-test-f3d360be-af6f-4c4a-8ad3-43d517180eb5 contains '' instead of 'foo.example.com.' 
Feb 15 13:54:29.279: INFO: Lookups using dns-1910/dns-test-f3d360be-af6f-4c4a-8ad3-43d517180eb5 failed for: [wheezy_udp@dns-test-service-3.dns-1910.svc.cluster.local jessie_udp@dns-test-service-3.dns-1910.svc.cluster.local] Feb 15 13:54:34.300: INFO: DNS probes using dns-test-f3d360be-af6f-4c4a-8ad3-43d517180eb5 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1910.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1910.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1910.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1910.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 15 13:54:50.533: INFO: File wheezy_udp@dns-test-service-3.dns-1910.svc.cluster.local from pod dns-1910/dns-test-63e41148-7d5e-4a8a-9b5f-3438513702fb contains '' instead of 'bar.example.com.' Feb 15 13:54:50.540: INFO: File jessie_udp@dns-test-service-3.dns-1910.svc.cluster.local from pod dns-1910/dns-test-63e41148-7d5e-4a8a-9b5f-3438513702fb contains '' instead of 'bar.example.com.' Feb 15 13:54:50.540: INFO: Lookups using dns-1910/dns-test-63e41148-7d5e-4a8a-9b5f-3438513702fb failed for: [wheezy_udp@dns-test-service-3.dns-1910.svc.cluster.local jessie_udp@dns-test-service-3.dns-1910.svc.cluster.local] Feb 15 13:54:55.557: INFO: File wheezy_udp@dns-test-service-3.dns-1910.svc.cluster.local from pod dns-1910/dns-test-63e41148-7d5e-4a8a-9b5f-3438513702fb contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 15 13:54:55.563: INFO: File jessie_udp@dns-test-service-3.dns-1910.svc.cluster.local from pod dns-1910/dns-test-63e41148-7d5e-4a8a-9b5f-3438513702fb contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 15 13:54:55.563: INFO: Lookups using dns-1910/dns-test-63e41148-7d5e-4a8a-9b5f-3438513702fb failed for: [wheezy_udp@dns-test-service-3.dns-1910.svc.cluster.local jessie_udp@dns-test-service-3.dns-1910.svc.cluster.local] Feb 15 13:55:00.652: INFO: File jessie_udp@dns-test-service-3.dns-1910.svc.cluster.local from pod dns-1910/dns-test-63e41148-7d5e-4a8a-9b5f-3438513702fb contains 'foo.example.com. ' instead of 'bar.example.com.' 
Feb 15 13:55:00.653: INFO: Lookups using dns-1910/dns-test-63e41148-7d5e-4a8a-9b5f-3438513702fb failed for: [jessie_udp@dns-test-service-3.dns-1910.svc.cluster.local] Feb 15 13:55:05.562: INFO: DNS probes using dns-test-63e41148-7d5e-4a8a-9b5f-3438513702fb succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1910.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1910.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1910.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1910.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 15 13:55:25.338: INFO: File wheezy_udp@dns-test-service-3.dns-1910.svc.cluster.local from pod dns-1910/dns-test-dedb844c-124b-43fc-aed2-c5f0ddda4ba4 contains '' instead of '10.109.212.14' Feb 15 13:55:25.366: INFO: File jessie_udp@dns-test-service-3.dns-1910.svc.cluster.local from pod dns-1910/dns-test-dedb844c-124b-43fc-aed2-c5f0ddda4ba4 contains '' instead of '10.109.212.14' Feb 15 13:55:25.366: INFO: Lookups using dns-1910/dns-test-dedb844c-124b-43fc-aed2-c5f0ddda4ba4 failed for: [wheezy_udp@dns-test-service-3.dns-1910.svc.cluster.local jessie_udp@dns-test-service-3.dns-1910.svc.cluster.local] Feb 15 13:55:30.406: INFO: DNS probes using dns-test-dedb844c-124b-43fc-aed2-c5f0ddda4ba4 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:55:30.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1910" for this suite. 
Feb 15 13:55:38.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:55:38.835: INFO: namespace dns-1910 deletion completed in 8.16126149s • [SLOW TEST:83.750 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:55:38.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-2183/configmap-test-b5f7c42c-1f20-4c06-b9f9-6ad51a643690 STEP: Creating a pod to test consume configMaps Feb 15 13:55:38.983: INFO: Waiting up to 5m0s for pod "pod-configmaps-53a49a41-fd78-4224-9d7c-65732323e72f" in namespace "configmap-2183" to be "success or failure" Feb 15 13:55:39.016: INFO: Pod "pod-configmaps-53a49a41-fd78-4224-9d7c-65732323e72f": Phase="Pending", Reason="", readiness=false. Elapsed: 33.096948ms Feb 15 13:55:41.027: INFO: Pod "pod-configmaps-53a49a41-fd78-4224-9d7c-65732323e72f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043881842s Feb 15 13:55:43.033: INFO: Pod "pod-configmaps-53a49a41-fd78-4224-9d7c-65732323e72f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050421465s Feb 15 13:55:45.043: INFO: Pod "pod-configmaps-53a49a41-fd78-4224-9d7c-65732323e72f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059706172s Feb 15 13:55:47.055: INFO: Pod "pod-configmaps-53a49a41-fd78-4224-9d7c-65732323e72f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072031421s STEP: Saw pod success Feb 15 13:55:47.055: INFO: Pod "pod-configmaps-53a49a41-fd78-4224-9d7c-65732323e72f" satisfied condition "success or failure" Feb 15 13:55:47.058: INFO: Trying to get logs from node iruya-node pod pod-configmaps-53a49a41-fd78-4224-9d7c-65732323e72f container env-test: STEP: delete the pod Feb 15 13:55:47.262: INFO: Waiting for pod pod-configmaps-53a49a41-fd78-4224-9d7c-65732323e72f to disappear Feb 15 13:55:47.279: INFO: Pod pod-configmaps-53a49a41-fd78-4224-9d7c-65732323e72f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:55:47.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2183" for this suite. 
Feb 15 13:55:53.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:55:53.502: INFO: namespace configmap-2183 deletion completed in 6.211848541s • [SLOW TEST:14.666 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:55:53.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 15 13:55:53.651: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1416,SelfLink:/api/v1/namespaces/watch-1416/configmaps/e2e-watch-test-resource-version,UID:4b4b7992-0599-47bf-8178-9d1567f1c866,ResourceVersion:24453715,Generation:0,CreationTimestamp:2020-02-15 13:55:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 15 13:55:53.652: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1416,SelfLink:/api/v1/namespaces/watch-1416/configmaps/e2e-watch-test-resource-version,UID:4b4b7992-0599-47bf-8178-9d1567f1c866,ResourceVersion:24453716,Generation:0,CreationTimestamp:2020-02-15 13:55:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:55:53.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1416" for this suite. 
Feb 15 13:55:59.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:55:59.876: INFO: namespace watch-1416 deletion completed in 6.219662958s • [SLOW TEST:6.374 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:55:59.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4870.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4870.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4870.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4870.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4870.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4870.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4870.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4870.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4870.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4870.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4870.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4870.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4870.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 184.178.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.178.184_udp@PTR;check="$$(dig +tcp +noall +answer +search 184.178.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.178.184_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4870.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4870.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4870.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4870.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4870.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4870.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4870.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4870.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4870.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4870.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4870.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4870.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4870.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 184.178.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.178.184_udp@PTR;check="$$(dig +tcp +noall +answer +search 184.178.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.178.184_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 15 13:56:12.236: INFO: Unable to read wheezy_udp@dns-test-service.dns-4870.svc.cluster.local from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.242: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4870.svc.cluster.local from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.247: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4870.svc.cluster.local from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.251: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4870.svc.cluster.local from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.256: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-4870.svc.cluster.local from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.260: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-4870.svc.cluster.local from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.268: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.272: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.276: INFO: Unable to read 10.108.178.184_udp@PTR from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.280: INFO: Unable to read 10.108.178.184_tcp@PTR from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.284: INFO: Unable to read jessie_udp@dns-test-service.dns-4870.svc.cluster.local from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.289: INFO: Unable to read jessie_tcp@dns-test-service.dns-4870.svc.cluster.local from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.293: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4870.svc.cluster.local from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the 
server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.296: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4870.svc.cluster.local from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.300: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-4870.svc.cluster.local from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.304: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-4870.svc.cluster.local from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.309: INFO: Unable to read jessie_udp@PodARecord from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.319: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.329: INFO: Unable to read 10.108.178.184_udp@PTR from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.339: INFO: Unable to read 10.108.178.184_tcp@PTR from pod dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d: the server could not find the requested resource (get pods dns-test-3536a075-8c49-413b-87b8-09fc5517b63d) Feb 15 13:56:12.339: INFO: Lookups using dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d failed for: [wheezy_udp@dns-test-service.dns-4870.svc.cluster.local wheezy_tcp@dns-test-service.dns-4870.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4870.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4870.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-4870.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-4870.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.108.178.184_udp@PTR 10.108.178.184_tcp@PTR jessie_udp@dns-test-service.dns-4870.svc.cluster.local jessie_tcp@dns-test-service.dns-4870.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4870.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4870.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-4870.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-4870.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.108.178.184_udp@PTR 10.108.178.184_tcp@PTR] Feb 15 13:56:17.506: INFO: DNS probes using dns-4870/dns-test-3536a075-8c49-413b-87b8-09fc5517b63d succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:56:17.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4870" for this suite. 
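Each of the two probe pods (wheezy and jessie) runs the shell loop shown above: every dig that returns a non-empty answer writes an OK marker under /results, and the framework polls those markers. The burst of "Unable to read ... the server could not find the requested resource" lines at 13:56:12 is therefore a normal transient state, the probe pod simply had not produced results yet, and the run converges to "DNS probes ... succeeded" five seconds later. The individual queries can be issued by hand from any pod whose image ships dig (the dns-4870 namespace suffix is generated fresh for each run):

dig +short dns-test-service.dns-4870.svc.cluster.local A          # A record over UDP
dig +tcp +short dns-test-service.dns-4870.svc.cluster.local A     # same query over TCP
dig +short _http._tcp.dns-test-service.dns-4870.svc.cluster.local SRV
dig +short -x 10.108.178.184                                      # PTR for the ClusterIP seen in this run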
Feb 15 13:56:23.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:56:24.075: INFO: namespace dns-4870 deletion completed in 6.113563732s • [SLOW TEST:24.197 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:56:24.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Feb 15 13:56:24.149: INFO: Waiting up to 5m0s for pod "client-containers-68214c84-12b9-43be-aa18-7d14eed7c838" in namespace "containers-3547" to be "success or failure" Feb 15 13:56:24.156: INFO: Pod "client-containers-68214c84-12b9-43be-aa18-7d14eed7c838": Phase="Pending", Reason="", readiness=false. Elapsed: 6.33612ms Feb 15 13:56:26.164: INFO: Pod "client-containers-68214c84-12b9-43be-aa18-7d14eed7c838": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014455408s Feb 15 13:56:28.172: INFO: Pod "client-containers-68214c84-12b9-43be-aa18-7d14eed7c838": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022710059s Feb 15 13:56:30.202: INFO: Pod "client-containers-68214c84-12b9-43be-aa18-7d14eed7c838": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052342045s Feb 15 13:56:32.220: INFO: Pod "client-containers-68214c84-12b9-43be-aa18-7d14eed7c838": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070041578s STEP: Saw pod success Feb 15 13:56:32.220: INFO: Pod "client-containers-68214c84-12b9-43be-aa18-7d14eed7c838" satisfied condition "success or failure" Feb 15 13:56:32.227: INFO: Trying to get logs from node iruya-node pod client-containers-68214c84-12b9-43be-aa18-7d14eed7c838 container test-container: STEP: delete the pod Feb 15 13:56:32.306: INFO: Waiting for pod client-containers-68214c84-12b9-43be-aa18-7d14eed7c838 to disappear Feb 15 13:56:32.386: INFO: Pod client-containers-68214c84-12b9-43be-aa18-7d14eed7c838 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:56:32.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3547" for this suite. 
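This case creates a container that sets neither command nor args and asserts that the image's own entrypoint runs. The mapping is: the Kubernetes command field overrides the image ENTRYPOINT, args overrides CMD, and leaving both unset uses the image defaults unchanged. A minimal sketch (pod name and image are illustrative, not from this run):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # no command: / args: here, so the image's ENTRYPOINT and CMD run as built
EOF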
Feb 15 13:56:38.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:56:38.566: INFO: namespace containers-3547 deletion completed in 6.173312416s • [SLOW TEST:14.491 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:56:38.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 15 13:56:38.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-3177' Feb 15 13:56:40.987: INFO: stderr: "" Feb 15 13:56:40.988: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Feb 15 13:56:51.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-3177 -o json' Feb 15 13:56:51.205: INFO: stderr: "" Feb 15 13:56:51.205: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-15T13:56:40Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-3177\",\n \"resourceVersion\": \"24453883\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3177/pods/e2e-test-nginx-pod\",\n \"uid\": \"a3bb6620-02f5-49be-983c-da03272cb5e8\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-l29cc\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n 
\"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-l29cc\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-l29cc\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-15T13:56:41Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-15T13:56:48Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-15T13:56:48Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-15T13:56:40Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://7e9b8277dd3a91f7bd404cb2eea286bcbcad2ddf6eaa307e1ff25e5c7afdaa03\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-15T13:56:47Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-15T13:56:41Z\"\n }\n}\n" STEP: replace the image in the pod Feb 15 13:56:51.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3177' Feb 15 13:56:51.724: INFO: stderr: "" Feb 15 13:56:51.724: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Feb 15 13:56:51.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3177' Feb 15 13:57:06.609: INFO: stderr: "" Feb 15 13:57:06.609: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:57:06.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3177" for this suite. 
Feb 15 13:57:12.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:57:12.805: INFO: namespace kubectl-3177 deletion completed in 6.184610122s • [SLOW TEST:34.238 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:57:12.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Feb 15 13:57:12.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8469' Feb 15 13:57:13.500: INFO: stderr: "" Feb 15 13:57:13.500: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Feb 15 13:57:14.513: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:57:14.513: INFO: Found 0 / 1 Feb 15 13:57:15.515: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:57:15.515: INFO: Found 0 / 1 Feb 15 13:57:16.524: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:57:16.525: INFO: Found 0 / 1 Feb 15 13:57:17.508: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:57:17.508: INFO: Found 0 / 1 Feb 15 13:57:18.518: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:57:18.518: INFO: Found 0 / 1 Feb 15 13:57:19.522: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:57:19.522: INFO: Found 0 / 1 Feb 15 13:57:20.522: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:57:20.522: INFO: Found 0 / 1 Feb 15 13:57:21.516: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:57:21.517: INFO: Found 1 / 1 Feb 15 13:57:21.517: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 15 13:57:21.523: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:57:21.523: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 15 13:57:21.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-s4q6x --namespace=kubectl-8469 -p {"metadata":{"annotations":{"x":"y"}}}' Feb 15 13:57:21.731: INFO: stderr: "" Feb 15 13:57:21.732: INFO: stdout: "pod/redis-master-s4q6x patched\n" STEP: checking annotations Feb 15 13:57:21.782: INFO: Selector matched 1 pods for map[app:redis] Feb 15 13:57:21.782: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
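The patch command above adds the annotation; the "checking annotations" step then reads it back through the API. By hand, with the pod name and namespace from this run:

kubectl patch pod redis-master-s4q6x -n kubectl-8469 -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod redis-master-s4q6x -n kubectl-8469 -o jsonpath='{.metadata.annotations.x}'   # prints: y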
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:57:21.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8469" for this suite. Feb 15 13:57:43.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:57:43.968: INFO: namespace kubectl-8469 deletion completed in 22.180235265s • [SLOW TEST:31.163 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:57:43.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 15 13:57:44.119: INFO: Waiting up to 5m0s for pod "downward-api-eb56ac7b-c799-43d6-bb51-a3db73dc49ec" in namespace "downward-api-5186" to be "success or failure" Feb 15 13:57:44.154: INFO: Pod "downward-api-eb56ac7b-c799-43d6-bb51-a3db73dc49ec": Phase="Pending", Reason="", readiness=false. Elapsed: 35.243916ms Feb 15 13:57:46.163: INFO: Pod "downward-api-eb56ac7b-c799-43d6-bb51-a3db73dc49ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044160234s Feb 15 13:57:48.189: INFO: Pod "downward-api-eb56ac7b-c799-43d6-bb51-a3db73dc49ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069925008s Feb 15 13:57:50.220: INFO: Pod "downward-api-eb56ac7b-c799-43d6-bb51-a3db73dc49ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101530304s Feb 15 13:57:52.245: INFO: Pod "downward-api-eb56ac7b-c799-43d6-bb51-a3db73dc49ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.126548967s STEP: Saw pod success Feb 15 13:57:52.246: INFO: Pod "downward-api-eb56ac7b-c799-43d6-bb51-a3db73dc49ec" satisfied condition "success or failure" Feb 15 13:57:52.255: INFO: Trying to get logs from node iruya-node pod downward-api-eb56ac7b-c799-43d6-bb51-a3db73dc49ec container dapi-container: STEP: delete the pod Feb 15 13:57:52.552: INFO: Waiting for pod downward-api-eb56ac7b-c799-43d6-bb51-a3db73dc49ec to disappear Feb 15 13:57:52.566: INFO: Pod downward-api-eb56ac7b-c799-43d6-bb51-a3db73dc49ec no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:57:52.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5186" for this suite. 
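What this case asserts: when a container declares no resources.limits, downward-API environment variables that reference limits.cpu and limits.memory fall back to the node's allocatable CPU and memory rather than failing. A minimal sketch (pod and variable names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-limits-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep _LIMIT"]
    # no resources.limits declared, so both variables resolve to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
kubectl logs downward-limits-demo   # shows CPU_LIMIT / MEMORY_LIMIT once the pod has run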
Feb 15 13:57:58.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:57:58.775: INFO: namespace downward-api-5186 deletion completed in 6.194885089s • [SLOW TEST:14.806 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:57:58.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 15 13:57:58.944: INFO: Waiting up to 5m0s for pod "pod-cca17e9f-31c0-473d-9605-fc6c6a7bb113" in namespace "emptydir-5408" to be "success or failure" Feb 15 13:57:58.956: INFO: Pod "pod-cca17e9f-31c0-473d-9605-fc6c6a7bb113": Phase="Pending", Reason="", readiness=false. Elapsed: 12.136038ms Feb 15 13:58:00.971: INFO: Pod "pod-cca17e9f-31c0-473d-9605-fc6c6a7bb113": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027009557s Feb 15 13:58:03.010: INFO: Pod "pod-cca17e9f-31c0-473d-9605-fc6c6a7bb113": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065579784s Feb 15 13:58:05.025: INFO: Pod "pod-cca17e9f-31c0-473d-9605-fc6c6a7bb113": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08041724s Feb 15 13:58:07.038: INFO: Pod "pod-cca17e9f-31c0-473d-9605-fc6c6a7bb113": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093809797s STEP: Saw pod success Feb 15 13:58:07.038: INFO: Pod "pod-cca17e9f-31c0-473d-9605-fc6c6a7bb113" satisfied condition "success or failure" Feb 15 13:58:07.045: INFO: Trying to get logs from node iruya-node pod pod-cca17e9f-31c0-473d-9605-fc6c6a7bb113 container test-container: STEP: delete the pod Feb 15 13:58:07.177: INFO: Waiting for pod pod-cca17e9f-31c0-473d-9605-fc6c6a7bb113 to disappear Feb 15 13:58:07.184: INFO: Pod pod-cca17e9f-31c0-473d-9605-fc6c6a7bb113 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:58:07.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5408" for this suite. 
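The (non-root,0777,default) triple in the test name decodes as: run the container as a non-root UID, create the test file with mode 0777, and use the default emptyDir medium (the node's disk, as opposed to medium: Memory, which is tmpfs). A hand-rolled equivalent, with an illustrative UID and names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # "non-root"
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/volume/f && chmod 0777 /mnt/volume/f && ls -ln /mnt/volume/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}                   # "default" medium: backed by the node filesystem
EOF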
Feb 15 13:58:13.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:58:13.395: INFO: namespace emptydir-5408 deletion completed in 6.204236475s • [SLOW TEST:14.619 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:58:13.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-df7235f4-a829-4557-ab5a-6f0d3aeb1525 STEP: Creating a pod to test consume secrets Feb 15 13:58:13.527: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-45da0ae4-352c-4346-9aa0-978d7159fae1" in namespace "projected-7954" to be "success or failure" Feb 15 13:58:13.559: INFO: Pod "pod-projected-secrets-45da0ae4-352c-4346-9aa0-978d7159fae1": Phase="Pending", Reason="", readiness=false. Elapsed: 31.769661ms Feb 15 13:58:15.577: INFO: Pod "pod-projected-secrets-45da0ae4-352c-4346-9aa0-978d7159fae1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05028193s Feb 15 13:58:17.596: INFO: Pod "pod-projected-secrets-45da0ae4-352c-4346-9aa0-978d7159fae1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068819205s Feb 15 13:58:19.605: INFO: Pod "pod-projected-secrets-45da0ae4-352c-4346-9aa0-978d7159fae1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078262092s Feb 15 13:58:21.620: INFO: Pod "pod-projected-secrets-45da0ae4-352c-4346-9aa0-978d7159fae1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093282023s Feb 15 13:58:23.660: INFO: Pod "pod-projected-secrets-45da0ae4-352c-4346-9aa0-978d7159fae1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.132859265s STEP: Saw pod success Feb 15 13:58:23.660: INFO: Pod "pod-projected-secrets-45da0ae4-352c-4346-9aa0-978d7159fae1" satisfied condition "success or failure" Feb 15 13:58:23.667: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-45da0ae4-352c-4346-9aa0-978d7159fae1 container projected-secret-volume-test: STEP: delete the pod Feb 15 13:58:23.750: INFO: Waiting for pod pod-projected-secrets-45da0ae4-352c-4346-9aa0-978d7159fae1 to disappear Feb 15 13:58:23.990: INFO: Pod pod-projected-secrets-45da0ae4-352c-4346-9aa0-978d7159fae1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:58:23.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7954" for this suite. Feb 15 13:58:30.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:58:30.133: INFO: namespace projected-7954 deletion completed in 6.13326231s • [SLOW TEST:16.738 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:58:30.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Feb 15 13:58:30.316: INFO: Waiting up to 5m0s for pod "client-containers-ee46c316-eeda-4d04-bcd6-5684f2ee875e" in namespace "containers-8322" to be "success or failure" Feb 15 13:58:30.331: INFO: Pod "client-containers-ee46c316-eeda-4d04-bcd6-5684f2ee875e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.896357ms Feb 15 13:58:32.343: INFO: Pod "client-containers-ee46c316-eeda-4d04-bcd6-5684f2ee875e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026835848s Feb 15 13:58:34.356: INFO: Pod "client-containers-ee46c316-eeda-4d04-bcd6-5684f2ee875e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039956342s Feb 15 13:58:36.366: INFO: Pod "client-containers-ee46c316-eeda-4d04-bcd6-5684f2ee875e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049457522s Feb 15 13:58:38.380: INFO: Pod "client-containers-ee46c316-eeda-4d04-bcd6-5684f2ee875e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.063176614s STEP: Saw pod success Feb 15 13:58:38.380: INFO: Pod "client-containers-ee46c316-eeda-4d04-bcd6-5684f2ee875e" satisfied condition "success or failure" Feb 15 13:58:38.384: INFO: Trying to get logs from node iruya-node pod client-containers-ee46c316-eeda-4d04-bcd6-5684f2ee875e container test-container: STEP: delete the pod Feb 15 13:58:38.493: INFO: Waiting for pod client-containers-ee46c316-eeda-4d04-bcd6-5684f2ee875e to disappear Feb 15 13:58:38.718: INFO: Pod client-containers-ee46c316-eeda-4d04-bcd6-5684f2ee875e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:58:38.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8322" for this suite. Feb 15 13:58:44.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:58:44.887: INFO: namespace containers-8322 deletion completed in 6.161140941s • [SLOW TEST:14.753 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:58:44.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-a336ba40-cc38-45b2-b11d-ce54da180079 Feb 15 13:58:45.088: INFO: Pod name my-hostname-basic-a336ba40-cc38-45b2-b11d-ce54da180079: Found 0 pods out of 1 Feb 15 13:58:50.109: INFO: Pod name my-hostname-basic-a336ba40-cc38-45b2-b11d-ce54da180079: Found 1 pods out of 1 Feb 15 13:58:50.110: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-a336ba40-cc38-45b2-b11d-ce54da180079" are running Feb 15 13:58:54.179: INFO: Pod "my-hostname-basic-a336ba40-cc38-45b2-b11d-ce54da180079-shcp4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 13:58:45 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 13:58:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a336ba40-cc38-45b2-b11d-ce54da180079]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 13:58:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a336ba40-cc38-45b2-b11d-ce54da180079]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-02-15 13:58:45 +0000 UTC Reason: Message:}]) Feb 15 13:58:54.179: INFO: Trying to dial the pod Feb 15 13:58:59.257: INFO: Controller my-hostname-basic-a336ba40-cc38-45b2-b11d-ce54da180079: Got expected result from replica 1 [my-hostname-basic-a336ba40-cc38-45b2-b11d-ce54da180079-shcp4]: "my-hostname-basic-a336ba40-cc38-45b2-b11d-ce54da180079-shcp4", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 13:58:59.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5778" for this suite. Feb 15 13:59:05.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 13:59:05.439: INFO: namespace replication-controller-5778 deletion completed in 6.173278554s • [SLOW TEST:20.552 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 13:59:05.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-86e0be68-85a6-4fef-b3ef-e62d980f3183 STEP: Creating secret with name s-test-opt-upd-a3f46c9a-9e2a-414a-bb07-34504f0b5b7d STEP: Creating the pod STEP: Deleting secret s-test-opt-del-86e0be68-85a6-4fef-b3ef-e62d980f3183 STEP: Updating secret s-test-opt-upd-a3f46c9a-9e2a-414a-bb07-34504f0b5b7d STEP: Creating secret with name s-test-opt-create-fb88e84b-2ed9-4558-b439-41e82fe564c6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:00:41.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6949" for this suite. 
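The pod in this test mounts its secret volumes with optional: true, which is why it keeps running while one secret is deleted, another is updated, and a third is created only after the pod started; the kubelet then reconciles the mounted files on its periodic sync, and that reconciliation wait is where most of the 118 seconds go. A sketch of the pattern (all names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    secret:
      secretName: demo-secret
      optional: true               # pod starts even though demo-secret does not exist yet
EOF
kubectl create secret generic demo-secret --from-literal=key=value
# after the kubelet's next sync the file appears inside the running pod
kubectl exec optional-secret-demo -- cat /etc/creds/key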
Feb 15 14:01:03.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:01:03.779: INFO: namespace secrets-6949 deletion completed in 22.144663892s • [SLOW TEST:118.340 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:01:03.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 15 14:01:14.507: INFO: Successfully updated pod "pod-update-a5be2b89-4854-4509-a0ea-08d1d75a02e5" STEP: verifying the updated pod is in kubernetes Feb 15 14:01:14.534: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:01:14.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7994" for this suite. 
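The "updating the pod" step exercises the mutable part of a live pod: metadata such as labels can be changed freely, while nearly all of spec is immutable after creation (the container image being the main exception, as the kubectl-replace case earlier showed). An illustrative equivalent, with a made-up pod name:

kubectl patch pod pod-update-demo -p '{"metadata":{"labels":{"time":"updated"}}}'
kubectl get pod pod-update-demo --show-labels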
Feb 15 14:01:30.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:01:30.750: INFO: namespace pods-7994 deletion completed in 16.210127563s • [SLOW TEST:26.971 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:01:30.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-6bff0f88-5c2d-4c09-a145-b57ed9912e59 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-6bff0f88-5c2d-4c09-a145-b57ed9912e59 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:01:43.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7559" for this suite. 
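As with the secret-volume case above, a projected configMap volume is eventually refreshed by the kubelet after the source object changes, typically within the kubelet sync period; that lag is what the "waiting to observe update in volume" step covers here. (Files mounted via subPath are the known exception and do not receive such updates.) A sketch, with illustrative names:

kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
kubectl patch configmap demo-config -p '{"data":{"data-1":"value-2"}}'
# a short while later the projected file reflects the new value
kubectl exec projected-cm-demo -- cat /etc/config/data-1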
Feb 15 14:02:05.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:02:05.231: INFO: namespace projected-7559 deletion completed in 22.153025126s • [SLOW TEST:34.481 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:02:05.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 15 14:02:29.418: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4497 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:02:29.419: INFO: >>> kubeConfig: /root/.kube/config I0215 14:02:29.523117 8 log.go:172] (0xc0021b8840) (0xc0018b9540) Create stream I0215 14:02:29.523229 8 log.go:172] (0xc0021b8840) (0xc0018b9540) Stream added, broadcasting: 1 I0215 14:02:29.536113 8 log.go:172] (0xc0021b8840) Reply frame received for 1 I0215 14:02:29.536180 8 log.go:172] (0xc0021b8840) (0xc002a0c280) Create stream I0215 14:02:29.536197 8 log.go:172] (0xc0021b8840) (0xc002a0c280) Stream added, broadcasting: 3 I0215 14:02:29.538613 8 log.go:172] (0xc0021b8840) Reply frame received for 3 I0215 14:02:29.538653 8 log.go:172] (0xc0021b8840) (0xc00311f2c0) Create stream I0215 14:02:29.538668 8 log.go:172] (0xc0021b8840) (0xc00311f2c0) Stream added, broadcasting: 5 I0215 14:02:29.543067 8 log.go:172] (0xc0021b8840) Reply frame received for 5 I0215 14:02:29.671923 8 log.go:172] (0xc0021b8840) Data frame received for 3 I0215 14:02:29.672179 8 log.go:172] (0xc002a0c280) (3) Data frame handling I0215 14:02:29.672234 8 log.go:172] (0xc002a0c280) (3) Data frame sent I0215 14:02:29.831653 8 log.go:172] (0xc0021b8840) (0xc002a0c280) Stream removed, broadcasting: 3 I0215 14:02:29.831847 8 log.go:172] (0xc0021b8840) Data frame received for 1 I0215 14:02:29.831892 8 log.go:172] (0xc0018b9540) (1) Data frame handling I0215 14:02:29.831958 8 log.go:172] (0xc0021b8840) (0xc00311f2c0) Stream removed, broadcasting: 5 I0215 14:02:29.832003 8 log.go:172] (0xc0018b9540) (1) Data frame sent I0215 14:02:29.832024 8 log.go:172] (0xc0021b8840) (0xc0018b9540) Stream removed, broadcasting: 1 I0215 14:02:29.832043 8 log.go:172] (0xc0021b8840) Go away received I0215 14:02:29.832549 8 log.go:172] (0xc0021b8840) (0xc0018b9540) Stream removed, 
broadcasting: 1 I0215 14:02:29.832595 8 log.go:172] (0xc0021b8840) (0xc002a0c280) Stream removed, broadcasting: 3 I0215 14:02:29.832613 8 log.go:172] (0xc0021b8840) (0xc00311f2c0) Stream removed, broadcasting: 5 Feb 15 14:02:29.832: INFO: Exec stderr: "" Feb 15 14:02:29.832: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4497 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:02:29.833: INFO: >>> kubeConfig: /root/.kube/config I0215 14:02:29.935654 8 log.go:172] (0xc0029c86e0) (0xc002a0c5a0) Create stream I0215 14:02:29.935967 8 log.go:172] (0xc0029c86e0) (0xc002a0c5a0) Stream added, broadcasting: 1 I0215 14:02:29.951574 8 log.go:172] (0xc0029c86e0) Reply frame received for 1 I0215 14:02:29.951748 8 log.go:172] (0xc0029c86e0) (0xc00231a460) Create stream I0215 14:02:29.951775 8 log.go:172] (0xc0029c86e0) (0xc00231a460) Stream added, broadcasting: 3 I0215 14:02:29.958032 8 log.go:172] (0xc0029c86e0) Reply frame received for 3 I0215 14:02:29.958082 8 log.go:172] (0xc0029c86e0) (0xc001e2d5e0) Create stream I0215 14:02:29.958149 8 log.go:172] (0xc0029c86e0) (0xc001e2d5e0) Stream added, broadcasting: 5 I0215 14:02:29.966744 8 log.go:172] (0xc0029c86e0) Reply frame received for 5 I0215 14:02:30.072246 8 log.go:172] (0xc0029c86e0) Data frame received for 3 I0215 14:02:30.072359 8 log.go:172] (0xc00231a460) (3) Data frame handling I0215 14:02:30.072432 8 log.go:172] (0xc00231a460) (3) Data frame sent I0215 14:02:30.214398 8 log.go:172] (0xc0029c86e0) (0xc00231a460) Stream removed, broadcasting: 3 I0215 14:02:30.214641 8 log.go:172] (0xc0029c86e0) Data frame received for 1 I0215 14:02:30.214678 8 log.go:172] (0xc0029c86e0) (0xc001e2d5e0) Stream removed, broadcasting: 5 I0215 14:02:30.214719 8 log.go:172] (0xc002a0c5a0) (1) Data frame handling I0215 14:02:30.214741 8 log.go:172] (0xc002a0c5a0) (1) Data frame sent I0215 14:02:30.214751 8 log.go:172] (0xc0029c86e0) (0xc002a0c5a0) Stream removed, broadcasting: 1 I0215 14:02:30.214773 8 log.go:172] (0xc0029c86e0) Go away received I0215 14:02:30.215298 8 log.go:172] (0xc0029c86e0) (0xc002a0c5a0) Stream removed, broadcasting: 1 I0215 14:02:30.215309 8 log.go:172] (0xc0029c86e0) (0xc00231a460) Stream removed, broadcasting: 3 I0215 14:02:30.215315 8 log.go:172] (0xc0029c86e0) (0xc001e2d5e0) Stream removed, broadcasting: 5 Feb 15 14:02:30.215: INFO: Exec stderr: "" Feb 15 14:02:30.215: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4497 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:02:30.215: INFO: >>> kubeConfig: /root/.kube/config I0215 14:02:30.272780 8 log.go:172] (0xc002f96f20) (0xc00231aa00) Create stream I0215 14:02:30.272818 8 log.go:172] (0xc002f96f20) (0xc00231aa00) Stream added, broadcasting: 1 I0215 14:02:30.278138 8 log.go:172] (0xc002f96f20) Reply frame received for 1 I0215 14:02:30.278179 8 log.go:172] (0xc002f96f20) (0xc00311f360) Create stream I0215 14:02:30.278196 8 log.go:172] (0xc002f96f20) (0xc00311f360) Stream added, broadcasting: 3 I0215 14:02:30.280781 8 log.go:172] (0xc002f96f20) Reply frame received for 3 I0215 14:02:30.280817 8 log.go:172] (0xc002f96f20) (0xc001e2d680) Create stream I0215 14:02:30.280830 8 log.go:172] (0xc002f96f20) (0xc001e2d680) Stream added, broadcasting: 5 I0215 14:02:30.282307 8 log.go:172] (0xc002f96f20) Reply frame received for 5 I0215 14:02:30.385583 8 log.go:172] (0xc002f96f20) Data frame 
received for 3 I0215 14:02:30.385674 8 log.go:172] (0xc00311f360) (3) Data frame handling I0215 14:02:30.385690 8 log.go:172] (0xc00311f360) (3) Data frame sent I0215 14:02:30.602222 8 log.go:172] (0xc002f96f20) Data frame received for 1 I0215 14:02:30.602442 8 log.go:172] (0xc00231aa00) (1) Data frame handling I0215 14:02:30.602476 8 log.go:172] (0xc00231aa00) (1) Data frame sent I0215 14:02:30.604502 8 log.go:172] (0xc002f96f20) (0xc001e2d680) Stream removed, broadcasting: 5 I0215 14:02:30.604793 8 log.go:172] (0xc002f96f20) (0xc00231aa00) Stream removed, broadcasting: 1 I0215 14:02:30.605782 8 log.go:172] (0xc002f96f20) (0xc00311f360) Stream removed, broadcasting: 3 I0215 14:02:30.605851 8 log.go:172] (0xc002f96f20) (0xc00231aa00) Stream removed, broadcasting: 1 I0215 14:02:30.605872 8 log.go:172] (0xc002f96f20) (0xc00311f360) Stream removed, broadcasting: 3 I0215 14:02:30.605888 8 log.go:172] (0xc002f96f20) (0xc001e2d680) Stream removed, broadcasting: 5 Feb 15 14:02:30.606: INFO: Exec stderr: "" Feb 15 14:02:30.607: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4497 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:02:30.607: INFO: >>> kubeConfig: /root/.kube/config I0215 14:02:30.608805 8 log.go:172] (0xc002f96f20) Go away received I0215 14:02:30.767369 8 log.go:172] (0xc00010fa20) (0xc0031bc000) Create stream I0215 14:02:30.767722 8 log.go:172] (0xc00010fa20) (0xc0031bc000) Stream added, broadcasting: 1 I0215 14:02:30.783799 8 log.go:172] (0xc00010fa20) Reply frame received for 1 I0215 14:02:30.783892 8 log.go:172] (0xc00010fa20) (0xc00182c140) Create stream I0215 14:02:30.783927 8 log.go:172] (0xc00010fa20) (0xc00182c140) Stream added, broadcasting: 3 I0215 14:02:30.787194 8 log.go:172] (0xc00010fa20) Reply frame received for 3 I0215 14:02:30.787230 8 log.go:172] (0xc00010fa20) (0xc0016a2000) Create stream I0215 14:02:30.787247 8 log.go:172] (0xc00010fa20) (0xc0016a2000) Stream added, broadcasting: 5 I0215 14:02:30.793740 8 log.go:172] (0xc00010fa20) Reply frame received for 5 I0215 14:02:30.949841 8 log.go:172] (0xc00010fa20) Data frame received for 3 I0215 14:02:30.950006 8 log.go:172] (0xc00182c140) (3) Data frame handling I0215 14:02:30.950053 8 log.go:172] (0xc00182c140) (3) Data frame sent I0215 14:02:31.152056 8 log.go:172] (0xc00010fa20) Data frame received for 1 I0215 14:02:31.152241 8 log.go:172] (0xc0031bc000) (1) Data frame handling I0215 14:02:31.152263 8 log.go:172] (0xc0031bc000) (1) Data frame sent I0215 14:02:31.152286 8 log.go:172] (0xc00010fa20) (0xc0031bc000) Stream removed, broadcasting: 1 I0215 14:02:31.153327 8 log.go:172] (0xc00010fa20) (0xc00182c140) Stream removed, broadcasting: 3 I0215 14:02:31.153454 8 log.go:172] (0xc00010fa20) (0xc0016a2000) Stream removed, broadcasting: 5 I0215 14:02:31.153496 8 log.go:172] (0xc00010fa20) Go away received I0215 14:02:31.154442 8 log.go:172] (0xc00010fa20) (0xc0031bc000) Stream removed, broadcasting: 1 I0215 14:02:31.154950 8 log.go:172] (0xc00010fa20) (0xc00182c140) Stream removed, broadcasting: 3 I0215 14:02:31.155014 8 log.go:172] (0xc00010fa20) (0xc0016a2000) Stream removed, broadcasting: 5 Feb 15 14:02:31.155: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 15 14:02:31.155: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4497 PodName:test-pod ContainerName:busybox-3 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:02:31.155: INFO: >>> kubeConfig: /root/.kube/config I0215 14:02:31.218900 8 log.go:172] (0xc00144a000) (0xc00182c500) Create stream I0215 14:02:31.219049 8 log.go:172] (0xc00144a000) (0xc00182c500) Stream added, broadcasting: 1 I0215 14:02:31.227265 8 log.go:172] (0xc00144a000) Reply frame received for 1 I0215 14:02:31.227325 8 log.go:172] (0xc00144a000) (0xc001cd6000) Create stream I0215 14:02:31.227343 8 log.go:172] (0xc00144a000) (0xc001cd6000) Stream added, broadcasting: 3 I0215 14:02:31.230982 8 log.go:172] (0xc00144a000) Reply frame received for 3 I0215 14:02:31.231086 8 log.go:172] (0xc00144a000) (0xc0016a20a0) Create stream I0215 14:02:31.231097 8 log.go:172] (0xc00144a000) (0xc0016a20a0) Stream added, broadcasting: 5 I0215 14:02:31.233594 8 log.go:172] (0xc00144a000) Reply frame received for 5 I0215 14:02:31.311591 8 log.go:172] (0xc00144a000) Data frame received for 3 I0215 14:02:31.311640 8 log.go:172] (0xc001cd6000) (3) Data frame handling I0215 14:02:31.311661 8 log.go:172] (0xc001cd6000) (3) Data frame sent I0215 14:02:31.425287 8 log.go:172] (0xc00144a000) Data frame received for 1 I0215 14:02:31.425379 8 log.go:172] (0xc00144a000) (0xc001cd6000) Stream removed, broadcasting: 3 I0215 14:02:31.425406 8 log.go:172] (0xc00182c500) (1) Data frame handling I0215 14:02:31.425414 8 log.go:172] (0xc00182c500) (1) Data frame sent I0215 14:02:31.425437 8 log.go:172] (0xc00144a000) (0xc0016a20a0) Stream removed, broadcasting: 5 I0215 14:02:31.425499 8 log.go:172] (0xc00144a000) (0xc00182c500) Stream removed, broadcasting: 1 I0215 14:02:31.425535 8 log.go:172] (0xc00144a000) Go away received I0215 14:02:31.425755 8 log.go:172] (0xc00144a000) (0xc00182c500) Stream removed, broadcasting: 1 I0215 14:02:31.425765 8 log.go:172] (0xc00144a000) (0xc001cd6000) Stream removed, broadcasting: 3 I0215 14:02:31.425769 8 log.go:172] (0xc00144a000) (0xc0016a20a0) Stream removed, broadcasting: 5 Feb 15 14:02:31.425: INFO: Exec stderr: "" Feb 15 14:02:31.425: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4497 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:02:31.426: INFO: >>> kubeConfig: /root/.kube/config I0215 14:02:31.489563 8 log.go:172] (0xc0005e9760) (0xc0016a25a0) Create stream I0215 14:02:31.489604 8 log.go:172] (0xc0005e9760) (0xc0016a25a0) Stream added, broadcasting: 1 I0215 14:02:31.497213 8 log.go:172] (0xc0005e9760) Reply frame received for 1 I0215 14:02:31.497262 8 log.go:172] (0xc0005e9760) (0xc00182c5a0) Create stream I0215 14:02:31.497283 8 log.go:172] (0xc0005e9760) (0xc00182c5a0) Stream added, broadcasting: 3 I0215 14:02:31.499436 8 log.go:172] (0xc0005e9760) Reply frame received for 3 I0215 14:02:31.499471 8 log.go:172] (0xc0005e9760) (0xc00182c640) Create stream I0215 14:02:31.499489 8 log.go:172] (0xc0005e9760) (0xc00182c640) Stream added, broadcasting: 5 I0215 14:02:31.501064 8 log.go:172] (0xc0005e9760) Reply frame received for 5 I0215 14:02:31.596032 8 log.go:172] (0xc0005e9760) Data frame received for 3 I0215 14:02:31.596133 8 log.go:172] (0xc00182c5a0) (3) Data frame handling I0215 14:02:31.596155 8 log.go:172] (0xc00182c5a0) (3) Data frame sent I0215 14:02:31.725121 8 log.go:172] (0xc0005e9760) Data frame received for 1 I0215 14:02:31.725203 8 log.go:172] (0xc0005e9760) (0xc00182c640) Stream removed, broadcasting: 5 I0215 14:02:31.725246 8 log.go:172] (0xc0016a25a0) (1) Data frame 
handling I0215 14:02:31.725269 8 log.go:172] (0xc0005e9760) (0xc00182c5a0) Stream removed, broadcasting: 3 I0215 14:02:31.725297 8 log.go:172] (0xc0016a25a0) (1) Data frame sent I0215 14:02:31.725311 8 log.go:172] (0xc0005e9760) (0xc0016a25a0) Stream removed, broadcasting: 1 I0215 14:02:31.725330 8 log.go:172] (0xc0005e9760) Go away received I0215 14:02:31.725508 8 log.go:172] (0xc0005e9760) (0xc0016a25a0) Stream removed, broadcasting: 1 I0215 14:02:31.725519 8 log.go:172] (0xc0005e9760) (0xc00182c5a0) Stream removed, broadcasting: 3 I0215 14:02:31.725527 8 log.go:172] (0xc0005e9760) (0xc00182c640) Stream removed, broadcasting: 5 Feb 15 14:02:31.725: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 15 14:02:31.725: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4497 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:02:31.725: INFO: >>> kubeConfig: /root/.kube/config I0215 14:02:31.773970 8 log.go:172] (0xc00066e4d0) (0xc0016a2c80) Create stream I0215 14:02:31.774022 8 log.go:172] (0xc00066e4d0) (0xc0016a2c80) Stream added, broadcasting: 1 I0215 14:02:31.779844 8 log.go:172] (0xc00066e4d0) Reply frame received for 1 I0215 14:02:31.779899 8 log.go:172] (0xc00066e4d0) (0xc0016a2d20) Create stream I0215 14:02:31.779916 8 log.go:172] (0xc00066e4d0) (0xc0016a2d20) Stream added, broadcasting: 3 I0215 14:02:31.781092 8 log.go:172] (0xc00066e4d0) Reply frame received for 3 I0215 14:02:31.781136 8 log.go:172] (0xc00066e4d0) (0xc0031bc0a0) Create stream I0215 14:02:31.781152 8 log.go:172] (0xc00066e4d0) (0xc0031bc0a0) Stream added, broadcasting: 5 I0215 14:02:31.782479 8 log.go:172] (0xc00066e4d0) Reply frame received for 5 I0215 14:02:31.997231 8 log.go:172] (0xc00066e4d0) Data frame received for 3 I0215 14:02:31.997381 8 log.go:172] (0xc0016a2d20) (3) Data frame handling I0215 14:02:31.997419 8 log.go:172] (0xc0016a2d20) (3) Data frame sent I0215 14:02:32.098820 8 log.go:172] (0xc00066e4d0) Data frame received for 1 I0215 14:02:32.098929 8 log.go:172] (0xc00066e4d0) (0xc0016a2d20) Stream removed, broadcasting: 3 I0215 14:02:32.099023 8 log.go:172] (0xc0016a2c80) (1) Data frame handling I0215 14:02:32.099035 8 log.go:172] (0xc0016a2c80) (1) Data frame sent I0215 14:02:32.099059 8 log.go:172] (0xc00066e4d0) (0xc0016a2c80) Stream removed, broadcasting: 1 I0215 14:02:32.099437 8 log.go:172] (0xc00066e4d0) (0xc0031bc0a0) Stream removed, broadcasting: 5 I0215 14:02:32.099471 8 log.go:172] (0xc00066e4d0) (0xc0016a2c80) Stream removed, broadcasting: 1 I0215 14:02:32.099478 8 log.go:172] (0xc00066e4d0) (0xc0016a2d20) Stream removed, broadcasting: 3 I0215 14:02:32.099482 8 log.go:172] (0xc00066e4d0) (0xc0031bc0a0) Stream removed, broadcasting: 5 I0215 14:02:32.099870 8 log.go:172] (0xc00066e4d0) Go away received Feb 15 14:02:32.099: INFO: Exec stderr: "" Feb 15 14:02:32.100: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4497 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:02:32.100: INFO: >>> kubeConfig: /root/.kube/config I0215 14:02:32.189300 8 log.go:172] (0xc00066ed10) (0xc0016a3040) Create stream I0215 14:02:32.189698 8 log.go:172] (0xc00066ed10) (0xc0016a3040) Stream added, broadcasting: 1 I0215 14:02:32.205981 8 log.go:172] (0xc00066ed10) Reply frame received for 1 I0215 
14:02:32.206154 8 log.go:172] (0xc00066ed10) (0xc0031bc280) Create stream I0215 14:02:32.206170 8 log.go:172] (0xc00066ed10) (0xc0031bc280) Stream added, broadcasting: 3 I0215 14:02:32.207761 8 log.go:172] (0xc00066ed10) Reply frame received for 3 I0215 14:02:32.207800 8 log.go:172] (0xc00066ed10) (0xc0016a30e0) Create stream I0215 14:02:32.207812 8 log.go:172] (0xc00066ed10) (0xc0016a30e0) Stream added, broadcasting: 5 I0215 14:02:32.209121 8 log.go:172] (0xc00066ed10) Reply frame received for 5 I0215 14:02:32.322629 8 log.go:172] (0xc00066ed10) Data frame received for 3 I0215 14:02:32.322693 8 log.go:172] (0xc0031bc280) (3) Data frame handling I0215 14:02:32.322708 8 log.go:172] (0xc0031bc280) (3) Data frame sent I0215 14:02:32.429705 8 log.go:172] (0xc00066ed10) (0xc0031bc280) Stream removed, broadcasting: 3 I0215 14:02:32.430003 8 log.go:172] (0xc00066ed10) Data frame received for 1 I0215 14:02:32.430045 8 log.go:172] (0xc0016a3040) (1) Data frame handling I0215 14:02:32.430070 8 log.go:172] (0xc0016a3040) (1) Data frame sent I0215 14:02:32.430094 8 log.go:172] (0xc00066ed10) (0xc0016a3040) Stream removed, broadcasting: 1 I0215 14:02:32.430905 8 log.go:172] (0xc00066ed10) (0xc0016a30e0) Stream removed, broadcasting: 5 I0215 14:02:32.430953 8 log.go:172] (0xc00066ed10) (0xc0016a3040) Stream removed, broadcasting: 1 I0215 14:02:32.430961 8 log.go:172] (0xc00066ed10) (0xc0031bc280) Stream removed, broadcasting: 3 I0215 14:02:32.430989 8 log.go:172] (0xc00066ed10) (0xc0016a30e0) Stream removed, broadcasting: 5 I0215 14:02:32.431114 8 log.go:172] (0xc00066ed10) Go away received Feb 15 14:02:32.431: INFO: Exec stderr: "" Feb 15 14:02:32.431: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4497 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:02:32.431: INFO: >>> kubeConfig: /root/.kube/config I0215 14:02:32.491613 8 log.go:172] (0xc002f96790) (0xc00095c320) Create stream I0215 14:02:32.491660 8 log.go:172] (0xc002f96790) (0xc00095c320) Stream added, broadcasting: 1 I0215 14:02:32.498310 8 log.go:172] (0xc002f96790) Reply frame received for 1 I0215 14:02:32.498352 8 log.go:172] (0xc002f96790) (0xc001cd60a0) Create stream I0215 14:02:32.498375 8 log.go:172] (0xc002f96790) (0xc001cd60a0) Stream added, broadcasting: 3 I0215 14:02:32.499917 8 log.go:172] (0xc002f96790) Reply frame received for 3 I0215 14:02:32.499950 8 log.go:172] (0xc002f96790) (0xc001cd61e0) Create stream I0215 14:02:32.499961 8 log.go:172] (0xc002f96790) (0xc001cd61e0) Stream added, broadcasting: 5 I0215 14:02:32.501301 8 log.go:172] (0xc002f96790) Reply frame received for 5 I0215 14:02:32.639185 8 log.go:172] (0xc002f96790) Data frame received for 3 I0215 14:02:32.639239 8 log.go:172] (0xc001cd60a0) (3) Data frame handling I0215 14:02:32.639254 8 log.go:172] (0xc001cd60a0) (3) Data frame sent I0215 14:02:32.728877 8 log.go:172] (0xc002f96790) Data frame received for 1 I0215 14:02:32.729113 8 log.go:172] (0xc002f96790) (0xc001cd60a0) Stream removed, broadcasting: 3 I0215 14:02:32.729219 8 log.go:172] (0xc00095c320) (1) Data frame handling I0215 14:02:32.729253 8 log.go:172] (0xc00095c320) (1) Data frame sent I0215 14:02:32.729321 8 log.go:172] (0xc002f96790) (0xc001cd61e0) Stream removed, broadcasting: 5 I0215 14:02:32.729369 8 log.go:172] (0xc002f96790) (0xc00095c320) Stream removed, broadcasting: 1 I0215 14:02:32.729392 8 log.go:172] (0xc002f96790) Go away received I0215 14:02:32.729657 8 log.go:172] 
(0xc002f96790) (0xc00095c320) Stream removed, broadcasting: 1 I0215 14:02:32.729666 8 log.go:172] (0xc002f96790) (0xc001cd60a0) Stream removed, broadcasting: 3 I0215 14:02:32.729676 8 log.go:172] (0xc002f96790) (0xc001cd61e0) Stream removed, broadcasting: 5 Feb 15 14:02:32.729: INFO: Exec stderr: "" Feb 15 14:02:32.729: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4497 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:02:32.729: INFO: >>> kubeConfig: /root/.kube/config I0215 14:02:32.785476 8 log.go:172] (0xc001c9cfd0) (0xc001cd6500) Create stream I0215 14:02:32.785604 8 log.go:172] (0xc001c9cfd0) (0xc001cd6500) Stream added, broadcasting: 1 I0215 14:02:32.792428 8 log.go:172] (0xc001c9cfd0) Reply frame received for 1 I0215 14:02:32.792484 8 log.go:172] (0xc001c9cfd0) (0xc00182c6e0) Create stream I0215 14:02:32.792495 8 log.go:172] (0xc001c9cfd0) (0xc00182c6e0) Stream added, broadcasting: 3 I0215 14:02:32.794415 8 log.go:172] (0xc001c9cfd0) Reply frame received for 3 I0215 14:02:32.794434 8 log.go:172] (0xc001c9cfd0) (0xc0031bc460) Create stream I0215 14:02:32.794443 8 log.go:172] (0xc001c9cfd0) (0xc0031bc460) Stream added, broadcasting: 5 I0215 14:02:32.797698 8 log.go:172] (0xc001c9cfd0) Reply frame received for 5 I0215 14:02:32.884369 8 log.go:172] (0xc001c9cfd0) Data frame received for 3 I0215 14:02:32.884554 8 log.go:172] (0xc00182c6e0) (3) Data frame handling I0215 14:02:32.884613 8 log.go:172] (0xc00182c6e0) (3) Data frame sent I0215 14:02:33.000416 8 log.go:172] (0xc001c9cfd0) Data frame received for 1 I0215 14:02:33.000485 8 log.go:172] (0xc001cd6500) (1) Data frame handling I0215 14:02:33.000508 8 log.go:172] (0xc001cd6500) (1) Data frame sent I0215 14:02:33.000520 8 log.go:172] (0xc001c9cfd0) (0xc001cd6500) Stream removed, broadcasting: 1 I0215 14:02:33.000945 8 log.go:172] (0xc001c9cfd0) (0xc00182c6e0) Stream removed, broadcasting: 3 I0215 14:02:33.000987 8 log.go:172] (0xc001c9cfd0) (0xc0031bc460) Stream removed, broadcasting: 5 I0215 14:02:33.001008 8 log.go:172] (0xc001c9cfd0) Go away received I0215 14:02:33.001220 8 log.go:172] (0xc001c9cfd0) (0xc001cd6500) Stream removed, broadcasting: 1 I0215 14:02:33.001292 8 log.go:172] (0xc001c9cfd0) (0xc00182c6e0) Stream removed, broadcasting: 3 I0215 14:02:33.001297 8 log.go:172] (0xc001c9cfd0) (0xc0031bc460) Stream removed, broadcasting: 5 Feb 15 14:02:33.001: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:02:33.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4497" for this suite. 
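The long runs of "Create stream", "Reply frame received", and "Data frame" entries above are client-go's SPDY transport multiplexing the exec calls (the ExecWithOptions lines) across numbered streams. A minimal sketch of issuing the same kind of exec from Go — namespace, pod, and container names copied from the log, everything else assumed, using the client-go vintage of this run (~v1.15, so no context arguments):

    package main

    import (
        "bytes"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/remotecommand"
    )

    func main() {
        // Same kubeconfig the suite reports with ">>> kubeConfig".
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Build the exec subresource request, mirroring ExecWithOptions above.
        req := client.CoreV1().RESTClient().Post().
            Resource("pods").
            Namespace("e2e-kubelet-etc-hosts-4497").
            Name("test-pod").
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: "busybox-1",
                Command:   []string{"cat", "/etc/hosts"},
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
        if err != nil {
            panic(err)
        }
        // Stream sets up one SPDY stream per I/O channel; those are the
        // "Stream added, broadcasting: 1/3/5" lines in the log above.
        var stdout, stderr bytes.Buffer
        if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
            panic(err)
        }
        fmt.Printf("stdout: %q stderr: %q\n", stdout.String(), stderr.String())
    }

The test's assertions are then plain string checks on stdout: a kubelet-managed /etc/hosts carries a "Kubernetes-managed hosts file" header, while a container that mounts its own file over /etc/hosts, or a pod running with hostNetwork=true, keeps its file untouched by the kubelet.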
Feb 15 14:03:17.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:03:17.170: INFO: namespace e2e-kubelet-etc-hosts-4497 deletion completed in 44.160606694s • [SLOW TEST:71.938 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:03:17.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 15 14:03:17.233: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:03:27.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1506" for this suite. 
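This spec dials the pod log endpoint over a websocket, which the log output itself does not show. A rough equivalent with ordinary client-go streaming — same /log endpoint, HTTP instead of websocket, pod name assumed, reusing the `client` from the exec sketch above:

    // Follow a container's logs through the API server.
    logs, err := client.CoreV1().Pods("pods-1506").
        GetLogs("pod-logs-websocket", &corev1.PodLogOptions{Follow: true}).
        Stream() // ~v1.15 client-go; newer releases take a context.Context
    if err != nil {
        panic(err)
    }
    defer logs.Close()
    io.Copy(os.Stdout, logs) // needs "io" and "os" imports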
Feb 15 14:04:09.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:04:10.097: INFO: namespace pods-1506 deletion completed in 42.747945087s • [SLOW TEST:52.927 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:04:10.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 15 14:04:18.851: INFO: Successfully updated pod "labelsupdatea74a74d3-6bf7-4e9b-862a-2e890a6c86bf" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:04:20.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-150" for this suite. 
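The "Successfully updated pod" line is the interesting step here: the suite mutates the pod's labels after creation, then polls a downward API volume file until the kubelet rewrites it. A sketch of the volume shape that makes this observable (volume name and file path assumed):

    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "labels", // kubelet refreshes this file when labels change
                    FieldRef: &corev1.ObjectFieldSelector{
                        FieldPath: "metadata.labels",
                    },
                }},
            },
        },
    }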
Feb 15 14:04:43.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:04:43.113: INFO: namespace downward-api-150 deletion completed in 22.135213854s • [SLOW TEST:33.015 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:04:43.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 15 14:04:43.247: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5824f106-57c4-4875-8fb5-9d5a9c19b5ea" in namespace "projected-9273" to be "success or failure" Feb 15 14:04:43.269: INFO: Pod "downwardapi-volume-5824f106-57c4-4875-8fb5-9d5a9c19b5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 22.280514ms Feb 15 14:04:45.279: INFO: Pod "downwardapi-volume-5824f106-57c4-4875-8fb5-9d5a9c19b5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031735082s Feb 15 14:04:47.288: INFO: Pod "downwardapi-volume-5824f106-57c4-4875-8fb5-9d5a9c19b5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040761564s Feb 15 14:04:49.296: INFO: Pod "downwardapi-volume-5824f106-57c4-4875-8fb5-9d5a9c19b5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049239857s Feb 15 14:04:51.308: INFO: Pod "downwardapi-volume-5824f106-57c4-4875-8fb5-9d5a9c19b5ea": Phase="Running", Reason="", readiness=true. Elapsed: 8.06127396s Feb 15 14:04:53.317: INFO: Pod "downwardapi-volume-5824f106-57c4-4875-8fb5-9d5a9c19b5ea": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.069950901s STEP: Saw pod success Feb 15 14:04:53.317: INFO: Pod "downwardapi-volume-5824f106-57c4-4875-8fb5-9d5a9c19b5ea" satisfied condition "success or failure" Feb 15 14:04:53.321: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5824f106-57c4-4875-8fb5-9d5a9c19b5ea container client-container: STEP: delete the pod Feb 15 14:04:53.417: INFO: Waiting for pod downwardapi-volume-5824f106-57c4-4875-8fb5-9d5a9c19b5ea to disappear Feb 15 14:04:53.428: INFO: Pod downwardapi-volume-5824f106-57c4-4875-8fb5-9d5a9c19b5ea no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:04:53.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9273" for this suite. Feb 15 14:04:59.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:04:59.659: INFO: namespace projected-9273 deletion completed in 6.223785711s • [SLOW TEST:16.545 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:04:59.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4418 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 15 14:04:59.787: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 15 14:05:36.004: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4418 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:05:36.004: INFO: >>> kubeConfig: /root/.kube/config I0215 14:05:36.102446 8 log.go:172] (0xc000ad1ce0) (0xc0013c3860) Create stream I0215 14:05:36.102816 8 log.go:172] (0xc000ad1ce0) (0xc0013c3860) Stream added, broadcasting: 1 I0215 14:05:36.112409 8 log.go:172] (0xc000ad1ce0) Reply frame received for 1 I0215 14:05:36.112445 8 log.go:172] (0xc000ad1ce0) (0xc0018b8a00) Create stream I0215 14:05:36.112455 8 log.go:172] (0xc000ad1ce0) (0xc0018b8a00) Stream added, broadcasting: 3 I0215 14:05:36.114408 8 log.go:172] (0xc000ad1ce0) Reply frame received for 3 I0215 14:05:36.114451 8 log.go:172] (0xc000ad1ce0) (0xc0031bd5e0) Create stream 
I0215 14:05:36.114465 8 log.go:172] (0xc000ad1ce0) (0xc0031bd5e0) Stream added, broadcasting: 5 I0215 14:05:36.117720 8 log.go:172] (0xc000ad1ce0) Reply frame received for 5 I0215 14:05:36.281428 8 log.go:172] (0xc000ad1ce0) Data frame received for 3 I0215 14:05:36.281478 8 log.go:172] (0xc0018b8a00) (3) Data frame handling I0215 14:05:36.281504 8 log.go:172] (0xc0018b8a00) (3) Data frame sent I0215 14:05:36.407973 8 log.go:172] (0xc000ad1ce0) Data frame received for 1 I0215 14:05:36.408186 8 log.go:172] (0xc000ad1ce0) (0xc0031bd5e0) Stream removed, broadcasting: 5 I0215 14:05:36.408290 8 log.go:172] (0xc0013c3860) (1) Data frame handling I0215 14:05:36.408340 8 log.go:172] (0xc0013c3860) (1) Data frame sent I0215 14:05:36.408363 8 log.go:172] (0xc000ad1ce0) (0xc0018b8a00) Stream removed, broadcasting: 3 I0215 14:05:36.408403 8 log.go:172] (0xc000ad1ce0) (0xc0013c3860) Stream removed, broadcasting: 1 I0215 14:05:36.408438 8 log.go:172] (0xc000ad1ce0) Go away received I0215 14:05:36.409301 8 log.go:172] (0xc000ad1ce0) (0xc0013c3860) Stream removed, broadcasting: 1 I0215 14:05:36.409336 8 log.go:172] (0xc000ad1ce0) (0xc0018b8a00) Stream removed, broadcasting: 3 I0215 14:05:36.409353 8 log.go:172] (0xc000ad1ce0) (0xc0031bd5e0) Stream removed, broadcasting: 5 Feb 15 14:05:36.409: INFO: Found all expected endpoints: [netserver-0] Feb 15 14:05:36.419: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4418 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:05:36.419: INFO: >>> kubeConfig: /root/.kube/config I0215 14:05:36.522074 8 log.go:172] (0xc0005e9600) (0xc001e2d860) Create stream I0215 14:05:36.522235 8 log.go:172] (0xc0005e9600) (0xc001e2d860) Stream added, broadcasting: 1 I0215 14:05:36.538087 8 log.go:172] (0xc0005e9600) Reply frame received for 1 I0215 14:05:36.538165 8 log.go:172] (0xc0005e9600) (0xc000f5ca00) Create stream I0215 14:05:36.538195 8 log.go:172] (0xc0005e9600) (0xc000f5ca00) Stream added, broadcasting: 3 I0215 14:05:36.540181 8 log.go:172] (0xc0005e9600) Reply frame received for 3 I0215 14:05:36.540219 8 log.go:172] (0xc0005e9600) (0xc0018b8b40) Create stream I0215 14:05:36.540233 8 log.go:172] (0xc0005e9600) (0xc0018b8b40) Stream added, broadcasting: 5 I0215 14:05:36.541869 8 log.go:172] (0xc0005e9600) Reply frame received for 5 I0215 14:05:36.844745 8 log.go:172] (0xc0005e9600) Data frame received for 3 I0215 14:05:36.844858 8 log.go:172] (0xc000f5ca00) (3) Data frame handling I0215 14:05:36.844885 8 log.go:172] (0xc000f5ca00) (3) Data frame sent I0215 14:05:36.994495 8 log.go:172] (0xc0005e9600) Data frame received for 1 I0215 14:05:36.994834 8 log.go:172] (0xc0005e9600) (0xc000f5ca00) Stream removed, broadcasting: 3 I0215 14:05:36.994966 8 log.go:172] (0xc001e2d860) (1) Data frame handling I0215 14:05:36.995012 8 log.go:172] (0xc001e2d860) (1) Data frame sent I0215 14:05:36.995033 8 log.go:172] (0xc0005e9600) (0xc0018b8b40) Stream removed, broadcasting: 5 I0215 14:05:36.995094 8 log.go:172] (0xc0005e9600) (0xc001e2d860) Stream removed, broadcasting: 1 I0215 14:05:36.995453 8 log.go:172] (0xc0005e9600) (0xc001e2d860) Stream removed, broadcasting: 1 I0215 14:05:36.995473 8 log.go:172] (0xc0005e9600) (0xc000f5ca00) Stream removed, broadcasting: 3 I0215 14:05:36.995484 8 log.go:172] (0xc0005e9600) (0xc0018b8b40) Stream removed, broadcasting: 5 Feb 15 14:05:36.995: INFO: 
Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:05:36.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0215 14:05:36.996907 8 log.go:172] (0xc0005e9600) Go away received STEP: Destroying namespace "pod-network-test-4418" for this suite. Feb 15 14:06:01.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:06:01.208: INFO: namespace pod-network-test-4418 deletion completed in 24.200027648s • [SLOW TEST:61.549 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:06:01.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Feb 15 14:06:01.307: INFO: Waiting up to 5m0s for pod "pod-9228772f-10e1-4dd7-949e-7356d2ace9e6" in namespace "emptydir-2725" to be "success or failure" Feb 15 14:06:01.317: INFO: Pod "pod-9228772f-10e1-4dd7-949e-7356d2ace9e6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.783031ms Feb 15 14:06:03.329: INFO: Pod "pod-9228772f-10e1-4dd7-949e-7356d2ace9e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021570086s Feb 15 14:06:05.340: INFO: Pod "pod-9228772f-10e1-4dd7-949e-7356d2ace9e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032492167s Feb 15 14:06:08.798: INFO: Pod "pod-9228772f-10e1-4dd7-949e-7356d2ace9e6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.490240097s Feb 15 14:06:10.824: INFO: Pod "pod-9228772f-10e1-4dd7-949e-7356d2ace9e6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.516938898s STEP: Saw pod success Feb 15 14:06:10.825: INFO: Pod "pod-9228772f-10e1-4dd7-949e-7356d2ace9e6" satisfied condition "success or failure" Feb 15 14:06:10.833: INFO: Trying to get logs from node iruya-node pod pod-9228772f-10e1-4dd7-949e-7356d2ace9e6 container test-container: STEP: delete the pod Feb 15 14:06:10.918: INFO: Waiting for pod pod-9228772f-10e1-4dd7-949e-7356d2ace9e6 to disappear Feb 15 14:06:10.936: INFO: Pod pod-9228772f-10e1-4dd7-949e-7356d2ace9e6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:06:10.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2725" for this suite. Feb 15 14:06:17.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:06:17.172: INFO: namespace emptydir-2725 deletion completed in 6.200148478s • [SLOW TEST:15.963 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:06:17.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-f8cb24d0-50eb-4890-ab52-80663e5f93be STEP: Creating a pod to test consume secrets Feb 15 14:06:17.371: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fbc4e7b0-ea47-4512-8e96-f7102933cdd4" in namespace "projected-4464" to be "success or failure" Feb 15 14:06:17.380: INFO: Pod "pod-projected-secrets-fbc4e7b0-ea47-4512-8e96-f7102933cdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.081932ms Feb 15 14:06:19.390: INFO: Pod "pod-projected-secrets-fbc4e7b0-ea47-4512-8e96-f7102933cdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01854374s Feb 15 14:06:21.404: INFO: Pod "pod-projected-secrets-fbc4e7b0-ea47-4512-8e96-f7102933cdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033169378s Feb 15 14:06:23.421: INFO: Pod "pod-projected-secrets-fbc4e7b0-ea47-4512-8e96-f7102933cdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049503787s Feb 15 14:06:25.429: INFO: Pod "pod-projected-secrets-fbc4e7b0-ea47-4512-8e96-f7102933cdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058004145s Feb 15 14:06:27.441: INFO: Pod "pod-projected-secrets-fbc4e7b0-ea47-4512-8e96-f7102933cdd4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.070184745s STEP: Saw pod success Feb 15 14:06:27.441: INFO: Pod "pod-projected-secrets-fbc4e7b0-ea47-4512-8e96-f7102933cdd4" satisfied condition "success or failure" Feb 15 14:06:27.456: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-fbc4e7b0-ea47-4512-8e96-f7102933cdd4 container projected-secret-volume-test: STEP: delete the pod Feb 15 14:06:27.645: INFO: Waiting for pod pod-projected-secrets-fbc4e7b0-ea47-4512-8e96-f7102933cdd4 to disappear Feb 15 14:06:27.656: INFO: Pod pod-projected-secrets-fbc4e7b0-ea47-4512-8e96-f7102933cdd4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:06:27.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4464" for this suite. Feb 15 14:06:33.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:06:33.872: INFO: namespace projected-4464 deletion completed in 6.209512643s • [SLOW TEST:16.701 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:06:33.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 15 14:06:54.180: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 15 14:06:54.197: INFO: Pod pod-with-poststart-http-hook still exists Feb 15 14:06:56.198: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 15 14:06:56.207: INFO: Pod pod-with-poststart-http-hook still exists Feb 15 14:06:58.197: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 15 14:06:58.206: INFO: Pod pod-with-poststart-http-hook still exists Feb 15 14:07:00.197: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 15 14:07:00.206: INFO: Pod pod-with-poststart-http-hook still exists Feb 15 14:07:02.198: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 15 14:07:02.207: INFO: Pod pod-with-poststart-http-hook still exists Feb 15 14:07:04.198: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 15 14:07:04.207: INFO: Pod pod-with-poststart-http-hook still exists Feb 15 14:07:06.198: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 15 14:07:06.224: INFO: Pod pod-with-poststart-http-hook still exists Feb 15 14:07:08.198: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 15 14:07:08.207: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:07:08.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3395" for this suite. 
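The hook wiring under test: the container's lifecycle block points an HTTP GET at the handler pod created in the "create the container to handle the HTTPGet hook request" step, and the long "still exists" tail afterwards is just the hooked pod's graceful deletion being polled. A sketch of that lifecycle block (path, port, and `handlerPodIP` are assumptions; corev1.Handler was renamed LifecycleHandler in later API versions):

    lifecycle := &corev1.Lifecycle{
        PostStart: &corev1.Handler{
            HTTPGet: &corev1.HTTPGetAction{
                Path: "/echo?msg=poststart",    // recorded by the handler pod
                Port: intstr.FromInt(8080),     // k8s.io/apimachinery/pkg/util/intstr
                Host: handlerPodIP,             // hypothetical: IP of the hook-handler pod
            },
        },
    }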
Feb 15 14:07:30.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:07:30.327: INFO: namespace container-lifecycle-hook-3395 deletion completed in 22.112616301s • [SLOW TEST:56.453 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:07:30.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3570, will wait for the garbage collector to delete the pods Feb 15 14:07:40.470: INFO: Deleting Job.batch foo took: 14.402755ms Feb 15 14:07:40.771: INFO: Terminating Job.batch foo pods took: 300.488923ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:08:26.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3570" for this suite. 
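The deletion step relies on cascade semantics: deleting the Job with a propagation policy hands its pods to the garbage collector, which is why the suite then waits on "Ensuring job was deleted". A sketch with the ~v1.15 client signature (the policy choice is an assumption; the log does not show which one the framework passes):

    policy := metav1.DeletePropagationForeground
    err := client.BatchV1().Jobs("job-3570").Delete("foo",
        &metav1.DeleteOptions{PropagationPolicy: &policy})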
Feb 15 14:08:32.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:08:32.947: INFO: namespace job-3570 deletion completed in 6.257089698s • [SLOW TEST:62.620 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:08:32.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-c5ec1e47-a787-4fbd-8077-d0c632e4568f STEP: Creating a pod to test consume secrets Feb 15 14:08:33.143: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b3923fcd-0b3e-422c-b0e1-340dfe8a2ae8" in namespace "projected-4751" to be "success or failure" Feb 15 14:08:33.151: INFO: Pod "pod-projected-secrets-b3923fcd-0b3e-422c-b0e1-340dfe8a2ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.665475ms Feb 15 14:08:35.160: INFO: Pod "pod-projected-secrets-b3923fcd-0b3e-422c-b0e1-340dfe8a2ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017666691s Feb 15 14:08:37.168: INFO: Pod "pod-projected-secrets-b3923fcd-0b3e-422c-b0e1-340dfe8a2ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024829879s Feb 15 14:08:39.177: INFO: Pod "pod-projected-secrets-b3923fcd-0b3e-422c-b0e1-340dfe8a2ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034030672s Feb 15 14:08:41.186: INFO: Pod "pod-projected-secrets-b3923fcd-0b3e-422c-b0e1-340dfe8a2ae8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043360257s STEP: Saw pod success Feb 15 14:08:41.186: INFO: Pod "pod-projected-secrets-b3923fcd-0b3e-422c-b0e1-340dfe8a2ae8" satisfied condition "success or failure" Feb 15 14:08:41.193: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-b3923fcd-0b3e-422c-b0e1-340dfe8a2ae8 container projected-secret-volume-test: STEP: delete the pod Feb 15 14:08:41.254: INFO: Waiting for pod pod-projected-secrets-b3923fcd-0b3e-422c-b0e1-340dfe8a2ae8 to disappear Feb 15 14:08:41.330: INFO: Pod pod-projected-secrets-b3923fcd-0b3e-422c-b0e1-340dfe8a2ae8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:08:41.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4751" for this suite. 
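Here the defaultMode applies to every file projected out of the secret, which the test container then stats. A sketch of the projected volume (mode value and volume name assumed; secret name copied from the log):

    mode := int32(0400)
    vol := corev1.Volume{
        Name: "projected-secret-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                DefaultMode: &mode, // applied to each projected file
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "projected-secret-test-c5ec1e47-a787-4fbd-8077-d0c632e4568f",
                        },
                    },
                }},
            },
        },
    }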
Feb 15 14:08:47.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:08:47.508: INFO: namespace projected-4751 deletion completed in 6.161735793s • [SLOW TEST:14.561 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:08:47.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:08:47.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4967" for this suite. 
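The "verifying QOS class is set on the pod" step reads status.qosClass, which the API server derives from the container resources rather than from anything the client sets directly. One way to land in the Guaranteed class, for illustration (quantities assumed):

    res := corev1.ResourceRequirements{
        Requests: corev1.ResourceList{
            corev1.ResourceCPU:    resource.MustParse("100m"), // k8s.io/apimachinery/pkg/api/resource
            corev1.ResourceMemory: resource.MustParse("100Mi"),
        },
        // Requests == limits for every container => Guaranteed;
        // requests < limits => Burstable; nothing set => BestEffort.
        Limits: corev1.ResourceList{
            corev1.ResourceCPU:    resource.MustParse("100m"),
            corev1.ResourceMemory: resource.MustParse("100Mi"),
        },
    }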
Feb 15 14:09:11.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:09:11.887: INFO: namespace pods-4967 deletion completed in 24.187176623s • [SLOW TEST:24.379 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:09:11.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-f8c47cf6-3ec3-4986-92fb-172e75636cab STEP: Creating a pod to test consume configMaps Feb 15 14:09:12.061: INFO: Waiting up to 5m0s for pod "pod-configmaps-249c2e5d-4e38-4381-9e32-af2eef68a8c0" in namespace "configmap-5601" to be "success or failure" Feb 15 14:09:12.092: INFO: Pod "pod-configmaps-249c2e5d-4e38-4381-9e32-af2eef68a8c0": Phase="Pending", Reason="", readiness=false. Elapsed: 31.198771ms Feb 15 14:09:14.113: INFO: Pod "pod-configmaps-249c2e5d-4e38-4381-9e32-af2eef68a8c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051941058s Feb 15 14:09:16.122: INFO: Pod "pod-configmaps-249c2e5d-4e38-4381-9e32-af2eef68a8c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060470686s Feb 15 14:09:18.136: INFO: Pod "pod-configmaps-249c2e5d-4e38-4381-9e32-af2eef68a8c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074354293s Feb 15 14:09:20.147: INFO: Pod "pod-configmaps-249c2e5d-4e38-4381-9e32-af2eef68a8c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08594626s STEP: Saw pod success Feb 15 14:09:20.147: INFO: Pod "pod-configmaps-249c2e5d-4e38-4381-9e32-af2eef68a8c0" satisfied condition "success or failure" Feb 15 14:09:20.151: INFO: Trying to get logs from node iruya-node pod pod-configmaps-249c2e5d-4e38-4381-9e32-af2eef68a8c0 container configmap-volume-test: STEP: delete the pod Feb 15 14:09:20.345: INFO: Waiting for pod pod-configmaps-249c2e5d-4e38-4381-9e32-af2eef68a8c0 to disappear Feb 15 14:09:20.362: INFO: Pod pod-configmaps-249c2e5d-4e38-4381-9e32-af2eef68a8c0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:09:20.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5601" for this suite. 
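The "mappings" and "Item mode" in the test name correspond to the Items list of the configMap volume: each key is remapped to a chosen path and can carry its own file mode. A sketch (key, path, and mode are assumptions; configMap name copied from the log):

    mode := int32(0400)
    vol := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{
                    Name: "configmap-test-volume-map-f8c47cf6-3ec3-4986-92fb-172e75636cab",
                },
                Items: []corev1.KeyToPath{{
                    Key:  "data-1",         // key inside the ConfigMap
                    Path: "path/to/data-2", // remapped file path in the volume
                    Mode: &mode,            // per-item mode, overrides defaultMode
                }},
            },
        },
    }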
Feb 15 14:09:26.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:09:26.969: INFO: namespace configmap-5601 deletion completed in 6.584875462s • [SLOW TEST:15.082 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:09:26.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e4fad6cc-15fb-4e7e-8a2d-b98e2106ed6e STEP: Creating a pod to test consume secrets Feb 15 14:09:27.112: INFO: Waiting up to 5m0s for pod "pod-secrets-1e91288f-094d-4bc0-bd33-eded8d7be49c" in namespace "secrets-7151" to be "success or failure" Feb 15 14:09:27.133: INFO: Pod "pod-secrets-1e91288f-094d-4bc0-bd33-eded8d7be49c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.865311ms Feb 15 14:09:29.142: INFO: Pod "pod-secrets-1e91288f-094d-4bc0-bd33-eded8d7be49c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030367265s Feb 15 14:09:31.153: INFO: Pod "pod-secrets-1e91288f-094d-4bc0-bd33-eded8d7be49c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040726415s Feb 15 14:09:33.164: INFO: Pod "pod-secrets-1e91288f-094d-4bc0-bd33-eded8d7be49c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052218585s Feb 15 14:09:35.175: INFO: Pod "pod-secrets-1e91288f-094d-4bc0-bd33-eded8d7be49c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063162256s STEP: Saw pod success Feb 15 14:09:35.175: INFO: Pod "pod-secrets-1e91288f-094d-4bc0-bd33-eded8d7be49c" satisfied condition "success or failure" Feb 15 14:09:35.179: INFO: Trying to get logs from node iruya-node pod pod-secrets-1e91288f-094d-4bc0-bd33-eded8d7be49c container secret-volume-test: STEP: delete the pod Feb 15 14:09:35.259: INFO: Waiting for pod pod-secrets-1e91288f-094d-4bc0-bd33-eded8d7be49c to disappear Feb 15 14:09:35.276: INFO: Pod pod-secrets-1e91288f-094d-4bc0-bd33-eded8d7be49c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:09:35.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7151" for this suite. 
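Plain secret volumes are the simpler cousin of the projected variant above: the whole secret is materialized as one file per key. A sketch of the secret plus the consuming volume (payload and volume name assumed; secret name copied from the log):

    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{
            Name: "secret-test-e4fad6cc-15fb-4e7e-8a2d-b98e2106ed6e",
        },
        Data: map[string][]byte{"data-1": []byte("value-1")},
    }
    vol := corev1.Volume{
        Name: "secret-volume",
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
        },
    }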
Feb 15 14:09:41.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:09:41.458: INFO: namespace secrets-7151 deletion completed in 6.17452311s • [SLOW TEST:14.488 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:09:41.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 15 14:09:41.573: INFO: Waiting up to 5m0s for pod "pod-b0eb869d-5602-468c-91a3-c8cb0dee953d" in namespace "emptydir-1255" to be "success or failure" Feb 15 14:09:41.586: INFO: Pod "pod-b0eb869d-5602-468c-91a3-c8cb0dee953d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.060748ms Feb 15 14:09:43.597: INFO: Pod "pod-b0eb869d-5602-468c-91a3-c8cb0dee953d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024046646s Feb 15 14:09:45.613: INFO: Pod "pod-b0eb869d-5602-468c-91a3-c8cb0dee953d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039378478s Feb 15 14:09:47.621: INFO: Pod "pod-b0eb869d-5602-468c-91a3-c8cb0dee953d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048167027s Feb 15 14:09:49.632: INFO: Pod "pod-b0eb869d-5602-468c-91a3-c8cb0dee953d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058859765s Feb 15 14:09:51.644: INFO: Pod "pod-b0eb869d-5602-468c-91a3-c8cb0dee953d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070290295s STEP: Saw pod success Feb 15 14:09:51.644: INFO: Pod "pod-b0eb869d-5602-468c-91a3-c8cb0dee953d" satisfied condition "success or failure" Feb 15 14:09:51.650: INFO: Trying to get logs from node iruya-node pod pod-b0eb869d-5602-468c-91a3-c8cb0dee953d container test-container: STEP: delete the pod Feb 15 14:09:51.995: INFO: Waiting for pod pod-b0eb869d-5602-468c-91a3-c8cb0dee953d to disappear Feb 15 14:09:52.013: INFO: Pod pod-b0eb869d-5602-468c-91a3-c8cb0dee953d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:09:52.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1255" for this suite. 
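The (non-root,0666,tmpfs) triple in the name maps onto the pod spec and the test command: a memory-medium emptyDir (tmpfs), a non-root runAsUser, and a file the test container creates with mode 0666. A sketch of the spec side (UID and names assumed; containers omitted):

    uid := int64(1000) // any non-root UID
    spec := corev1.PodSpec{
        SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
        Volumes: []corev1.Volume{{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{
                    Medium: corev1.StorageMediumMemory, // tmpfs
                },
            },
        }},
    }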
Feb 15 14:09:58.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:09:58.252: INFO: namespace emptydir-1255 deletion completed in 6.214090126s • [SLOW TEST:16.794 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:09:58.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0215 14:10:41.817113 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 15 14:10:41.817: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:10:41.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1824" for this suite. 
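Orphaning is the flip side of the Job deletion earlier: with DeletePropagationOrphan the garbage collector strips the pods' ownerReferences instead of deleting them, which is exactly what the 30-second "mistakenly deletes" wait above is checking. A sketch with the ~v1.15 signature (the RC name is an assumption; the log only shows "create the rc"):

    policy := metav1.DeletePropagationOrphan
    err := client.CoreV1().ReplicationControllers("gc-1824").
        Delete("simpletest-rc", &metav1.DeleteOptions{PropagationPolicy: &policy})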
Feb 15 14:10:53.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:10:55.189: INFO: namespace gc-1824 deletion completed in 13.354169858s • [SLOW TEST:56.937 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:10:55.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Feb 15 14:10:56.685: INFO: Waiting up to 5m0s for pod "var-expansion-089692e3-2014-441e-8646-1aef13b86ad9" in namespace "var-expansion-818" to be "success or failure" Feb 15 14:10:57.748: INFO: Pod "var-expansion-089692e3-2014-441e-8646-1aef13b86ad9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.06297094s Feb 15 14:11:00.048: INFO: Pod "var-expansion-089692e3-2014-441e-8646-1aef13b86ad9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.362551599s Feb 15 14:11:02.056: INFO: Pod "var-expansion-089692e3-2014-441e-8646-1aef13b86ad9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.370558427s Feb 15 14:11:04.061: INFO: Pod "var-expansion-089692e3-2014-441e-8646-1aef13b86ad9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.376018335s Feb 15 14:11:06.071: INFO: Pod "var-expansion-089692e3-2014-441e-8646-1aef13b86ad9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.385753594s Feb 15 14:11:08.078: INFO: Pod "var-expansion-089692e3-2014-441e-8646-1aef13b86ad9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.393002545s STEP: Saw pod success Feb 15 14:11:08.078: INFO: Pod "var-expansion-089692e3-2014-441e-8646-1aef13b86ad9" satisfied condition "success or failure" Feb 15 14:11:08.082: INFO: Trying to get logs from node iruya-node pod var-expansion-089692e3-2014-441e-8646-1aef13b86ad9 container dapi-container: STEP: delete the pod Feb 15 14:11:08.163: INFO: Waiting for pod var-expansion-089692e3-2014-441e-8646-1aef13b86ad9 to disappear Feb 15 14:11:08.170: INFO: Pod var-expansion-089692e3-2014-441e-8646-1aef13b86ad9 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:11:08.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-818" for this suite. 
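Variable expansion here is performed by the kubelet, not by a shell: $(VAR) references in command and args are substituted from the container's env before the process starts. A sketch of the shape under test (image, env name, and value assumed; container name taken from the log):

    container := corev1.Container{
        Name:    "dapi-container",
        Image:   "busybox",
        Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
        Command: []string{"sh", "-c"},
        // $(TEST_VAR) is expanded by the kubelet; $$(TEST_VAR) would escape it.
        Args: []string{"echo $(TEST_VAR)"},
    }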
Feb 15 14:11:14.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:11:14.445: INFO: namespace var-expansion-818 deletion completed in 6.25510165s • [SLOW TEST:19.255 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:11:14.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9962 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 15 14:11:14.509: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 15 14:11:50.795: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9962 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:11:50.795: INFO: >>> kubeConfig: /root/.kube/config I0215 14:11:50.886419 8 log.go:172] (0xc00205e8f0) (0xc00038b860) Create stream I0215 14:11:50.886594 8 log.go:172] (0xc00205e8f0) (0xc00038b860) Stream added, broadcasting: 1 I0215 14:11:50.897498 8 log.go:172] (0xc00205e8f0) Reply frame received for 1 I0215 14:11:50.897528 8 log.go:172] (0xc00205e8f0) (0xc0013c3f40) Create stream I0215 14:11:50.897538 8 log.go:172] (0xc00205e8f0) (0xc0013c3f40) Stream added, broadcasting: 3 I0215 14:11:50.900445 8 log.go:172] (0xc00205e8f0) Reply frame received for 3 I0215 14:11:50.900483 8 log.go:172] (0xc00205e8f0) (0xc00231a140) Create stream I0215 14:11:50.900499 8 log.go:172] (0xc00205e8f0) (0xc00231a140) Stream added, broadcasting: 5 I0215 14:11:50.903080 8 log.go:172] (0xc00205e8f0) Reply frame received for 5 I0215 14:11:52.043322 8 log.go:172] (0xc00205e8f0) Data frame received for 3 I0215 14:11:52.043563 8 log.go:172] (0xc0013c3f40) (3) Data frame handling I0215 14:11:52.043628 8 log.go:172] (0xc0013c3f40) (3) Data frame sent I0215 14:11:52.256904 8 log.go:172] (0xc00205e8f0) (0xc00231a140) Stream removed, broadcasting: 5 I0215 14:11:52.257196 8 log.go:172] (0xc00205e8f0) Data frame received for 1 I0215 14:11:52.257247 8 log.go:172] (0xc00205e8f0) (0xc0013c3f40) Stream removed, broadcasting: 3 I0215 14:11:52.257307 8 log.go:172] (0xc00038b860) (1) Data frame handling I0215 14:11:52.257337 8 log.go:172] (0xc00038b860) (1) Data frame sent I0215 14:11:52.257918 8 log.go:172] (0xc00205e8f0) (0xc00038b860) Stream removed, broadcasting: 
1 I0215 14:11:52.258781 8 log.go:172] (0xc00205e8f0) Go away received I0215 14:11:52.259299 8 log.go:172] (0xc00205e8f0) (0xc00038b860) Stream removed, broadcasting: 1 I0215 14:11:52.259361 8 log.go:172] (0xc00205e8f0) (0xc0013c3f40) Stream removed, broadcasting: 3 I0215 14:11:52.259401 8 log.go:172] (0xc00205e8f0) (0xc00231a140) Stream removed, broadcasting: 5 Feb 15 14:11:52.259: INFO: Found all expected endpoints: [netserver-0] Feb 15 14:11:52.271: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9962 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 15 14:11:52.271: INFO: >>> kubeConfig: /root/.kube/config I0215 14:11:52.409907 8 log.go:172] (0xc001d04580) (0xc001c86460) Create stream I0215 14:11:52.410358 8 log.go:172] (0xc001d04580) (0xc001c86460) Stream added, broadcasting: 1 I0215 14:11:52.432002 8 log.go:172] (0xc001d04580) Reply frame received for 1 I0215 14:11:52.432196 8 log.go:172] (0xc001d04580) (0xc001c86780) Create stream I0215 14:11:52.432225 8 log.go:172] (0xc001d04580) (0xc001c86780) Stream added, broadcasting: 3 I0215 14:11:52.436977 8 log.go:172] (0xc001d04580) Reply frame received for 3 I0215 14:11:52.437085 8 log.go:172] (0xc001d04580) (0xc0030e37c0) Create stream I0215 14:11:52.437164 8 log.go:172] (0xc001d04580) (0xc0030e37c0) Stream added, broadcasting: 5 I0215 14:11:52.439884 8 log.go:172] (0xc001d04580) Reply frame received for 5 I0215 14:11:53.766884 8 log.go:172] (0xc001d04580) Data frame received for 3 I0215 14:11:53.767036 8 log.go:172] (0xc001c86780) (3) Data frame handling I0215 14:11:53.767071 8 log.go:172] (0xc001c86780) (3) Data frame sent I0215 14:11:53.949712 8 log.go:172] (0xc001d04580) Data frame received for 1 I0215 14:11:53.950003 8 log.go:172] (0xc001d04580) (0xc001c86780) Stream removed, broadcasting: 3 I0215 14:11:53.950056 8 log.go:172] (0xc001c86460) (1) Data frame handling I0215 14:11:53.950080 8 log.go:172] (0xc001c86460) (1) Data frame sent I0215 14:11:53.950631 8 log.go:172] (0xc001d04580) (0xc001c86460) Stream removed, broadcasting: 1 I0215 14:11:53.951132 8 log.go:172] (0xc001d04580) (0xc0030e37c0) Stream removed, broadcasting: 5 I0215 14:11:53.951187 8 log.go:172] (0xc001d04580) Go away received I0215 14:11:53.951353 8 log.go:172] (0xc001d04580) (0xc001c86460) Stream removed, broadcasting: 1 I0215 14:11:53.951392 8 log.go:172] (0xc001d04580) (0xc001c86780) Stream removed, broadcasting: 3 I0215 14:11:53.951419 8 log.go:172] (0xc001d04580) (0xc0030e37c0) Stream removed, broadcasting: 5 Feb 15 14:11:53.951: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:11:53.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9962" for this suite. 
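The two ExecWithOptions blocks above run `echo hostName | nc -w 1 -u <podIP> 8081` from the hostexec container and expect each netserver pod to echo back its own name. The same probe in plain Go, using the pod IPs from this run:

    package main

    import (
        "fmt"
        "log"
        "net"
        "time"
    )

    // probeUDP sends "hostName" to a netserver pod and returns whatever the
    // server echoes back (its pod name), mirroring the nc pipeline in the log.
    func probeUDP(addr string) (string, error) {
        conn, err := net.DialTimeout("udp", addr, time.Second)
        if err != nil {
            return "", err
        }
        defer conn.Close()
        conn.SetDeadline(time.Now().Add(time.Second)) // roughly nc -w 1
        if _, err := conn.Write([]byte("hostName\n")); err != nil {
            return "", err
        }
        buf := make([]byte, 256)
        n, err := conn.Read(buf)
        if err != nil {
            return "", err
        }
        return string(buf[:n]), nil
    }

    func main() {
        // Pod IPs taken from the run above; any reachable netserver works.
        for _, addr := range []string{"10.44.0.1:8081", "10.32.0.4:8081"} {
            name, err := probeUDP(addr)
            if err != nil {
                log.Printf("%s: %v", addr, err)
                continue
            }
            fmt.Printf("%s -> %s\n", addr, name)
        }
    }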
Feb 15 14:12:17.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:12:18.095: INFO: namespace pod-network-test-9962 deletion completed in 24.131586305s • [SLOW TEST:63.649 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:12:18.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2304 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2304 STEP: Creating statefulset with conflicting port in namespace statefulset-2304 STEP: Waiting until pod test-pod will start running in namespace statefulset-2304 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2304 Feb 15 14:12:28.309: INFO: Observed stateful pod in namespace: statefulset-2304, name: ss-0, uid: ea9fd647-3d50-4264-b514-24404388b9a7, status phase: Pending. Waiting for statefulset controller to delete. Feb 15 14:12:28.688: INFO: Observed stateful pod in namespace: statefulset-2304, name: ss-0, uid: ea9fd647-3d50-4264-b514-24404388b9a7, status phase: Failed. Waiting for statefulset controller to delete. Feb 15 14:12:28.816: INFO: Observed stateful pod in namespace: statefulset-2304, name: ss-0, uid: ea9fd647-3d50-4264-b514-24404388b9a7, status phase: Failed. Waiting for statefulset controller to delete. 
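The Pending/Failed flapping being observed at this point comes from a host-port collision: the test pins a plain pod and the stateful set's pod template to the same node and the same hostPort, so the kubelet rejects each recreated ss-0 until the blocking pod is removed. A sketch of the kind of container port that triggers it (the port number is illustrative; the log does not print it):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Both the blocking pod and the stateful set's template carry a
        // port like this and are pinned to the same node, so the kubelet
        // rejects whichever pod lands second.
        c := corev1.Container{
            Name:  "conflict",
            Image: "nginx",
            Ports: []corev1.ContainerPort{{
                ContainerPort: 21017,
                HostPort:      21017, // claims the node's port, not just the pod's
            }},
        }
        b, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(b))
    }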
Feb 15 14:12:28.827: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2304 STEP: Removing pod with conflicting port in namespace statefulset-2304 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2304 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 15 14:12:41.073: INFO: Deleting all statefulset in ns statefulset-2304 Feb 15 14:12:41.079: INFO: Scaling statefulset ss to 0 Feb 15 14:13:01.111: INFO: Waiting for statefulset status.replicas updated to 0 Feb 15 14:13:01.116: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:13:01.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2304" for this suite. Feb 15 14:13:07.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:13:07.281: INFO: namespace statefulset-2304 deletion completed in 6.109418574s • [SLOW TEST:49.184 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:13:07.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 15 14:13:07.417: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c99932d6-111a-44a3-91ee-edf0cfaebabd" in namespace "projected-5331" to be "success or failure" Feb 15 14:13:07.428: INFO: Pod "downwardapi-volume-c99932d6-111a-44a3-91ee-edf0cfaebabd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.620056ms Feb 15 14:13:09.442: INFO: Pod "downwardapi-volume-c99932d6-111a-44a3-91ee-edf0cfaebabd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025010987s Feb 15 14:13:11.451: INFO: Pod "downwardapi-volume-c99932d6-111a-44a3-91ee-edf0cfaebabd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033394598s Feb 15 14:13:13.458: INFO: Pod "downwardapi-volume-c99932d6-111a-44a3-91ee-edf0cfaebabd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.040493624s Feb 15 14:13:15.464: INFO: Pod "downwardapi-volume-c99932d6-111a-44a3-91ee-edf0cfaebabd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04644129s STEP: Saw pod success Feb 15 14:13:15.464: INFO: Pod "downwardapi-volume-c99932d6-111a-44a3-91ee-edf0cfaebabd" satisfied condition "success or failure" Feb 15 14:13:15.467: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c99932d6-111a-44a3-91ee-edf0cfaebabd container client-container: STEP: delete the pod Feb 15 14:13:15.560: INFO: Waiting for pod downwardapi-volume-c99932d6-111a-44a3-91ee-edf0cfaebabd to disappear Feb 15 14:13:15.661: INFO: Pod downwardapi-volume-c99932d6-111a-44a3-91ee-edf0cfaebabd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:13:15.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5331" for this suite. Feb 15 14:13:21.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:13:21.847: INFO: namespace projected-5331 deletion completed in 6.17836423s • [SLOW TEST:14.566 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:13:21.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 15 14:13:21.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c51fcce3-ea6e-421a-9a70-431cd26143ab" in namespace "projected-4492" to be "success or failure" Feb 15 14:13:21.992: INFO: Pod "downwardapi-volume-c51fcce3-ea6e-421a-9a70-431cd26143ab": Phase="Pending", Reason="", readiness=false. Elapsed: 34.113512ms Feb 15 14:13:24.001: INFO: Pod "downwardapi-volume-c51fcce3-ea6e-421a-9a70-431cd26143ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042892328s Feb 15 14:13:26.020: INFO: Pod "downwardapi-volume-c51fcce3-ea6e-421a-9a70-431cd26143ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062236231s Feb 15 14:13:28.033: INFO: Pod "downwardapi-volume-c51fcce3-ea6e-421a-9a70-431cd26143ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075470441s Feb 15 14:13:30.049: INFO: Pod "downwardapi-volume-c51fcce3-ea6e-421a-9a70-431cd26143ab": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.090698965s Feb 15 14:13:32.068: INFO: Pod "downwardapi-volume-c51fcce3-ea6e-421a-9a70-431cd26143ab": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109839023s Feb 15 14:13:34.077: INFO: Pod "downwardapi-volume-c51fcce3-ea6e-421a-9a70-431cd26143ab": Phase="Pending", Reason="", readiness=false. Elapsed: 12.119309916s Feb 15 14:13:36.091: INFO: Pod "downwardapi-volume-c51fcce3-ea6e-421a-9a70-431cd26143ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.132710507s STEP: Saw pod success Feb 15 14:13:36.091: INFO: Pod "downwardapi-volume-c51fcce3-ea6e-421a-9a70-431cd26143ab" satisfied condition "success or failure" Feb 15 14:13:36.097: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c51fcce3-ea6e-421a-9a70-431cd26143ab container client-container: STEP: delete the pod Feb 15 14:13:36.509: INFO: Waiting for pod downwardapi-volume-c51fcce3-ea6e-421a-9a70-431cd26143ab to disappear Feb 15 14:13:36.521: INFO: Pod downwardapi-volume-c51fcce3-ea6e-421a-9a70-431cd26143ab no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:13:36.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4492" for this suite. Feb 15 14:13:42.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:13:42.710: INFO: namespace projected-4492 deletion completed in 6.180240724s • [SLOW TEST:20.862 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:13:42.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Feb 15 14:13:42.787: INFO: Waiting up to 5m0s for pod "client-containers-2174833e-3dfc-4ef7-8509-9a4934451b59" in namespace "containers-3994" to be "success or failure" Feb 15 14:13:42.808: INFO: Pod "client-containers-2174833e-3dfc-4ef7-8509-9a4934451b59": Phase="Pending", Reason="", readiness=false. Elapsed: 20.458607ms Feb 15 14:13:44.836: INFO: Pod "client-containers-2174833e-3dfc-4ef7-8509-9a4934451b59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048408427s Feb 15 14:13:46.858: INFO: Pod "client-containers-2174833e-3dfc-4ef7-8509-9a4934451b59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070654732s Feb 15 14:13:48.871: INFO: Pod "client-containers-2174833e-3dfc-4ef7-8509-9a4934451b59": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.083808562s Feb 15 14:13:50.920: INFO: Pod "client-containers-2174833e-3dfc-4ef7-8509-9a4934451b59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.132283513s STEP: Saw pod success Feb 15 14:13:50.920: INFO: Pod "client-containers-2174833e-3dfc-4ef7-8509-9a4934451b59" satisfied condition "success or failure" Feb 15 14:13:50.927: INFO: Trying to get logs from node iruya-node pod client-containers-2174833e-3dfc-4ef7-8509-9a4934451b59 container test-container: STEP: delete the pod Feb 15 14:13:50.994: INFO: Waiting for pod client-containers-2174833e-3dfc-4ef7-8509-9a4934451b59 to disappear Feb 15 14:13:51.001: INFO: Pod client-containers-2174833e-3dfc-4ef7-8509-9a4934451b59 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:13:51.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3994" for this suite. Feb 15 14:13:57.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:13:57.214: INFO: namespace containers-3994 deletion completed in 6.204975951s • [SLOW TEST:14.503 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:13:57.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Feb 15 14:13:57.336: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix186687952/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:13:57.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1452" for this suite. 
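The proxy test above only needs to fetch /api/ through the socket that `kubectl proxy --unix-socket` opened. An equivalent client in plain Go, dialing the socket directly (the socket path is illustrative; the test creates a fresh temp directory each run):

    package main

    import (
        "context"
        "fmt"
        "io/ioutil"
        "log"
        "net"
        "net/http"
    )

    func main() {
        sock := "/tmp/kubectl-proxy-unix/test" // illustrative path
        client := &http.Client{
            Transport: &http.Transport{
                // Ignore the host/port in the URL and dial the socket instead.
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return net.Dial("unix", sock)
                },
            },
        }
        resp, err := client.Get("http://unix/api/") // host is a placeholder
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        fmt.Println(string(body)) // expected to list API versions, e.g. v1
    }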
Feb 15 14:14:03.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:14:03.609: INFO: namespace kubectl-1452 deletion completed in 6.136637797s • [SLOW TEST:6.394 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:14:03.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 15 14:14:03.772: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 15 14:14:08.787: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 15 14:14:12.796: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 15 14:14:14.803: INFO: Creating deployment "test-rollover-deployment" Feb 15 14:14:14.837: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 15 14:14:16.870: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 15 14:14:16.882: INFO: Ensure that both replica sets have 1 created replica Feb 15 14:14:16.894: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 15 14:14:16.906: INFO: Updating deployment test-rollover-deployment Feb 15 14:14:16.906: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 15 14:14:18.959: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 15 14:14:18.988: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 15 14:14:18.999: INFO: all replica sets need to contain the pod-template-hash label Feb 15 14:14:19.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372857, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, 
loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 14:14:21.010: INFO: all replica sets need to contain the pod-template-hash label Feb 15 14:14:21.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372857, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 14:14:23.028: INFO: all replica sets need to contain the pod-template-hash label Feb 15 14:14:23.028: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372857, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 14:14:25.017: INFO: all replica sets need to contain the pod-template-hash label Feb 15 14:14:25.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372864, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 14:14:27.014: INFO: all replica sets need to contain the pod-template-hash label Feb 15 14:14:27.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372864, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 14:14:29.011: INFO: all replica sets need to contain the pod-template-hash label Feb 15 14:14:29.012: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372864, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 14:14:31.009: INFO: all replica sets need to contain the pod-template-hash label Feb 15 14:14:31.009: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372864, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 14:14:33.023: INFO: all replica sets need to contain the pod-template-hash label Feb 15 14:14:33.024: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372864, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717372854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 15 14:14:35.041: INFO: Feb 15 14:14:35.041: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 15 14:14:35.092: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6051,SelfLink:/apis/apps/v1/namespaces/deployment-6051/deployments/test-rollover-deployment,UID:6a936d77-488f-4395-859b-00b7bb00ee74,ResourceVersion:24456591,Generation:2,CreationTimestamp:2020-02-15 14:14:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-15 14:14:14 +0000 UTC 2020-02-15 14:14:14 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-15 
14:14:34 +0000 UTC 2020-02-15 14:14:14 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 15 14:14:35.107: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6051,SelfLink:/apis/apps/v1/namespaces/deployment-6051/replicasets/test-rollover-deployment-854595fc44,UID:814fd8ed-dc88-498d-a542-8c45c88ff93f,ResourceVersion:24456581,Generation:2,CreationTimestamp:2020-02-15 14:14:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6a936d77-488f-4395-859b-00b7bb00ee74 0xc000aad377 0xc000aad378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 15 14:14:35.107: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 15 14:14:35.108: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6051,SelfLink:/apis/apps/v1/namespaces/deployment-6051/replicasets/test-rollover-controller,UID:5c2b493b-783f-476e-8b9e-e03d8819eb4c,ResourceVersion:24456590,Generation:2,CreationTimestamp:2020-02-15 14:14:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6a936d77-488f-4395-859b-00b7bb00ee74 0xc000aad1ef 0xc000aad200}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 15 14:14:35.108: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6051,SelfLink:/apis/apps/v1/namespaces/deployment-6051/replicasets/test-rollover-deployment-9b8b997cf,UID:b4b09b4b-fbb3-4d63-9c15-47f2df74ce94,ResourceVersion:24456549,Generation:2,CreationTimestamp:2020-02-15 14:14:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6a936d77-488f-4395-859b-00b7bb00ee74 0xc000aad720 0xc000aad721}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 15 14:14:35.111: INFO: Pod "test-rollover-deployment-854595fc44-qssk5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-qssk5,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6051,SelfLink:/api/v1/namespaces/deployment-6051/pods/test-rollover-deployment-854595fc44-qssk5,UID:53d6d041-ce74-43ea-aaf7-7160f4cde125,ResourceVersion:24456565,Generation:0,CreationTimestamp:2020-02-15 14:14:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 814fd8ed-dc88-498d-a542-8c45c88ff93f 0xc000357c57 0xc000357c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9dls {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9dls,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-s9dls true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d3e040} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d3e060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:14:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:14:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:14:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:14:17 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-15 14:14:17 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-15 14:14:23 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://c6d4e9db13cdac784c2b3372ae572ac1ab126600a722ff86fd0ee59d98f6d2c4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:14:35.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6051" for this suite. 
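The long Progressing loop above is a direct consequence of the strategy recorded in the deployment dump: MaxUnavailable:0, MaxSurge:1, and MinReadySeconds:10, so the single surged pod must stay Ready for ten seconds before the old replica set can be scaled down. A sketch of that strategy block as it would be written in Go (only the rollout-relevant fields shown):

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        maxUnavailable := intstr.FromInt(0)
        maxSurge := intstr.FromInt(1)
        spec := appsv1.DeploymentSpec{
            // A new pod must be Ready for 10s before it counts as available,
            // which is why the run above loops on UnavailableReplicas:1.
            MinReadySeconds: 10,
            Strategy: appsv1.DeploymentStrategy{
                Type: appsv1.RollingUpdateDeploymentStrategyType,
                RollingUpdate: &appsv1.RollingUpdateDeployment{
                    MaxUnavailable: &maxUnavailable, // never drop below desired count
                    MaxSurge:       &maxSurge,       // allow one extra pod during rollover
                },
            },
        }
        b, _ := json.MarshalIndent(spec, "", "  ")
        fmt.Println(string(b))
    }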
Feb 15 14:14:41.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:14:41.333: INFO: namespace deployment-6051 deletion completed in 6.217970904s • [SLOW TEST:37.724 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:14:41.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-ceaf90b2-dd41-4b03-8fcf-12ace82931e6 in namespace container-probe-4254 Feb 15 14:14:49.519: INFO: Started pod busybox-ceaf90b2-dd41-4b03-8fcf-12ace82931e6 in namespace container-probe-4254 STEP: checking the pod's current state and verifying that restartCount is present Feb 15 14:14:49.524: INFO: Initial restart count of pod busybox-ceaf90b2-dd41-4b03-8fcf-12ace82931e6 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:18:49.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4254" for this suite. 
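This probe test is the negative case: the container creates /tmp/health and keeps running, so the exec probe (`cat /tmp/health`) keeps exiting 0 and the restart count observed at 14:14:49 is expected to still be 0 four minutes later. A sketch of such a container, assuming the 1.15-era API where the probe's action field is still named Handler (renamed ProbeHandler in later releases); the delay and threshold values are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:  "busybox",
            Image: "busybox",
            // Create the file and keep the container alive; as long as the
            // file exists, `cat /tmp/health` exits 0 and no restart happens.
            Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
            LivenessProbe: &corev1.Probe{
                Handler: corev1.Handler{ // ProbeHandler in newer APIs
                    Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                },
                InitialDelaySeconds: 15,
                FailureThreshold:    1,
            },
        }
        b, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(b))
    }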
Feb 15 14:18:55.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:18:55.961: INFO: namespace container-probe-4254 deletion completed in 6.243761698s • [SLOW TEST:254.628 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:18:55.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-d6f3311d-9699-4dd0-bd36-b72dc5f88563 STEP: Creating a pod to test consume configMaps Feb 15 14:18:56.132: INFO: Waiting up to 5m0s for pod "pod-configmaps-bea4614b-df3c-4695-a013-651cedff8818" in namespace "configmap-840" to be "success or failure" Feb 15 14:18:56.211: INFO: Pod "pod-configmaps-bea4614b-df3c-4695-a013-651cedff8818": Phase="Pending", Reason="", readiness=false. Elapsed: 78.414069ms Feb 15 14:18:58.255: INFO: Pod "pod-configmaps-bea4614b-df3c-4695-a013-651cedff8818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122888052s Feb 15 14:19:00.285: INFO: Pod "pod-configmaps-bea4614b-df3c-4695-a013-651cedff8818": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152802943s Feb 15 14:19:02.292: INFO: Pod "pod-configmaps-bea4614b-df3c-4695-a013-651cedff8818": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159583696s Feb 15 14:19:04.317: INFO: Pod "pod-configmaps-bea4614b-df3c-4695-a013-651cedff8818": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.184883621s STEP: Saw pod success Feb 15 14:19:04.318: INFO: Pod "pod-configmaps-bea4614b-df3c-4695-a013-651cedff8818" satisfied condition "success or failure" Feb 15 14:19:04.358: INFO: Trying to get logs from node iruya-node pod pod-configmaps-bea4614b-df3c-4695-a013-651cedff8818 container configmap-volume-test: STEP: delete the pod Feb 15 14:19:04.522: INFO: Waiting for pod pod-configmaps-bea4614b-df3c-4695-a013-651cedff8818 to disappear Feb 15 14:19:04.533: INFO: Pod pod-configmaps-bea4614b-df3c-4695-a013-651cedff8818 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:19:04.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-840" for this suite. 
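Consuming a ConfigMap "in volume" means each key of the map shows up as a file under the mount path, and the test container simply cats one of them to produce the logs checked above. A sketch of the volume/volumeMount wiring, with illustrative names and paths:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        defaultMode := int32(0644)
        spec := corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "configmap-test-volume", // illustrative name
                        },
                        DefaultMode: &defaultMode,
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:  "configmap-volume-test",
                Image: "busybox",
                // Each configmap key appears as a file under the mount path.
                Command: []string{"cat", "/etc/configmap-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "configmap-volume",
                    MountPath: "/etc/configmap-volume",
                }},
            }},
        }
        b, _ := json.MarshalIndent(spec, "", "  ")
        fmt.Println(string(b))
    }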
Feb 15 14:19:10.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:19:10.714: INFO: namespace configmap-840 deletion completed in 6.175426429s • [SLOW TEST:14.753 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:19:10.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-1981186f-99fc-41f9-a9bf-7686dfedd667 STEP: Creating a pod to test consume configMaps Feb 15 14:19:10.885: INFO: Waiting up to 5m0s for pod "pod-configmaps-93dbcc52-5781-40bd-8e9a-696c205b838a" in namespace "configmap-5362" to be "success or failure" Feb 15 14:19:10.895: INFO: Pod "pod-configmaps-93dbcc52-5781-40bd-8e9a-696c205b838a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.575818ms Feb 15 14:19:12.902: INFO: Pod "pod-configmaps-93dbcc52-5781-40bd-8e9a-696c205b838a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016976532s Feb 15 14:19:14.914: INFO: Pod "pod-configmaps-93dbcc52-5781-40bd-8e9a-696c205b838a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028898623s Feb 15 14:19:16.922: INFO: Pod "pod-configmaps-93dbcc52-5781-40bd-8e9a-696c205b838a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03694167s Feb 15 14:19:18.933: INFO: Pod "pod-configmaps-93dbcc52-5781-40bd-8e9a-696c205b838a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048362738s Feb 15 14:19:20.946: INFO: Pod "pod-configmaps-93dbcc52-5781-40bd-8e9a-696c205b838a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060648688s STEP: Saw pod success Feb 15 14:19:20.946: INFO: Pod "pod-configmaps-93dbcc52-5781-40bd-8e9a-696c205b838a" satisfied condition "success or failure" Feb 15 14:19:20.957: INFO: Trying to get logs from node iruya-node pod pod-configmaps-93dbcc52-5781-40bd-8e9a-696c205b838a container configmap-volume-test: STEP: delete the pod Feb 15 14:19:21.023: INFO: Waiting for pod pod-configmaps-93dbcc52-5781-40bd-8e9a-696c205b838a to disappear Feb 15 14:19:21.035: INFO: Pod pod-configmaps-93dbcc52-5781-40bd-8e9a-696c205b838a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:19:21.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5362" for this suite. 
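The "with mappings" variant differs only in the Items list: instead of projecting every key under its own name, the listed keys are written to explicit paths, optionally with a per-item file mode (the same mechanism the earlier projected "mode on item file" tests exercise). A sketch of the Items field, values illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // per-item file mode, as in the mode-setting tests
        src := corev1.ConfigMapVolumeSource{
            LocalObjectReference: corev1.LocalObjectReference{
                Name: "configmap-test-volume-map", // illustrative name
            },
            // Without Items, every key becomes a file named after the key;
            // with Items, only the listed keys are projected, at these paths.
            Items: []corev1.KeyToPath{{
                Key:  "data-1",
                Path: "path/to/data-2",
                Mode: &mode,
            }},
        }
        b, _ := json.MarshalIndent(src, "", "  ")
        fmt.Println(string(b))
    }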
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:19:27.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 15 14:19:36.634: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:19:36.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3129" for this suite.
Feb 15 14:19:42.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:19:42.858: INFO: namespace container-runtime-3129 deletion completed in 6.179548301s

• [SLOW TEST:15.590 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
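What the spec above sets up, roughly: a container running as a non-root user writes "DONE" to a non-default terminationMessagePath, and the kubelet copies that file into the terminated container's status, which is the "Expected: &{DONE}" comparison in the log. A sketch of the relevant container fields; the UID, image, and custom path are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	nonRoot := int64(1000) // illustrative non-root UID
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox", // illustrative
		Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		// Non-default path: on termination the kubelet reads this file and
		// surfaces it as status.containerStatuses[].state.terminated.message.
		TerminationMessagePath: "/dev/termination-custom-log",
		SecurityContext: &corev1.SecurityContext{
			RunAsUser: &nonRoot,
		},
	}
	fmt.Println(c.TerminationMessagePath)
}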
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:19:42.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 15 14:19:42.961: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4706,SelfLink:/api/v1/namespaces/watch-4706/configmaps/e2e-watch-test-watch-closed,UID:570e3897-c425-47d0-afc4-2818b30acb98,ResourceVersion:24457134,Generation:0,CreationTimestamp:2020-02-15 14:19:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 15 14:19:42.961: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4706,SelfLink:/api/v1/namespaces/watch-4706/configmaps/e2e-watch-test-watch-closed,UID:570e3897-c425-47d0-afc4-2818b30acb98,ResourceVersion:24457135,Generation:0,CreationTimestamp:2020-02-15 14:19:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 15 14:19:43.019: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4706,SelfLink:/api/v1/namespaces/watch-4706/configmaps/e2e-watch-test-watch-closed,UID:570e3897-c425-47d0-afc4-2818b30acb98,ResourceVersion:24457136,Generation:0,CreationTimestamp:2020-02-15 14:19:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 15 14:19:43.019: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4706,SelfLink:/api/v1/namespaces/watch-4706/configmaps/e2e-watch-test-watch-closed,UID:570e3897-c425-47d0-afc4-2818b30acb98,ResourceVersion:24457137,Generation:0,CreationTimestamp:2020-02-15 14:19:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:19:43.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4706" for this suite.
Feb 15 14:19:49.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:19:49.218: INFO: namespace watch-4706 deletion completed in 6.195082197s

• [SLOW TEST:6.360 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
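The restart trick this spec verifies: keep the resourceVersion of the last event the closed watch delivered, then open a new watch from it so the MODIFIED and DELETED events that happened in between are replayed rather than lost. A hedged client-go sketch, assuming pre-context v1.15-era signatures (newer releases take a context.Context); the namespace, label, and resourceVersion are copied from the log:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Resuming from the last observed resourceVersion (24457135 above)
	// replays everything that happened while the first watch was closed.
	w, err := client.CoreV1().ConfigMaps("watch-4706").Watch(metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: "24457135",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s\n", ev.Type) // expect MODIFIED, then DELETED
	}
}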
"downwardapi-volume-b04cb340-fa90-4f7f-b025-7e4b1203e930": Phase="Pending", Reason="", readiness=false. Elapsed: 8.382709ms Feb 15 14:19:51.347: INFO: Pod "downwardapi-volume-b04cb340-fa90-4f7f-b025-7e4b1203e930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014559386s Feb 15 14:19:53.357: INFO: Pod "downwardapi-volume-b04cb340-fa90-4f7f-b025-7e4b1203e930": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025058064s Feb 15 14:19:55.371: INFO: Pod "downwardapi-volume-b04cb340-fa90-4f7f-b025-7e4b1203e930": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03919408s Feb 15 14:19:57.383: INFO: Pod "downwardapi-volume-b04cb340-fa90-4f7f-b025-7e4b1203e930": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050596483s Feb 15 14:19:59.391: INFO: Pod "downwardapi-volume-b04cb340-fa90-4f7f-b025-7e4b1203e930": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059120518s STEP: Saw pod success Feb 15 14:19:59.391: INFO: Pod "downwardapi-volume-b04cb340-fa90-4f7f-b025-7e4b1203e930" satisfied condition "success or failure" Feb 15 14:19:59.395: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b04cb340-fa90-4f7f-b025-7e4b1203e930 container client-container: STEP: delete the pod Feb 15 14:19:59.472: INFO: Waiting for pod downwardapi-volume-b04cb340-fa90-4f7f-b025-7e4b1203e930 to disappear Feb 15 14:19:59.516: INFO: Pod downwardapi-volume-b04cb340-fa90-4f7f-b025-7e4b1203e930 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 15 14:19:59.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-930" for this suite. Feb 15 14:20:05.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 15 14:20:05.672: INFO: namespace downward-api-930 deletion completed in 6.147948047s • [SLOW TEST:16.453 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 15 14:20:05.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Feb 15 14:20:05.865: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7402" to be "success or failure" Feb 15 14:20:06.007: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:20:05.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb 15 14:20:05.865: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7402" to be "success or failure"
Feb 15 14:20:06.007: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 141.984177ms
Feb 15 14:20:08.018: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152850624s
Feb 15 14:20:10.025: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160211918s
Feb 15 14:20:12.042: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176845328s
Feb 15 14:20:14.061: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.196179955s
Feb 15 14:20:16.075: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.209676311s
STEP: Saw pod success
Feb 15 14:20:16.075: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 15 14:20:16.080: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 15 14:20:16.225: INFO: Waiting for pod pod-host-path-test to disappear
Feb 15 14:20:16.247: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:20:16.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7402" for this suite.
Feb 15 14:20:22.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:20:22.391: INFO: namespace hostpath-7402 deletion completed in 6.124189453s

• [SLOW TEST:16.717 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
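For reference, the hostPath spec mounts a directory from the node's filesystem and has the test container inspect the mount point's file mode. A minimal sketch of the volume, object construction only; the node path and type are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate // illustrative type
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/tmp/test-volume", // illustrative path on the node
				Type: &hostPathType,
			},
		},
	}
	// The test pod (roughly) stats the mount point and asserts its mode;
	// the exact helper image and flags are the suite's own.
	fmt.Println(vol.VolumeSource.HostPath.Path)
}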
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:20:22.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0215 14:20:34.661395 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 15 14:20:34.661: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:20:34.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2420" for this suite.
Feb 15 14:20:40.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:20:40.816: INFO: namespace gc-2420 deletion completed in 6.147180618s

• [SLOW TEST:18.425 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
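"Not orphaning" corresponds to a delete whose propagation policy lets the garbage collector remove the RC's dependent pods, which is what "wait for all pods to be garbage collected" polls for. A hedged sketch, assuming pre-context client-go v1.15-era signatures; the RC name is illustrative since the log does not print it:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Background propagation: the RC object is removed immediately and the
	// garbage collector then deletes the pods it owned via ownerReferences.
	policy := metav1.DeletePropagationBackground
	err = client.CoreV1().ReplicationControllers("gc-2420").Delete(
		"simpletest.rc", // illustrative name
		&metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		panic(err)
	}
}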
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:20:40.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 15 14:20:40.953: INFO: Waiting up to 5m0s for pod "downward-api-c47351d0-85ae-4e1d-a7e1-6c89985e0e21" in namespace "downward-api-2481" to be "success or failure"
Feb 15 14:20:40.967: INFO: Pod "downward-api-c47351d0-85ae-4e1d-a7e1-6c89985e0e21": Phase="Pending", Reason="", readiness=false. Elapsed: 14.271125ms
Feb 15 14:20:42.975: INFO: Pod "downward-api-c47351d0-85ae-4e1d-a7e1-6c89985e0e21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022359617s
Feb 15 14:20:44.980: INFO: Pod "downward-api-c47351d0-85ae-4e1d-a7e1-6c89985e0e21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027109421s
Feb 15 14:20:46.989: INFO: Pod "downward-api-c47351d0-85ae-4e1d-a7e1-6c89985e0e21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035463898s
Feb 15 14:20:49.010: INFO: Pod "downward-api-c47351d0-85ae-4e1d-a7e1-6c89985e0e21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057120933s
STEP: Saw pod success
Feb 15 14:20:49.010: INFO: Pod "downward-api-c47351d0-85ae-4e1d-a7e1-6c89985e0e21" satisfied condition "success or failure"
Feb 15 14:20:49.014: INFO: Trying to get logs from node iruya-node pod downward-api-c47351d0-85ae-4e1d-a7e1-6c89985e0e21 container dapi-container: 
STEP: delete the pod
Feb 15 14:20:49.131: INFO: Waiting for pod downward-api-c47351d0-85ae-4e1d-a7e1-6c89985e0e21 to disappear
Feb 15 14:20:49.137: INFO: Pod downward-api-c47351d0-85ae-4e1d-a7e1-6c89985e0e21 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:20:49.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2481" for this suite.
Feb 15 14:20:55.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:20:55.272: INFO: namespace downward-api-2481 deletion completed in 6.124601579s

• [SLOW TEST:14.455 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:20:55.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 15 14:20:55.425: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 14.301528ms)
Feb 15 14:20:55.433: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.805915ms)
Feb 15 14:20:55.440: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.365109ms)
Feb 15 14:20:55.447: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.264638ms)
Feb 15 14:20:55.452: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.533574ms)
Feb 15 14:20:55.458: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.703042ms)
Feb 15 14:20:55.464: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.870766ms)
Feb 15 14:20:55.470: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.910797ms)
Feb 15 14:20:55.475: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.325356ms)
Feb 15 14:20:55.481: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.108626ms)
Feb 15 14:20:55.486: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.190571ms)
Feb 15 14:20:55.519: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 32.809607ms)
Feb 15 14:20:55.528: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.030769ms)
Feb 15 14:20:55.533: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.518177ms)
Feb 15 14:20:55.540: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.413403ms)
Feb 15 14:20:55.545: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.3383ms)
Feb 15 14:20:55.551: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.088302ms)
Feb 15 14:20:55.556: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.827078ms)
Feb 15 14:20:55.561: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.471809ms)
Feb 15 14:20:55.567: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.624671ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:20:55.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3969" for this suite.
Feb 15 14:21:01.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:21:01.738: INFO: namespace proxy-3969 deletion completed in 6.165962787s

• [SLOW TEST:6.466 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
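The twenty numbered requests above all hit the same node proxy subresource and time each response. A hedged client-go sketch of one such request, assuming pre-context v1.15-era signatures (newer releases use DoRaw(ctx)); the node name and kubeconfig path are from the log:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// GET /api/v1/nodes/iruya-node/proxy/logs/ — the apiserver proxies the
	// request to the kubelet, whose response is the node's /var/log listing
	// (the alternatives.log entries in the output above).
	body, err := client.CoreV1().RESTClient().
		Get().
		Resource("nodes").
		Name("iruya-node").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body)
}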
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:21:01.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 15 14:21:09.939: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-68324e6a-4222-4dce-b77e-01e3d29e4ce3,GenerateName:,Namespace:events-7558,SelfLink:/api/v1/namespaces/events-7558/pods/send-events-68324e6a-4222-4dce-b77e-01e3d29e4ce3,UID:5fb35804-ff6d-4f3c-95a1-1cd153a2b5bc,ResourceVersion:24457397,Generation:0,CreationTimestamp:2020-02-15 14:21:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 900435974,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47c8j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47c8j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-47c8j true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00274b4b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00274b4d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:21:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:21:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:21:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:21:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-15 14:21:01 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-15 14:21:09 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://85435971ce376033d60b22363b1f831de6ca1a885113b0d5d50108d47a495e04}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb 15 14:21:11.952: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 15 14:21:13.978: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:21:13.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7558" for this suite.
Feb 15 14:21:52.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:21:52.301: INFO: namespace events-7558 deletion completed in 38.281484874s

• [SLOW TEST:50.563 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
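What "checking for scheduler event about the pod" boils down to: listing events whose involvedObject matches the pod and filtering by source. A hedged sketch with pre-context client-go v1.15-era signatures; the namespace, pod name, and scheduler component come from the log, the field-selector shape is an assumption about the supported event selectors:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Events about one pod, emitted by the default scheduler; swapping the
	// source for the node name would surface the kubelet events instead.
	events, err := client.CoreV1().Events("events-7558").List(metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Pod" +
			",involvedObject.name=send-events-68324e6a-4222-4dce-b77e-01e3d29e4ce3" +
			",source=default-scheduler",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Reason, e.Source.Component, e.Message)
	}
}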
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:21:52.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 15 14:21:52.500: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5465,SelfLink:/api/v1/namespaces/watch-5465/configmaps/e2e-watch-test-configmap-a,UID:1d141ad1-bd16-4089-9a51-21e6dbee91e8,ResourceVersion:24457472,Generation:0,CreationTimestamp:2020-02-15 14:21:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 15 14:21:52.501: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5465,SelfLink:/api/v1/namespaces/watch-5465/configmaps/e2e-watch-test-configmap-a,UID:1d141ad1-bd16-4089-9a51-21e6dbee91e8,ResourceVersion:24457472,Generation:0,CreationTimestamp:2020-02-15 14:21:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 15 14:22:02.525: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5465,SelfLink:/api/v1/namespaces/watch-5465/configmaps/e2e-watch-test-configmap-a,UID:1d141ad1-bd16-4089-9a51-21e6dbee91e8,ResourceVersion:24457489,Generation:0,CreationTimestamp:2020-02-15 14:21:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 15 14:22:02.527: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5465,SelfLink:/api/v1/namespaces/watch-5465/configmaps/e2e-watch-test-configmap-a,UID:1d141ad1-bd16-4089-9a51-21e6dbee91e8,ResourceVersion:24457489,Generation:0,CreationTimestamp:2020-02-15 14:21:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 15 14:22:12.553: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5465,SelfLink:/api/v1/namespaces/watch-5465/configmaps/e2e-watch-test-configmap-a,UID:1d141ad1-bd16-4089-9a51-21e6dbee91e8,ResourceVersion:24457503,Generation:0,CreationTimestamp:2020-02-15 14:21:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 15 14:22:12.555: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5465,SelfLink:/api/v1/namespaces/watch-5465/configmaps/e2e-watch-test-configmap-a,UID:1d141ad1-bd16-4089-9a51-21e6dbee91e8,ResourceVersion:24457503,Generation:0,CreationTimestamp:2020-02-15 14:21:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 15 14:22:22.577: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5465,SelfLink:/api/v1/namespaces/watch-5465/configmaps/e2e-watch-test-configmap-a,UID:1d141ad1-bd16-4089-9a51-21e6dbee91e8,ResourceVersion:24457517,Generation:0,CreationTimestamp:2020-02-15 14:21:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 15 14:22:22.578: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5465,SelfLink:/api/v1/namespaces/watch-5465/configmaps/e2e-watch-test-configmap-a,UID:1d141ad1-bd16-4089-9a51-21e6dbee91e8,ResourceVersion:24457517,Generation:0,CreationTimestamp:2020-02-15 14:21:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 15 14:22:32.598: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5465,SelfLink:/api/v1/namespaces/watch-5465/configmaps/e2e-watch-test-configmap-b,UID:8d75f740-3633-48a4-9105-8245facec3ec,ResourceVersion:24457531,Generation:0,CreationTimestamp:2020-02-15 14:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 15 14:22:32.598: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5465,SelfLink:/api/v1/namespaces/watch-5465/configmaps/e2e-watch-test-configmap-b,UID:8d75f740-3633-48a4-9105-8245facec3ec,ResourceVersion:24457531,Generation:0,CreationTimestamp:2020-02-15 14:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 15 14:22:42.630: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5465,SelfLink:/api/v1/namespaces/watch-5465/configmaps/e2e-watch-test-configmap-b,UID:8d75f740-3633-48a4-9105-8245facec3ec,ResourceVersion:24457545,Generation:0,CreationTimestamp:2020-02-15 14:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 15 14:22:42.631: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5465,SelfLink:/api/v1/namespaces/watch-5465/configmaps/e2e-watch-test-configmap-b,UID:8d75f740-3633-48a4-9105-8245facec3ec,ResourceVersion:24457545,Generation:0,CreationTimestamp:2020-02-15 14:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:22:52.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5465" for this suite.
Feb 15 14:22:58.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:22:58.881: INFO: namespace watch-5465 deletion completed in 6.232224824s

• [SLOW TEST:66.580 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
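The three watches the test opens differ only in their label selectors; the "A or B" watcher uses a set-based selector and therefore sees every notification above. A hedged sketch of that third watch, pre-context client-go v1.15-era signatures, with the namespace and label values taken from the log:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Set-based selector matching either label value, so this watcher
	// observes the ADDED/MODIFIED/DELETED events for both configmaps.
	w, err := client.CoreV1().ConfigMaps("watch-5465").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap in (multiple-watchers-A, multiple-watchers-B)",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s\n", ev.Type)
	}
}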
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:22:58.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 15 14:22:59.001: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 15 14:23:02.155: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:23:02.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3569" for this suite.
Feb 15 14:23:12.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:23:12.513: INFO: namespace replication-controller-3569 deletion completed in 10.18324972s

• [SLOW TEST:13.632 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
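For reference, the quota behind "condition-test": a ResourceQuota capping the namespace at two pods, which is what forces the over-sized RC to surface a failure condition (ReplicaFailure) until it is scaled down. A sketch of the object, construction only; the name matches the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				// Only two pods may exist in the namespace; an RC asking
				// for more replicas fails pod creation and records the
				// failure in its status conditions.
				corev1.ResourcePods: resource.MustParse("2"),
			},
		},
	}
	fmt.Printf("%+v\n", quota.Spec.Hard)
}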
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:23:12.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-042ab3c4-c3c6-4ee6-82f1-606a2591832e
STEP: Creating a pod to test consume configMaps
Feb 15 14:23:12.651: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-514c806c-f21e-4317-8370-f70c5a9c75db" in namespace "projected-5304" to be "success or failure"
Feb 15 14:23:12.657: INFO: Pod "pod-projected-configmaps-514c806c-f21e-4317-8370-f70c5a9c75db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.328448ms
Feb 15 14:23:14.671: INFO: Pod "pod-projected-configmaps-514c806c-f21e-4317-8370-f70c5a9c75db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020319077s
Feb 15 14:23:16.686: INFO: Pod "pod-projected-configmaps-514c806c-f21e-4317-8370-f70c5a9c75db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034651189s
Feb 15 14:23:18.700: INFO: Pod "pod-projected-configmaps-514c806c-f21e-4317-8370-f70c5a9c75db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048999317s
Feb 15 14:23:20.715: INFO: Pod "pod-projected-configmaps-514c806c-f21e-4317-8370-f70c5a9c75db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063696639s
Feb 15 14:23:22.723: INFO: Pod "pod-projected-configmaps-514c806c-f21e-4317-8370-f70c5a9c75db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071835508s
STEP: Saw pod success
Feb 15 14:23:22.723: INFO: Pod "pod-projected-configmaps-514c806c-f21e-4317-8370-f70c5a9c75db" satisfied condition "success or failure"
Feb 15 14:23:22.727: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-514c806c-f21e-4317-8370-f70c5a9c75db container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 14:23:23.280: INFO: Waiting for pod pod-projected-configmaps-514c806c-f21e-4317-8370-f70c5a9c75db to disappear
Feb 15 14:23:23.332: INFO: Pod pod-projected-configmaps-514c806c-f21e-4317-8370-f70c5a9c75db no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:23:23.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5304" for this suite.
Feb 15 14:23:29.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:23:29.544: INFO: namespace projected-5304 deletion completed in 6.199912875s

• [SLOW TEST:17.029 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
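The "as non-root" variant pairs a projected ConfigMap volume with a container running under a non-root UID. A hedged sketch of that pod spec shape, construction only; the UID, image, and mount path are illustrative, the ConfigMap name is copied from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	nonRoot := int64(1000) // illustrative non-root UID
	spec := corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "projected-configmap-volume",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{
								// Name copied from the log above.
								Name: "projected-configmap-test-volume-042ab3c4-c3c6-4ee6-82f1-606a2591832e",
							},
						},
					}},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "projected-configmap-volume-test",
			Image: "busybox", // illustrative
			SecurityContext: &corev1.SecurityContext{
				RunAsUser: &nonRoot, // the file must be readable as this UID
			},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "projected-configmap-volume",
				MountPath: "/etc/projected-configmap-volume",
			}},
		}},
	}
	fmt.Println(len(spec.Volumes), *spec.Containers[0].SecurityContext.RunAsUser)
}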
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:23:29.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 15 14:23:29.705: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5eb6adc0-3756-4b44-b0a5-c370194ecc6e" in namespace "projected-1280" to be "success or failure"
Feb 15 14:23:29.737: INFO: Pod "downwardapi-volume-5eb6adc0-3756-4b44-b0a5-c370194ecc6e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.532381ms
Feb 15 14:23:31.746: INFO: Pod "downwardapi-volume-5eb6adc0-3756-4b44-b0a5-c370194ecc6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04051973s
Feb 15 14:23:33.759: INFO: Pod "downwardapi-volume-5eb6adc0-3756-4b44-b0a5-c370194ecc6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053967023s
Feb 15 14:23:35.772: INFO: Pod "downwardapi-volume-5eb6adc0-3756-4b44-b0a5-c370194ecc6e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066410218s
Feb 15 14:23:37.782: INFO: Pod "downwardapi-volume-5eb6adc0-3756-4b44-b0a5-c370194ecc6e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076232061s
Feb 15 14:23:39.795: INFO: Pod "downwardapi-volume-5eb6adc0-3756-4b44-b0a5-c370194ecc6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08998367s
STEP: Saw pod success
Feb 15 14:23:39.795: INFO: Pod "downwardapi-volume-5eb6adc0-3756-4b44-b0a5-c370194ecc6e" satisfied condition "success or failure"
Feb 15 14:23:39.803: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5eb6adc0-3756-4b44-b0a5-c370194ecc6e container client-container: 
STEP: delete the pod
Feb 15 14:23:40.124: INFO: Waiting for pod downwardapi-volume-5eb6adc0-3756-4b44-b0a5-c370194ecc6e to disappear
Feb 15 14:23:40.188: INFO: Pod downwardapi-volume-5eb6adc0-3756-4b44-b0a5-c370194ecc6e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:23:40.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1280" for this suite.
Feb 15 14:23:46.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:23:46.414: INFO: namespace projected-1280 deletion completed in 6.217270115s

• [SLOW TEST:16.870 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:23:46.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 15 14:23:46.614: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b1dc68a-6354-4d28-a536-1abce888e398" in namespace "projected-8077" to be "success or failure"
Feb 15 14:23:46.700: INFO: Pod "downwardapi-volume-9b1dc68a-6354-4d28-a536-1abce888e398": Phase="Pending", Reason="", readiness=false. Elapsed: 85.12407ms
Feb 15 14:23:48.710: INFO: Pod "downwardapi-volume-9b1dc68a-6354-4d28-a536-1abce888e398": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09562311s
Feb 15 14:23:51.015: INFO: Pod "downwardapi-volume-9b1dc68a-6354-4d28-a536-1abce888e398": Phase="Pending", Reason="", readiness=false. Elapsed: 4.400613713s
Feb 15 14:23:53.035: INFO: Pod "downwardapi-volume-9b1dc68a-6354-4d28-a536-1abce888e398": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420753803s
Feb 15 14:23:55.046: INFO: Pod "downwardapi-volume-9b1dc68a-6354-4d28-a536-1abce888e398": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.431738266s
STEP: Saw pod success
Feb 15 14:23:55.047: INFO: Pod "downwardapi-volume-9b1dc68a-6354-4d28-a536-1abce888e398" satisfied condition "success or failure"
Feb 15 14:23:55.050: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9b1dc68a-6354-4d28-a536-1abce888e398 container client-container: 
STEP: delete the pod
Feb 15 14:23:55.148: INFO: Waiting for pod downwardapi-volume-9b1dc68a-6354-4d28-a536-1abce888e398 to disappear
Feb 15 14:23:55.152: INFO: Pod downwardapi-volume-9b1dc68a-6354-4d28-a536-1abce888e398 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:23:55.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8077" for this suite.
Feb 15 14:24:01.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:24:01.364: INFO: namespace projected-8077 deletion completed in 6.205141247s

• [SLOW TEST:14.949 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
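Both projected downward API specs above use the same resourceFieldRef mechanism; the cpu variant is worth one extra note because a Divisor controls the units the kubelet writes. A hedged fragment (figures illustrative; in these specs the file sits under a ProjectedVolumeSource's downwardAPI source rather than a plain downwardAPI volume):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	file := corev1.DownwardAPIVolumeFile{
		Path: "cpu_request",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "requests.cpu",
			// With a 1m divisor, a 250m cpu request is written as "250".
			Divisor: resource.MustParse("1m"),
		},
	}
	fmt.Println(file.Path, file.ResourceFieldRef.Divisor.String())
}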
SSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:24:01.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-049c5d75-1708-491c-9609-6be8d407850a
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:24:01.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2415" for this suite.
Feb 15 14:24:07.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:24:07.662: INFO: namespace configmap-2415 deletion completed in 6.187349387s

• [SLOW TEST:6.298 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
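The failure path that spec asserts: a ConfigMap whose data map contains an empty key is rejected by apiserver validation. A hedged sketch, pre-context client-go v1.15-era signatures; the namespace echoes the log, the ConfigMap name is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"}, // illustrative
		Data:       map[string]string{"": "value"},                     // empty key: invalid
	}
	_, err = client.CoreV1().ConfigMaps("configmap-2415").Create(cm)
	// Validation should reject the object; the test passes when this is true.
	fmt.Println("IsInvalid:", apierrors.IsInvalid(err))
}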
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:24:07.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 15 14:24:07.809: INFO: namespace kubectl-9702
Feb 15 14:24:07.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9702'
Feb 15 14:24:10.241: INFO: stderr: ""
Feb 15 14:24:10.242: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 15 14:24:11.251: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 14:24:11.251: INFO: Found 0 / 1
Feb 15 14:24:12.251: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 14:24:12.251: INFO: Found 0 / 1
Feb 15 14:24:13.256: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 14:24:13.256: INFO: Found 0 / 1
Feb 15 14:24:14.261: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 14:24:14.262: INFO: Found 0 / 1
Feb 15 14:24:15.258: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 14:24:15.258: INFO: Found 0 / 1
Feb 15 14:24:16.249: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 14:24:16.250: INFO: Found 0 / 1
Feb 15 14:24:17.251: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 14:24:17.251: INFO: Found 0 / 1
Feb 15 14:24:18.254: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 14:24:18.255: INFO: Found 1 / 1
Feb 15 14:24:18.255: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 15 14:24:18.260: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 14:24:18.260: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Feb 15 14:24:18.260: INFO: wait on redis-master startup in kubectl-9702 
Feb 15 14:24:18.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rvk7c redis-master --namespace=kubectl-9702'
Feb 15 14:24:18.465: INFO: stderr: ""
Feb 15 14:24:18.465: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 15 Feb 14:24:17.087 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Feb 14:24:17.087 # Server started, Redis version 3.2.12\n1:M 15 Feb 14:24:17.088 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Feb 14:24:17.088 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 15 14:24:18.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9702'
Feb 15 14:24:18.715: INFO: stderr: ""
Feb 15 14:24:18.715: INFO: stdout: "service/rm2 exposed\n"
Feb 15 14:24:18.723: INFO: Service rm2 in namespace kubectl-9702 found.
STEP: exposing service
Feb 15 14:24:20.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9702'
Feb 15 14:24:20.936: INFO: stderr: ""
Feb 15 14:24:20.936: INFO: stdout: "service/rm3 exposed\n"
Feb 15 14:24:20.968: INFO: Service rm3 in namespace kubectl-9702 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:24:22.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9702" for this suite.
Feb 15 14:24:45.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:24:45.133: INFO: namespace kubectl-9702 deletion completed in 22.147981548s

• [SLOW TEST:37.470 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
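
`kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` amounts to creating a Service that reuses the RC's selector and maps service port 1234 onto container port 6379; exposing rm2 as rm3 then repeats the trick using the service's own selector. A rough Go equivalent of the first Service (same module assumptions as the earlier sketch; the selector is inferred from the app=redis label seen above in the log):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    svc := corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: "kubectl-9702"},
        Spec: corev1.ServiceSpec{
            // kubectl expose copies the selector from the exposed RC.
            Selector: map[string]string{"app": "redis"},
            Ports: []corev1.ServicePort{{
                Port:       1234,                 // port the service listens on
                TargetPort: intstr.FromInt(6379), // container port traffic lands on
            }},
        },
    }
    fmt.Printf("service %s: %d -> %v\n", svc.Name, svc.Spec.Ports[0].Port, svc.Spec.Ports[0].TargetPort)
}
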
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:24:45.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 15 14:24:45.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9909'
Feb 15 14:24:45.300: INFO: stderr: ""
Feb 15 14:24:45.300: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb 15 14:24:45.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9909'
Feb 15 14:24:50.260: INFO: stderr: ""
Feb 15 14:24:50.261: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:24:50.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9909" for this suite.
Feb 15 14:24:56.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:24:56.473: INFO: namespace kubectl-9909 deletion completed in 6.169714063s

• [SLOW TEST:11.339 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
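
With --restart=Never, the --generator=run-pod/v1 path creates a bare Pod rather than a Deployment or Job, which is exactly what the verification step looks for. A sketch of the equivalent object (same module assumptions; illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A standalone pod with no owning controller; once it exits it is not
    // restarted, matching --restart=Never.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-pod"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "e2e-test-nginx-pod",
                Image: "docker.io/library/nginx:1.14-alpine",
            }},
        },
    }
    fmt.Printf("pod %s restartPolicy=%s\n", pod.Name, pod.Spec.RestartPolicy)
}
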
SSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:24:56.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-6517
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6517 to expose endpoints map[]
Feb 15 14:24:56.712: INFO: Get endpoints failed (64.137352ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 15 14:24:57.719: INFO: successfully validated that service endpoint-test2 in namespace services-6517 exposes endpoints map[] (1.071671629s elapsed)
STEP: Creating pod pod1 in namespace services-6517
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6517 to expose endpoints map[pod1:[80]]
Feb 15 14:25:01.977: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.22712064s elapsed, will retry)
Feb 15 14:25:05.048: INFO: successfully validated that service endpoint-test2 in namespace services-6517 exposes endpoints map[pod1:[80]] (7.298351576s elapsed)
STEP: Creating pod pod2 in namespace services-6517
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6517 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 15 14:25:09.644: INFO: Unexpected endpoints: found map[a2f4ead8-b625-42ea-a026-85f094b28698:[80]], expected map[pod1:[80] pod2:[80]] (4.588155971s elapsed, will retry)
Feb 15 14:25:12.713: INFO: successfully validated that service endpoint-test2 in namespace services-6517 exposes endpoints map[pod1:[80] pod2:[80]] (7.657132676s elapsed)
STEP: Deleting pod pod1 in namespace services-6517
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6517 to expose endpoints map[pod2:[80]]
Feb 15 14:25:13.819: INFO: successfully validated that service endpoint-test2 in namespace services-6517 exposes endpoints map[pod2:[80]] (1.082411461s elapsed)
STEP: Deleting pod pod2 in namespace services-6517
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6517 to expose endpoints map[]
Feb 15 14:25:15.213: INFO: successfully validated that service endpoint-test2 in namespace services-6517 exposes endpoints map[] (1.375550303s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:25:15.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6517" for this suite.
Feb 15 14:25:39.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:25:39.946: INFO: namespace services-6517 deletion completed in 24.199481954s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:43.472 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
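
The endpoints map the test waits for (map[], then pod1:[80], then pod1 and pod2, and back down) is maintained by the endpoints controller: every Running-and-Ready pod matching the service selector contributes its IP and port. A compact sketch of the pairing (same module assumptions; the label key is illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    selector := map[string]string{"name": "endpoint-test2"}
    svc := corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
        Spec:       corev1.ServiceSpec{Selector: selector, Ports: []corev1.ServicePort{{Port: 80}}},
    }
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod1", Labels: selector},
        Spec: corev1.PodSpec{Containers: []corev1.Container{{
            Name:  "pause",
            Image: "k8s.gcr.io/pause:3.1",
            Ports: []corev1.ContainerPort{{ContainerPort: 80}},
        }}},
    }
    // Once pod1 is Running and Ready, the endpoints controller adds its IP:80
    // to the endpoint-test2 Endpoints object; deleting the pod removes it.
    fmt.Printf("service %s selects %v; pod %s carries %v\n",
        svc.Name, svc.Spec.Selector, pod.Name, pod.Labels)
}
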
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:25:39.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 15 14:25:40.112: INFO: PodSpec: initContainers in spec.initContainers
Feb 15 14:26:44.302: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-76694143-080c-4b1c-83ad-82b197ef9e2f", GenerateName:"", Namespace:"init-container-8474", SelfLink:"/api/v1/namespaces/init-container-8474/pods/pod-init-76694143-080c-4b1c-83ad-82b197ef9e2f", UID:"9fd05999-7d06-41f4-8053-53fae6a64622", ResourceVersion:"24458151", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717373540, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"111848472"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hkdpn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000c92080), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hkdpn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hkdpn", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hkdpn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0010cc238), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d9a000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0010cc490)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0010cc6a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0010cc6a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0010cc6ac), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717373540, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717373540, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717373540, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717373540, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc001ee8840), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001b90070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001b900e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://9ff7c18215feed5d26ee3d093804d8f1eb755543f22d95c57f283c9bf412354a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001ee8e40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001ee89c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:26:44.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8474" for this suite.
Feb 15 14:27:06.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:27:06.548: INFO: namespace init-container-8474 deletion completed in 22.154006483s

• [SLOW TEST:86.601 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
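
The dump above is easier to read as the spec it came from: init1 runs /bin/false and, because the pod's RestartPolicy is Always, keeps being restarted (RestartCount:3), init2 stays Waiting, and the app container run1 never starts, since init containers must each succeed, in order, first. A sketch of that shape (same module assumptions):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyAlways,
            InitContainers: []corev1.Container{
                // Always fails, so it is retried forever under RestartPolicy Always.
                {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
                // Never reached: init containers run sequentially.
                {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
            },
            Containers: []corev1.Container{
                // Blocked until every init container has succeeded.
                {Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
            },
        },
    }
    fmt.Printf("pod %s: %d init containers gate %d app container(s)\n",
        pod.Name, len(pod.Spec.InitContainers), len(pod.Spec.Containers))
}
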
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:27:06.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb 15 14:27:14.719: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 15 14:27:24.920: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:27:24.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9480" for this suite.
Feb 15 14:27:30.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:27:31.111: INFO: namespace pods-9480 deletion completed in 6.162031229s

• [SLOW TEST:24.563 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
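
The "deleting the pod gracefully" step comes down to a delete with a grace period: the API server stamps the pod with deletionGracePeriodSeconds, the kubelet delivers SIGTERM, and only after the period elapses (or the containers exit sooner) does the object disappear, which is the transition the test watches through the proxy. Only the options object is sketched here, since the exact Delete signature varies across client-go versions (same module assumptions; the 30s value is illustrative):

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Seconds the kubelet gives containers to exit after SIGTERM before SIGKILL.
    grace := int64(30)
    opts := metav1.DeleteOptions{GracePeriodSeconds: &grace}
    // Passing these options to the pod Delete call sets
    // deletionGracePeriodSeconds on the pod object.
    fmt.Printf("delete with grace period: %ds\n", *opts.GracePeriodSeconds)
}
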
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:27:31.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 15 14:27:41.829: INFO: Successfully updated pod "labelsupdatec3127de1-840d-47f0-b65a-b51e40873d43"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:27:43.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5706" for this suite.
Feb 15 14:28:06.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:28:06.203: INFO: namespace projected-5706 deletion completed in 22.195261125s

• [SLOW TEST:35.091 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
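
The "Successfully updated pod" line is the interesting half: the pod mounts its own metadata.labels through a projected downward API volume, and when the test mutates the labels the kubelet rewrites the projected file, which the container then re-reads. A sketch of the volume definition (same module assumptions):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            // The kubelet refreshes this file when labels change.
                            Path:     "labels",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                        }},
                    },
                }},
            },
        },
    }
    fmt.Printf("volume %s projects %s\n",
        vol.Name, vol.Projected.Sources[0].DownwardAPI.Items[0].FieldRef.FieldPath)
}
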
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:28:06.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 15 14:28:18.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-2a5ba15d-be5b-4bcb-b03e-18ca34314d5a -c busybox-main-container --namespace=emptydir-3202 -- cat /usr/share/volumeshare/shareddata.txt'
Feb 15 14:28:19.012: INFO: stderr: "I0215 14:28:18.698065    2457 log.go:172] (0xc0008b0420) (0xc00062aa00) Create stream\nI0215 14:28:18.698176    2457 log.go:172] (0xc0008b0420) (0xc00062aa00) Stream added, broadcasting: 1\nI0215 14:28:18.705361    2457 log.go:172] (0xc0008b0420) Reply frame received for 1\nI0215 14:28:18.705402    2457 log.go:172] (0xc0008b0420) (0xc0002e0000) Create stream\nI0215 14:28:18.705419    2457 log.go:172] (0xc0008b0420) (0xc0002e0000) Stream added, broadcasting: 3\nI0215 14:28:18.707855    2457 log.go:172] (0xc0008b0420) Reply frame received for 3\nI0215 14:28:18.707965    2457 log.go:172] (0xc0008b0420) (0xc000806000) Create stream\nI0215 14:28:18.708009    2457 log.go:172] (0xc0008b0420) (0xc000806000) Stream added, broadcasting: 5\nI0215 14:28:18.709873    2457 log.go:172] (0xc0008b0420) Reply frame received for 5\nI0215 14:28:18.821665    2457 log.go:172] (0xc0008b0420) Data frame received for 3\nI0215 14:28:18.822286    2457 log.go:172] (0xc0002e0000) (3) Data frame handling\nI0215 14:28:18.822424    2457 log.go:172] (0xc0002e0000) (3) Data frame sent\nI0215 14:28:19.005380    2457 log.go:172] (0xc0008b0420) (0xc0002e0000) Stream removed, broadcasting: 3\nI0215 14:28:19.005527    2457 log.go:172] (0xc0008b0420) Data frame received for 1\nI0215 14:28:19.005545    2457 log.go:172] (0xc00062aa00) (1) Data frame handling\nI0215 14:28:19.005564    2457 log.go:172] (0xc00062aa00) (1) Data frame sent\nI0215 14:28:19.005632    2457 log.go:172] (0xc0008b0420) (0xc000806000) Stream removed, broadcasting: 5\nI0215 14:28:19.005675    2457 log.go:172] (0xc0008b0420) (0xc00062aa00) Stream removed, broadcasting: 1\nI0215 14:28:19.005690    2457 log.go:172] (0xc0008b0420) Go away received\nI0215 14:28:19.006431    2457 log.go:172] (0xc0008b0420) (0xc00062aa00) Stream removed, broadcasting: 1\nI0215 14:28:19.006446    2457 log.go:172] (0xc0008b0420) (0xc0002e0000) Stream removed, broadcasting: 3\nI0215 14:28:19.006453    2457 log.go:172] (0xc0008b0420) (0xc000806000) Stream removed, broadcasting: 5\n"
Feb 15 14:28:19.012: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:28:19.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3202" for this suite.
Feb 15 14:28:25.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:28:25.246: INFO: namespace emptydir-3202 deletion completed in 6.225677769s

• [SLOW TEST:19.042 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
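
The exec'd cat works because both containers mount the same emptyDir: one writes /usr/share/volumeshare/shareddata.txt, the other only sleeps and serves as the exec target. A sketch of the pod shape (same module assumptions; the writer command and sub-container name are stand-ins, not the test's exact setup):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    share := corev1.VolumeMount{Name: "shared-data", MountPath: "/usr/share/volumeshare"}
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name:         "shared-data",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{
                // Writer: drops shareddata.txt into the shared mount.
                {Name: "busybox-sub-container", Image: "docker.io/library/busybox:1.29",
                    Command:      []string{"sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
                    VolumeMounts: []corev1.VolumeMount{share}},
                // Reader: the test kubectl-execs a cat of the same path here.
                {Name: "busybox-main-container", Image: "docker.io/library/busybox:1.29",
                    Command:      []string{"sleep", "3600"},
                    VolumeMounts: []corev1.VolumeMount{share}},
            },
        },
    }
    fmt.Printf("pod %s shares %q across %d containers\n", pod.Name, share.MountPath, len(pod.Spec.Containers))
}
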
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:28:25.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 15 14:28:25.365: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c93627b3-5702-4419-bbb7-20fc81b4a7c0" in namespace "downward-api-8194" to be "success or failure"
Feb 15 14:28:25.375: INFO: Pod "downwardapi-volume-c93627b3-5702-4419-bbb7-20fc81b4a7c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.96102ms
Feb 15 14:28:27.380: INFO: Pod "downwardapi-volume-c93627b3-5702-4419-bbb7-20fc81b4a7c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014229219s
Feb 15 14:28:29.396: INFO: Pod "downwardapi-volume-c93627b3-5702-4419-bbb7-20fc81b4a7c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029883169s
Feb 15 14:28:31.404: INFO: Pod "downwardapi-volume-c93627b3-5702-4419-bbb7-20fc81b4a7c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038499386s
Feb 15 14:28:33.420: INFO: Pod "downwardapi-volume-c93627b3-5702-4419-bbb7-20fc81b4a7c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054088286s
Feb 15 14:28:35.430: INFO: Pod "downwardapi-volume-c93627b3-5702-4419-bbb7-20fc81b4a7c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064862866s
STEP: Saw pod success
Feb 15 14:28:35.431: INFO: Pod "downwardapi-volume-c93627b3-5702-4419-bbb7-20fc81b4a7c0" satisfied condition "success or failure"
Feb 15 14:28:35.436: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c93627b3-5702-4419-bbb7-20fc81b4a7c0 container client-container: 
STEP: delete the pod
Feb 15 14:28:35.660: INFO: Waiting for pod downwardapi-volume-c93627b3-5702-4419-bbb7-20fc81b4a7c0 to disappear
Feb 15 14:28:35.666: INFO: Pod downwardapi-volume-c93627b3-5702-4419-bbb7-20fc81b4a7c0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:28:35.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8194" for this suite.
Feb 15 14:28:41.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:28:41.944: INFO: namespace downward-api-8194 deletion completed in 6.270975718s

• [SLOW TEST:16.698 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
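
DefaultMode on a downward API volume sets the permission bits applied to every projected file unless an item overrides them; the test mounts such a volume and has the container report the file's mode. A sketch of the volume (same module assumptions; 0400 is an illustrative mode):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Octal literal; applied to every file in the volume unless an
    // individual item sets its own mode.
    mode := int32(0400)
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                DefaultMode: &mode,
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path:     "podname",
                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                }},
            },
        },
    }
    fmt.Printf("volume %s defaultMode=%#o\n", vol.Name, *vol.DownwardAPI.DefaultMode)
}
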
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:28:41.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 15 14:28:42.073: INFO: Waiting up to 5m0s for pod "pod-f1ec2679-32c2-4c65-8ec2-04c98ba26735" in namespace "emptydir-963" to be "success or failure"
Feb 15 14:28:42.122: INFO: Pod "pod-f1ec2679-32c2-4c65-8ec2-04c98ba26735": Phase="Pending", Reason="", readiness=false. Elapsed: 48.638546ms
Feb 15 14:28:44.141: INFO: Pod "pod-f1ec2679-32c2-4c65-8ec2-04c98ba26735": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067663939s
Feb 15 14:28:46.165: INFO: Pod "pod-f1ec2679-32c2-4c65-8ec2-04c98ba26735": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09131543s
Feb 15 14:28:48.174: INFO: Pod "pod-f1ec2679-32c2-4c65-8ec2-04c98ba26735": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100378388s
Feb 15 14:28:50.257: INFO: Pod "pod-f1ec2679-32c2-4c65-8ec2-04c98ba26735": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18313409s
Feb 15 14:28:52.264: INFO: Pod "pod-f1ec2679-32c2-4c65-8ec2-04c98ba26735": Phase="Pending", Reason="", readiness=false. Elapsed: 10.19082723s
Feb 15 14:28:54.275: INFO: Pod "pod-f1ec2679-32c2-4c65-8ec2-04c98ba26735": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.20168488s
STEP: Saw pod success
Feb 15 14:28:54.275: INFO: Pod "pod-f1ec2679-32c2-4c65-8ec2-04c98ba26735" satisfied condition "success or failure"
Feb 15 14:28:54.280: INFO: Trying to get logs from node iruya-node pod pod-f1ec2679-32c2-4c65-8ec2-04c98ba26735 container test-container: 
STEP: delete the pod
Feb 15 14:28:54.480: INFO: Waiting for pod pod-f1ec2679-32c2-4c65-8ec2-04c98ba26735 to disappear
Feb 15 14:28:54.491: INFO: Pod pod-f1ec2679-32c2-4c65-8ec2-04c98ba26735 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:28:54.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-963" for this suite.
Feb 15 14:29:00.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:29:00.647: INFO: namespace emptydir-963 deletion completed in 6.150204404s

• [SLOW TEST:18.703 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
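
The "(root,0644,tmpfs)" triple in the spec name reads as: write as root, expect file mode 0644, and back the emptyDir with memory (tmpfs) rather than node disk, which is what Medium: Memory requests. The volume half, sketched (same module assumptions):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "test-volume",
        VolumeSource: corev1.VolumeSource{
            // Memory backs the volume with tmpfs; an empty Medium
            // means the node's default storage.
            EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
        },
    }
    fmt.Printf("volume %s medium=%q\n", vol.Name, vol.EmptyDir.Medium)
}
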
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:29:00.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:29:08.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7805" for this suite.
Feb 15 14:29:15.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:29:15.165: INFO: namespace emptydir-wrapper-7805 deletion completed in 6.153427044s

• [SLOW TEST:14.517 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:29:15.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:29:23.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-212" for this suite.
Feb 15 14:30:07.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:30:07.547: INFO: namespace kubelet-test-212 deletion completed in 44.146795769s

• [SLOW TEST:52.382 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
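
hostAliases entries are appended by the kubelet to the pod's managed /etc/hosts, which is why the test can simply read the file back from the container. A sketch (same module assumptions; the IP and hostnames are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
        Spec: corev1.PodSpec{
            // Each alias becomes a line in the pod's /etc/hosts.
            HostAliases: []corev1.HostAlias{{
                IP:        "123.45.67.89",
                Hostnames: []string{"foo.remote", "bar.remote"},
            }},
            Containers: []corev1.Container{{
                Name:    "busybox-host-aliases",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "cat /etc/hosts && sleep 3600"},
            }},
        },
    }
    fmt.Printf("pod %s adds %d host alias entr(ies)\n", pod.Name, len(pod.Spec.HostAliases))
}
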
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:30:07.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 15 14:30:07.628: INFO: Waiting up to 5m0s for pod "downward-api-68d61263-43f1-47de-b959-10b0cf32eb60" in namespace "downward-api-6483" to be "success or failure"
Feb 15 14:30:07.694: INFO: Pod "downward-api-68d61263-43f1-47de-b959-10b0cf32eb60": Phase="Pending", Reason="", readiness=false. Elapsed: 65.542676ms
Feb 15 14:30:09.746: INFO: Pod "downward-api-68d61263-43f1-47de-b959-10b0cf32eb60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117327436s
Feb 15 14:30:11.755: INFO: Pod "downward-api-68d61263-43f1-47de-b959-10b0cf32eb60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126567132s
Feb 15 14:30:13.768: INFO: Pod "downward-api-68d61263-43f1-47de-b959-10b0cf32eb60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139995502s
Feb 15 14:30:16.469: INFO: Pod "downward-api-68d61263-43f1-47de-b959-10b0cf32eb60": Phase="Pending", Reason="", readiness=false. Elapsed: 8.840417226s
Feb 15 14:30:18.480: INFO: Pod "downward-api-68d61263-43f1-47de-b959-10b0cf32eb60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.851727959s
STEP: Saw pod success
Feb 15 14:30:18.480: INFO: Pod "downward-api-68d61263-43f1-47de-b959-10b0cf32eb60" satisfied condition "success or failure"
Feb 15 14:30:18.485: INFO: Trying to get logs from node iruya-node pod downward-api-68d61263-43f1-47de-b959-10b0cf32eb60 container dapi-container: 
STEP: delete the pod
Feb 15 14:30:18.642: INFO: Waiting for pod downward-api-68d61263-43f1-47de-b959-10b0cf32eb60 to disappear
Feb 15 14:30:18.654: INFO: Pod downward-api-68d61263-43f1-47de-b959-10b0cf32eb60 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:30:18.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6483" for this suite.
Feb 15 14:30:24.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:30:24.816: INFO: namespace downward-api-6483 deletion completed in 6.153288631s

• [SLOW TEST:17.268 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
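
Unlike the volume-based downward API tests above, this one injects status.hostIP as a plain environment variable, resolved once at container start. The relevant fragment (same module assumptions):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // fieldRef makes the kubelet substitute the pod's status.hostIP
    // into the container's environment at start time.
    env := corev1.EnvVar{
        Name: "HOST_IP",
        ValueFrom: &corev1.EnvVarSource{
            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
        },
    }
    fmt.Printf("env %s <- fieldRef %s\n", env.Name, env.ValueFrom.FieldRef.FieldPath)
}
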
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:30:24.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-196bdaac-fc54-47c5-88d6-7199a75b1c75
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:30:35.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-764" for this suite.
Feb 15 14:30:57.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:30:57.190: INFO: namespace configmap-764 deletion completed in 22.104033969s

• [SLOW TEST:32.373 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
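
ConfigMaps carry UTF-8 strings in data and arbitrary bytes in binaryData (base64-encoded on the wire); both surface as files in a volume mount, and the test checks that each kind round-trips. A sketch (same module assumptions; key names and bytes are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cm := corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
        // Text keys must be valid UTF-8; binary keys may hold any bytes.
        Data:       map[string]string{"data-1": "value-1"},
        BinaryData: map[string][]byte{"dump.bin": {0xff, 0xfe, 0xfd}},
    }
    fmt.Printf("configmap %s: %d text key(s), %d binary key(s)\n", cm.Name, len(cm.Data), len(cm.BinaryData))
}
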
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:30:57.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 15 14:31:08.423: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:31:08.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5126" for this suite.
Feb 15 14:31:14.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:31:14.674: INFO: namespace container-runtime-5126 deletion completed in 6.19435501s

• [SLOW TEST:17.484 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
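
terminationMessagePolicy: FallbackToLogsOnError is the knob under test: when a container fails without writing to its termination message path, the kubelet copies the tail of the container log (here "DONE") into the status instead. A sketch of such a container (same module assumptions; the command is illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:  "termination-message-container",
        Image: "docker.io/library/busybox:1.29",
        // Writes only to stdout and fails; nothing lands in the message path.
        Command: []string{"sh", "-c", "echo DONE; exit 1"},
        // On failure with an empty message file, the log tail becomes
        // the termination message.
        TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
        TerminationMessagePath:   "/dev/termination-log",
    }
    fmt.Printf("container %s policy=%s\n", c.Name, c.TerminationMessagePolicy)
}
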
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:31:14.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 15 14:31:15.529: INFO: Pod name wrapped-volume-race-65135fbc-f718-4a32-b52d-fde2b5ca4964: Found 0 pods out of 5
Feb 15 14:31:20.548: INFO: Pod name wrapped-volume-race-65135fbc-f718-4a32-b52d-fde2b5ca4964: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-65135fbc-f718-4a32-b52d-fde2b5ca4964 in namespace emptydir-wrapper-4961, will wait for the garbage collector to delete the pods
Feb 15 14:31:50.676: INFO: Deleting ReplicationController wrapped-volume-race-65135fbc-f718-4a32-b52d-fde2b5ca4964 took: 15.80937ms
Feb 15 14:31:51.077: INFO: Terminating ReplicationController wrapped-volume-race-65135fbc-f718-4a32-b52d-fde2b5ca4964 pods took: 400.995935ms
STEP: Creating RC which spawns configmap-volume pods
Feb 15 14:32:37.644: INFO: Pod name wrapped-volume-race-25d6c0cb-febc-44f8-aa5b-3aabbe804eec: Found 0 pods out of 5
Feb 15 14:32:42.678: INFO: Pod name wrapped-volume-race-25d6c0cb-febc-44f8-aa5b-3aabbe804eec: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-25d6c0cb-febc-44f8-aa5b-3aabbe804eec in namespace emptydir-wrapper-4961, will wait for the garbage collector to delete the pods
Feb 15 14:33:16.785: INFO: Deleting ReplicationController wrapped-volume-race-25d6c0cb-febc-44f8-aa5b-3aabbe804eec took: 12.282645ms
Feb 15 14:33:17.185: INFO: Terminating ReplicationController wrapped-volume-race-25d6c0cb-febc-44f8-aa5b-3aabbe804eec pods took: 400.642191ms
STEP: Creating RC which spawns configmap-volume pods
Feb 15 14:34:06.833: INFO: Pod name wrapped-volume-race-b493c6ae-34ca-41c5-bb41-68e4b061c8a8: Found 0 pods out of 5
Feb 15 14:34:11.846: INFO: Pod name wrapped-volume-race-b493c6ae-34ca-41c5-bb41-68e4b061c8a8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b493c6ae-34ca-41c5-bb41-68e4b061c8a8 in namespace emptydir-wrapper-4961, will wait for the garbage collector to delete the pods
Feb 15 14:34:43.997: INFO: Deleting ReplicationController wrapped-volume-race-b493c6ae-34ca-41c5-bb41-68e4b061c8a8 took: 13.715702ms
Feb 15 14:34:44.398: INFO: Terminating ReplicationController wrapped-volume-race-b493c6ae-34ca-41c5-bb41-68e4b061c8a8 pods took: 401.037037ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:35:28.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4961" for this suite.
Feb 15 14:35:38.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:35:38.608: INFO: namespace emptydir-wrapper-4961 deletion completed in 10.185350459s

• [SLOW TEST:263.933 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
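
The shape that historically raced: each pod in the RC mounts many ConfigMap volumes at once (50 here), and five replicas setting those up and tearing them down concurrently exercises the wrapped (atomic-writer) volume plumbing. An illustrative sketch of building that volume list (same module assumptions; naming is hypothetical):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // One volume per pre-created ConfigMap; all 50 go into a single
    // pod template spec.
    var volumes []corev1.Volume
    for i := 0; i < 50; i++ {
        name := fmt.Sprintf("racey-configmap-%d", i)
        volumes = append(volumes, corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: name},
                },
            },
        })
    }
    fmt.Printf("pod template mounts %d configmap volumes\n", len(volumes))
}
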
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:35:38.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 15 14:35:38.731: INFO: Create a RollingUpdate DaemonSet
Feb 15 14:35:38.740: INFO: Check that daemon pods launch on every node of the cluster
Feb 15 14:35:38.751: INFO: Number of nodes with available pods: 0
Feb 15 14:35:38.752: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:40.115: INFO: Number of nodes with available pods: 0
Feb 15 14:35:40.115: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:40.765: INFO: Number of nodes with available pods: 0
Feb 15 14:35:40.765: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:41.821: INFO: Number of nodes with available pods: 0
Feb 15 14:35:41.821: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:42.770: INFO: Number of nodes with available pods: 0
Feb 15 14:35:42.770: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:43.799: INFO: Number of nodes with available pods: 0
Feb 15 14:35:43.799: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:45.518: INFO: Number of nodes with available pods: 0
Feb 15 14:35:45.519: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:45.825: INFO: Number of nodes with available pods: 0
Feb 15 14:35:45.825: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:46.767: INFO: Number of nodes with available pods: 0
Feb 15 14:35:46.767: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:47.814: INFO: Number of nodes with available pods: 1
Feb 15 14:35:47.814: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:48.764: INFO: Number of nodes with available pods: 1
Feb 15 14:35:48.764: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:49.765: INFO: Number of nodes with available pods: 1
Feb 15 14:35:49.765: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:50.778: INFO: Number of nodes with available pods: 1
Feb 15 14:35:50.778: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:51.766: INFO: Number of nodes with available pods: 1
Feb 15 14:35:51.767: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:52.774: INFO: Number of nodes with available pods: 1
Feb 15 14:35:52.774: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:53.772: INFO: Number of nodes with available pods: 1
Feb 15 14:35:53.772: INFO: Node iruya-node is running more than one daemon pod
Feb 15 14:35:54.761: INFO: Number of nodes with available pods: 2
Feb 15 14:35:54.761: INFO: Number of running nodes: 2, number of available pods: 2
Feb 15 14:35:54.761: INFO: Update the DaemonSet to trigger a rollout
Feb 15 14:35:54.771: INFO: Updating DaemonSet daemon-set
Feb 15 14:36:08.807: INFO: Roll back the DaemonSet before rollout is complete
Feb 15 14:36:08.821: INFO: Updating DaemonSet daemon-set
Feb 15 14:36:08.821: INFO: Make sure DaemonSet rollback is complete
Feb 15 14:36:09.195: INFO: Wrong image for pod: daemon-set-s7gm9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 15 14:36:09.195: INFO: Pod daemon-set-s7gm9 is not available
Feb 15 14:36:10.396: INFO: Wrong image for pod: daemon-set-s7gm9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 15 14:36:10.396: INFO: Pod daemon-set-s7gm9 is not available
Feb 15 14:36:11.255: INFO: Wrong image for pod: daemon-set-s7gm9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 15 14:36:11.256: INFO: Pod daemon-set-s7gm9 is not available
Feb 15 14:36:12.441: INFO: Pod daemon-set-pjgsj is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4559, will wait for the garbage collector to delete the pods
Feb 15 14:36:13.547: INFO: Deleting DaemonSet.extensions daemon-set took: 12.110123ms
Feb 15 14:36:14.248: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.867216ms
Feb 15 14:36:26.664: INFO: Number of nodes with available pods: 0
Feb 15 14:36:26.664: INFO: Number of running nodes: 0, number of available pods: 0
Feb 15 14:36:26.669: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4559/daemonsets","resourceVersion":"24460105"},"items":null}

Feb 15 14:36:26.672: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4559/pods","resourceVersion":"24460105"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:36:26.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4559" for this suite.
Feb 15 14:36:32.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:36:32.896: INFO: namespace daemonsets-4559 deletion completed in 6.20002814s

• [SLOW TEST:54.287 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
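
The test above creates a RollingUpdate DaemonSet, breaks it by switching the pod template image to foo:non-existent, then restores docker.io/library/nginx:1.14-alpine before the rollout finishes; the controller must complete the rollback (replacing only the broken pod daemon-set-s7gm9) without restarting pods that never left the good revision. A minimal client-go sketch of that update-then-rollback flow, assuming a client library contemporary with the cluster's v1.15 (these calls take no context argument; a context parameter was added in later client-go releases):

package main

import (
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	ns := "daemonsets-4559" // namespace from the run above

	// Fetch the DaemonSet and switch its image to a bad tag to start a rollout.
	ds, err := cs.AppsV1().DaemonSets(ns).Get("daemon-set", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	good := ds.Spec.Template.Spec.Containers[0].Image // the nginx:1.14-alpine image
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if ds, err = cs.AppsV1().DaemonSets(ns).Update(ds); err != nil {
		log.Fatal(err)
	}

	// Roll back by restoring the previous template before the rollout
	// finishes; pods already on the good revision must not be restarted.
	ds.Spec.Template.Spec.Containers[0].Image = good
	if _, err = cs.AppsV1().DaemonSets(ns).Update(ds); err != nil {
		log.Fatal(err)
	}
}
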
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:36:32.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 15 14:36:33.011: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 15 14:36:38.018: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 15 14:36:42.031: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 15 14:36:42.163: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-753,SelfLink:/apis/apps/v1/namespaces/deployment-753/deployments/test-cleanup-deployment,UID:bad7879d-d67c-4794-afe9-6044f3ee3312,ResourceVersion:24460174,Generation:1,CreationTimestamp:2020-02-15 14:36:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[{Progressing True 2020-02-15 14:36:42 +0000 UTC 2020-02-15 14:36:42 +0000 UTC NewReplicaSetCreated Created new replica set "test-cleanup-deployment-55bbcbc84c"}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 15 14:36:42.274: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-753,SelfLink:/apis/apps/v1/namespaces/deployment-753/replicasets/test-cleanup-deployment-55bbcbc84c,UID:4af51cbd-c2fd-4699-9342-7b29318cd53d,ResourceVersion:24460176,Generation:1,CreationTimestamp:2020-02-15 14:36:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment bad7879d-d67c-4794-afe9-6044f3ee3312 0xc001285897 0xc001285898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 15 14:36:42.274: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb 15 14:36:42.275: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-753,SelfLink:/apis/apps/v1/namespaces/deployment-753/replicasets/test-cleanup-controller,UID:daaa1e25-a174-4d0d-a664-c706b3ab627e,ResourceVersion:24460170,Generation:1,CreationTimestamp:2020-02-15 14:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment bad7879d-d67c-4794-afe9-6044f3ee3312 0xc0012857c7 0xc0012857c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 15 14:36:42.327: INFO: Pod "test-cleanup-controller-6r9kh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-6r9kh,GenerateName:test-cleanup-controller-,Namespace:deployment-753,SelfLink:/api/v1/namespaces/deployment-753/pods/test-cleanup-controller-6r9kh,UID:f5aed6f0-eb47-43ba-beb4-431124837ccd,ResourceVersion:24460164,Generation:0,CreationTimestamp:2020-02-15 14:36:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller daaa1e25-a174-4d0d-a664-c706b3ab627e 0xc00316e177 0xc00316e178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-m7fj4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m7fj4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m7fj4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00316e1f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00316e210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:36:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:36:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:36:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:36:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-15 14:36:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-15 14:36:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ae2dc7c05b265d4b0186a90616e873093c46ca29ccc9dac58ff91a12e0cd5686}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 15 14:36:42.328: INFO: Pod "test-cleanup-deployment-55bbcbc84c-2nn58" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-2nn58,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-753,SelfLink:/api/v1/namespaces/deployment-753/pods/test-cleanup-deployment-55bbcbc84c-2nn58,UID:614f3958-0f56-4d95-bc95-630a5d22a218,ResourceVersion:24460177,Generation:0,CreationTimestamp:2020-02-15 14:36:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 4af51cbd-c2fd-4699-9342-7b29318cd53d 0xc00316e2f7 0xc00316e2f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-m7fj4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m7fj4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-m7fj4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00316e370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00316e390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:36:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:36:42.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-753" for this suite.
Feb 15 14:36:48.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:36:48.576: INFO: namespace deployment-753 deletion completed in 6.228125549s

• [SLOW TEST:15.681 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
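
The cleanup behavior above hinges on spec.revisionHistoryLimit: the Deployment dump shows RevisionHistoryLimit:*0, so once test-cleanup-deployment adopts and scales down the pre-existing test-cleanup-controller ReplicaSet, that old ReplicaSet is deleted rather than kept around for rollback. A sketch of creating a Deployment with that setting, under the same v1.15-era client-go assumption (the int32Ptr helper is ours):

package main

import (
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	labels := map[string]string{"name": "cleanup-pod"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas:             int32Ptr(1),
			RevisionHistoryLimit: int32Ptr(0), // keep no old ReplicaSets around
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "redis",
					Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
				}}},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("deployment-753").Create(d); err != nil {
		log.Fatal(err)
	}
}
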
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:36:48.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 15 14:36:48.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 15 14:36:48.850: INFO: stderr: ""
Feb 15 14:36:48.850: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:36:48.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2089" for this suite.
Feb 15 14:36:54.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:36:55.046: INFO: namespace kubectl-2089 deletion completed in 6.185350954s

• [SLOW TEST:6.467 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
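
The check above only asserts that kubectl version prints both halves of version.Info. The server half is served by the apiserver's /version endpoint, which client-go exposes through the discovery client; a minimal sketch, assuming the same v1.15-era client libraries:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Same data as the "Server Version:" half of the kubectl output above.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Server Version: %s (git %s, %s, %s)\n",
		v.GitVersion, v.GitCommit, v.Platform, v.GoVersion)
}
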
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:36:55.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-7900
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7900 to expose endpoints map[]
Feb 15 14:36:55.236: INFO: Get endpoints failed (13.305417ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 15 14:36:56.245: INFO: successfully validated that service multi-endpoint-test in namespace services-7900 exposes endpoints map[] (1.021995117s elapsed)
STEP: Creating pod pod1 in namespace services-7900
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7900 to expose endpoints map[pod1:[100]]
Feb 15 14:37:00.411: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.153054591s elapsed, will retry)
Feb 15 14:37:03.505: INFO: successfully validated that service multi-endpoint-test in namespace services-7900 exposes endpoints map[pod1:[100]] (7.247606217s elapsed)
STEP: Creating pod pod2 in namespace services-7900
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7900 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 15 14:37:08.058: INFO: Unexpected endpoints: found map[9a03a41f-c792-44d2-8ead-72415617fa01:[100]], expected map[pod1:[100] pod2:[101]] (4.542283277s elapsed, will retry)
Feb 15 14:37:10.092: INFO: successfully validated that service multi-endpoint-test in namespace services-7900 exposes endpoints map[pod1:[100] pod2:[101]] (6.576623811s elapsed)
STEP: Deleting pod pod1 in namespace services-7900
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7900 to expose endpoints map[pod2:[101]]
Feb 15 14:37:11.206: INFO: successfully validated that service multi-endpoint-test in namespace services-7900 exposes endpoints map[pod2:[101]] (1.106780678s elapsed)
STEP: Deleting pod pod2 in namespace services-7900
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7900 to expose endpoints map[]
Feb 15 14:37:12.249: INFO: successfully validated that service multi-endpoint-test in namespace services-7900 exposes endpoints map[] (1.032577356s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:37:12.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7900" for this suite.
Feb 15 14:37:35.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:37:35.638: INFO: namespace services-7900 deletion completed in 23.339889769s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:40.591 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
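
The endpoints maps above (map[pod1:[100] pod2:[101]]) are keyed by pod name with the container port each pod contributes: the Service declares two named ports, pod1 backs one and pod2 backs the other, and each pair appears in the Endpoints object as its pod becomes ready and disappears when the pod is deleted. A sketch of an equivalent two-port Service; the port numbers, selector label, and targetPort names here are illustrative, not read from the log:

package main

import (
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-test"},
			Ports: []corev1.ServicePort{
				// Each Service port targets a differently named containerPort,
				// so different pods can back different ports of one Service.
				{Name: "portname1", Port: 80, TargetPort: intstr.FromString("http1")},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromString("http2")},
			},
		},
	}
	if _, err := cs.CoreV1().Services("services-7900").Create(svc); err != nil {
		log.Fatal(err)
	}
}
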
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:37:35.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-67244a17-1ab1-475d-9353-05ef829ac2c1
STEP: Creating a pod to test consume configMaps
Feb 15 14:37:35.799: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-52ed0c7f-18ea-4caf-ae76-d3af896af96e" in namespace "projected-489" to be "success or failure"
Feb 15 14:37:35.817: INFO: Pod "pod-projected-configmaps-52ed0c7f-18ea-4caf-ae76-d3af896af96e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.850038ms
Feb 15 14:37:37.828: INFO: Pod "pod-projected-configmaps-52ed0c7f-18ea-4caf-ae76-d3af896af96e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029588596s
Feb 15 14:37:39.847: INFO: Pod "pod-projected-configmaps-52ed0c7f-18ea-4caf-ae76-d3af896af96e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048038036s
Feb 15 14:37:41.861: INFO: Pod "pod-projected-configmaps-52ed0c7f-18ea-4caf-ae76-d3af896af96e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061822944s
Feb 15 14:37:43.879: INFO: Pod "pod-projected-configmaps-52ed0c7f-18ea-4caf-ae76-d3af896af96e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079794s
STEP: Saw pod success
Feb 15 14:37:43.879: INFO: Pod "pod-projected-configmaps-52ed0c7f-18ea-4caf-ae76-d3af896af96e" satisfied condition "success or failure"
Feb 15 14:37:43.889: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-52ed0c7f-18ea-4caf-ae76-d3af896af96e container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 14:37:44.138: INFO: Waiting for pod pod-projected-configmaps-52ed0c7f-18ea-4caf-ae76-d3af896af96e to disappear
Feb 15 14:37:44.157: INFO: Pod pod-projected-configmaps-52ed0c7f-18ea-4caf-ae76-d3af896af96e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:37:44.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-489" for this suite.
Feb 15 14:37:50.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:37:50.451: INFO: namespace projected-489 deletion completed in 6.268454589s

• [SLOW TEST:14.812 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
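
The pod above mounts the same ConfigMap through two projected volumes and exits once its test container has read both copies (hence Phase="Succeeded" in the log). A hedged sketch of such a pod, with a hypothetical ConfigMap name and a plain busybox reader standing in for the e2e test image:

package main

import (
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Two projected volumes, both sourcing the same (hypothetical) ConfigMap.
	vol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
						},
					}},
				},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run once, then Succeeded
			Volumes:       []corev1.Volume{vol("projected-configmap-volume-1"), vol("projected-configmap-volume-2")},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"ls", "/etc/volume-1", "/etc/volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-configmap-volume-1", MountPath: "/etc/volume-1"},
					{Name: "projected-configmap-volume-2", MountPath: "/etc/volume-2"},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("projected-489").Create(pod); err != nil {
		log.Fatal(err)
	}
}
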
SS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:37:50.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 15 14:37:50.561: INFO: Creating deployment "test-recreate-deployment"
Feb 15 14:37:50.571: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb 15 14:37:50.601: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 15 14:37:52.612: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb 15 14:37:52.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717374270, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717374270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717374270, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717374270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 14:37:54.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717374270, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717374270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717374270, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717374270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 14:37:56.634: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717374270, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717374270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717374270, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717374270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 14:37:58.633: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 15 14:37:58.648: INFO: Updating deployment test-recreate-deployment
Feb 15 14:37:58.648: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 15 14:37:59.013: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-234,SelfLink:/apis/apps/v1/namespaces/deployment-234/deployments/test-recreate-deployment,UID:407c7e72-f132-47f4-9fe1-46d8d3c4ec80,ResourceVersion:24460434,Generation:2,CreationTimestamp:2020-02-15 14:37:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-15 14:37:58 +0000 UTC 2020-02-15 14:37:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-15 14:37:58 +0000 UTC 2020-02-15 14:37:50 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 15 14:37:59.022: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-234,SelfLink:/apis/apps/v1/namespaces/deployment-234/replicasets/test-recreate-deployment-5c8c9cc69d,UID:dcb62203-7f58-427b-aa5b-d0dbf5340fdc,ResourceVersion:24460433,Generation:1,CreationTimestamp:2020-02-15 14:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 407c7e72-f132-47f4-9fe1-46d8d3c4ec80 0xc002db1fe7 0xc002db1fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 15 14:37:59.022: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 15 14:37:59.022: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-234,SelfLink:/apis/apps/v1/namespaces/deployment-234/replicasets/test-recreate-deployment-6df85df6b9,UID:1cc7e2c6-2f2b-41aa-bfce-e5471d3fcd1e,ResourceVersion:24460423,Generation:2,CreationTimestamp:2020-02-15 14:37:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 407c7e72-f132-47f4-9fe1-46d8d3c4ec80 0xc0029760b7 0xc0029760b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 15 14:37:59.027: INFO: Pod "test-recreate-deployment-5c8c9cc69d-fl5vt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-fl5vt,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-234,SelfLink:/api/v1/namespaces/deployment-234/pods/test-recreate-deployment-5c8c9cc69d-fl5vt,UID:46e4aba2-b64a-4eea-9bb7-3572687a5cbd,ResourceVersion:24460435,Generation:0,CreationTimestamp:2020-02-15 14:37:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d dcb62203-7f58-427b-aa5b-d0dbf5340fdc 0xc001e5d677 0xc001e5d678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lr5x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lr5x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lr5x8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e5d6f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e5d710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:37:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:37:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:37:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 14:37:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-15 14:37:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:37:59.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-234" for this suite.
Feb 15 14:38:05.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:38:05.188: INFO: namespace deployment-234 deletion completed in 6.155208852s

• [SLOW TEST:14.737 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
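
The Recreate strategy visible in the Deployment dump above (Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,}) makes the controller scale the old ReplicaSet to zero before bringing up the new one, so old and new pods never run together; that is why the final pod list contains only the new 5c8c9cc69d pod, still Pending, and no redis pod from revision 1. A sketch of switching a Deployment onto that strategy, under the same v1.15-era client assumption:

package main

import (
	"log"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	d, err := cs.AppsV1().Deployments("deployment-234").Get("test-recreate-deployment", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Recreate: delete all old pods first, then create the new ones. The
	// RollingUpdate parameters (maxSurge/maxUnavailable) do not apply.
	d.Spec.Strategy = appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType}
	d.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.14-alpine"
	if _, err := cs.AppsV1().Deployments("deployment-234").Update(d); err != nil {
		log.Fatal(err)
	}
}
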
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:38:05.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 15 14:38:05.350: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 77.909408ms)
Feb 15 14:38:05.354: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.44315ms)
Feb 15 14:38:05.359: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.389947ms)
Feb 15 14:38:05.364: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.256919ms)
Feb 15 14:38:05.369: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.238439ms)
Feb 15 14:38:05.374: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.324122ms)
Feb 15 14:38:05.377: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.619294ms)
Feb 15 14:38:05.382: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.01072ms)
Feb 15 14:38:05.388: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.10716ms)
Feb 15 14:38:05.392: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.894687ms)
Feb 15 14:38:05.397: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.981859ms)
Feb 15 14:38:05.404: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.37247ms)
Feb 15 14:38:05.408: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.857463ms)
Feb 15 14:38:05.412: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.423811ms)
Feb 15 14:38:05.416: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.251007ms)
Feb 15 14:38:05.422: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.506586ms)
Feb 15 14:38:05.426: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.418442ms)
Feb 15 14:38:05.430: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.467763ms)
Feb 15 14:38:05.434: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.03675ms)
Feb 15 14:38:05.438: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.804191ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:38:05.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7229" for this suite.
Feb 15 14:38:11.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:38:11.633: INFO: namespace proxy-7229 deletion completed in 6.192502301s

• [SLOW TEST:6.443 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
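
Each of the twenty requests above goes through the node proxy subresource with an explicit kubelet port, /api/v1/nodes/iruya-node:10250/proxy/logs/, so the apiserver forwards the request to the kubelet's log listing instead of using the node's default port. The same request through client-go's raw REST interface; a sketch, assuming the v1.15-era libraries where DoRaw takes no context argument:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// GET /api/v1/nodes/iruya-node:10250/proxy/logs/ where "name:port"
	// selects the kubelet port explicitly.
	body, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("iruya-node:10250").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw()
	if err != nil {
		log.Fatal(err)
	}
	// The e2e log truncates this body similarly ("alternatives.l...").
	fmt.Printf("%.100s\n", body)
}
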
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:38:11.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 15 14:38:11.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2048'
Feb 15 14:38:13.887: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 15 14:38:13.887: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb 15 14:38:15.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2048'
Feb 15 14:38:16.194: INFO: stderr: ""
Feb 15 14:38:16.195: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:38:16.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2048" for this suite.
Feb 15 14:38:22.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:38:22.439: INFO: namespace kubectl-2048 deletion completed in 6.18785279s

• [SLOW TEST:10.804 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
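
The stderr line above is the interesting part: on this v1.15 client a bare kubectl run still defaults to the deployment/apps.v1 generator, hence "deployment.apps/e2e-test-nginx-deployment created", while the warning points at --generator=run-pod/v1 or kubectl create as replacements. A sketch that shells out the same way the harness does, reusing the exact arguments from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Mirrors the harness invocation above; on kubectl v1.15 this emits the
	// generator deprecation warning on stderr and creates a Deployment.
	out, err := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config",
		"run", "e2e-test-nginx-deployment",
		"--image=docker.io/library/nginx:1.14-alpine",
		"--namespace=kubectl-2048",
	).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %s", err, out)
	}
	fmt.Printf("%s", out)

	// Cleanup, as in the AfterEach step of the test.
	if out, err := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config",
		"delete", "deployment", "e2e-test-nginx-deployment",
		"--namespace=kubectl-2048",
	).CombinedOutput(); err != nil {
		log.Fatalf("%v: %s", err, out)
	}
}
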
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:38:22.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb 15 14:38:32.643: INFO: Pod pod-hostip-5cae7938-c68c-4b73-847e-f58feeb816ac has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:38:32.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5703" for this suite.
Feb 15 14:38:54.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:38:55.257: INFO: namespace pods-5703 deletion completed in 22.60693207s

• [SLOW TEST:32.817 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
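
status.hostIP is filled in by the kubelet once the pod is bound to a node and its status is reported; the value logged above, 10.96.3.65, matches the iruya-node address seen elsewhere in this run. Reading the field with client-go (same v1.15-era assumption; the pod name below is the generated one from this run):

package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("pods-5703").Get(
		"pod-hostip-5cae7938-c68c-4b73-847e-f58feeb816ac", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Empty until the pod is scheduled and the kubelet reports status.
	fmt.Println("hostIP:", pod.Status.HostIP)
}
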
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:38:55.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:39:07.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8000" for this suite.
Feb 15 14:39:13.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:39:13.584: INFO: namespace kubelet-test-8000 deletion completed in 6.151690869s

• [SLOW TEST:18.326 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
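
The busybox pod in this test runs a command that always fails, and the assertion is that the kubelet eventually reports a terminated container state with a non-empty Reason. A sketch of that check, assuming the pod has already been fetched from the API server (the helper name is hypothetical):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // terminatedReason returns the Reason of the first container's terminated
    // state, or "" while the container has not terminated yet. For a command
    // that always fails, the kubelet is expected to fill this in (typically
    // with "Error") once the container exits non-zero.
    func terminatedReason(pod *corev1.Pod) string {
        if len(pod.Status.ContainerStatuses) == 0 {
            return ""
        }
        if t := pod.Status.ContainerStatuses[0].State.Terminated; t != nil {
            return t.Reason
        }
        return ""
    }

    func main() {
        pod := &corev1.Pod{} // in the real test this comes from the API server
        fmt.Println(terminatedReason(pod)) // prints "" for this empty stub
    }
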
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:39:13.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 15 14:39:13.657: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 15 14:39:13.697: INFO: Waiting for terminating namespaces to be deleted...
Feb 15 14:39:13.701: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb 15 14:39:13.719: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 15 14:39:13.719: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 15 14:39:13.719: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 15 14:39:13.719: INFO: 	Container weave ready: true, restart count 0
Feb 15 14:39:13.719: INFO: 	Container weave-npc ready: true, restart count 0
Feb 15 14:39:13.719: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 15 14:39:13.719: INFO: 	Container kube-bench ready: false, restart count 0
Feb 15 14:39:13.719: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb 15 14:39:13.733: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 15 14:39:13.733: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 15 14:39:13.733: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 15 14:39:13.733: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 15 14:39:13.733: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 15 14:39:13.733: INFO: 	Container coredns ready: true, restart count 0
Feb 15 14:39:13.733: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 15 14:39:13.733: INFO: 	Container etcd ready: true, restart count 0
Feb 15 14:39:13.733: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 15 14:39:13.733: INFO: 	Container weave ready: true, restart count 0
Feb 15 14:39:13.733: INFO: 	Container weave-npc ready: true, restart count 0
Feb 15 14:39:13.733: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 15 14:39:13.733: INFO: 	Container coredns ready: true, restart count 0
Feb 15 14:39:13.733: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 15 14:39:13.733: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 15 14:39:13.733: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 15 14:39:13.733: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb 15 14:39:13.906: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 15 14:39:13.907: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 15 14:39:13.907: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 15 14:39:13.907: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb 15 14:39:13.907: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb 15 14:39:13.907: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 15 14:39:13.907: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb 15 14:39:13.907: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 15 14:39:13.907: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb 15 14:39:13.907: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-36367c05-d722-41c4-bf08-887f32f05fc7.15f39a4e0285e0f6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6625/filler-pod-36367c05-d722-41c4-bf08-887f32f05fc7 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-36367c05-d722-41c4-bf08-887f32f05fc7.15f39a4f2672adc8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-36367c05-d722-41c4-bf08-887f32f05fc7.15f39a4ff8f80e39], Reason = [Created], Message = [Created container filler-pod-36367c05-d722-41c4-bf08-887f32f05fc7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-36367c05-d722-41c4-bf08-887f32f05fc7.15f39a5020efde62], Reason = [Started], Message = [Started container filler-pod-36367c05-d722-41c4-bf08-887f32f05fc7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cb602c6b-41d5-4cfd-b807-9bcc04bbafcd.15f39a4e00ae16dd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6625/filler-pod-cb602c6b-41d5-4cfd-b807-9bcc04bbafcd to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cb602c6b-41d5-4cfd-b807-9bcc04bbafcd.15f39a4f2d740036], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cb602c6b-41d5-4cfd-b807-9bcc04bbafcd.15f39a5010b4c007], Reason = [Created], Message = [Created container filler-pod-cb602c6b-41d5-4cfd-b807-9bcc04bbafcd]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cb602c6b-41d5-4cfd-b807-9bcc04bbafcd.15f39a503c285787], Reason = [Started], Message = [Started container filler-pod-cb602c6b-41d5-4cfd-b807-9bcc04bbafcd]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f39a50cf7d1e3b], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:39:27.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6625" for this suite.
Feb 15 14:39:35.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:39:35.404: INFO: namespace sched-pred-6625 deletion completed in 8.175403698s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.820 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
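
The predicate test above works in two moves: filler pods sized from the logged per-node requests soak up almost all allocatable CPU, then one more pod asks for more than the remainder and must fail scheduling with the "Insufficient cpu" event shown. A sketch of that last pod, with an assumed 600m request (the suite derives the actual figure from node capacity at run time):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // additionalPod sketches the final pod of the test: the filler pods have
    // already consumed most allocatable CPU, so a request this size cannot fit
    // on any node and the scheduler emits the FailedScheduling event logged
    // above ("0/2 nodes are available: 2 Insufficient cpu.").
    func additionalPod(cpu string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.1", // the image the filler pods use, per the events
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceCPU: resource.MustParse(cpu),
                        },
                    },
                }},
            },
        }
    }

    func main() {
        _ = additionalPod("600m") // assumed value, see note above
    }
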
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:39:35.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-d82fbf93-19fc-410e-9884-e80f2c9e6bd6
STEP: Creating a pod to test consume secrets
Feb 15 14:39:37.219: INFO: Waiting up to 5m0s for pod "pod-secrets-df3aee33-0275-4a8b-a19c-d586d74a235e" in namespace "secrets-121" to be "success or failure"
Feb 15 14:39:37.233: INFO: Pod "pod-secrets-df3aee33-0275-4a8b-a19c-d586d74a235e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.863778ms
Feb 15 14:39:39.242: INFO: Pod "pod-secrets-df3aee33-0275-4a8b-a19c-d586d74a235e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022348346s
Feb 15 14:39:41.248: INFO: Pod "pod-secrets-df3aee33-0275-4a8b-a19c-d586d74a235e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028562996s
Feb 15 14:39:43.318: INFO: Pod "pod-secrets-df3aee33-0275-4a8b-a19c-d586d74a235e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098675513s
Feb 15 14:39:45.324: INFO: Pod "pod-secrets-df3aee33-0275-4a8b-a19c-d586d74a235e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104849238s
STEP: Saw pod success
Feb 15 14:39:45.324: INFO: Pod "pod-secrets-df3aee33-0275-4a8b-a19c-d586d74a235e" satisfied condition "success or failure"
Feb 15 14:39:45.327: INFO: Trying to get logs from node iruya-node pod pod-secrets-df3aee33-0275-4a8b-a19c-d586d74a235e container secret-volume-test: 
STEP: delete the pod
Feb 15 14:39:45.392: INFO: Waiting for pod pod-secrets-df3aee33-0275-4a8b-a19c-d586d74a235e to disappear
Feb 15 14:39:45.406: INFO: Pod pod-secrets-df3aee33-0275-4a8b-a19c-d586d74a235e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:39:45.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-121" for this suite.
Feb 15 14:39:51.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:39:51.562: INFO: namespace secrets-121 deletion completed in 6.149402468s

• [SLOW TEST:16.157 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
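
DefaultMode on the secret volume source is what this test exercises: it sets the permission bits applied to every file projected from the secret. A sketch of the consuming pod, with 0400 as an assumed mode and busybox standing in for the suite's mount-test image:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // secretPod sketches the consuming pod: DefaultMode controls the mode bits
    // of the projected key files, which is what the test then reads back.
    func secretPod(secretName string) *corev1.Pod {
        mode := int32(0400)
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName:  secretName,
                            DefaultMode: &mode,
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"ls", "-l", "/etc/secret-volume"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                    }},
                }},
            },
        }
    }

    func main() { _ = secretPod("secret-test-d82fbf93-19fc-410e-9884-e80f2c9e6bd6") }
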
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:39:51.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 15 14:39:51.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-763'
Feb 15 14:39:51.848: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 15 14:39:51.848: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb 15 14:39:53.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-763'
Feb 15 14:39:54.091: INFO: stderr: ""
Feb 15 14:39:54.091: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:39:54.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-763" for this suite.
Feb 15 14:40:00.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:40:00.325: INFO: namespace kubectl-763 deletion completed in 6.176194141s

• [SLOW TEST:8.763 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
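
The stderr line above is the relevant detail: --generator=deployment/apps.v1 was already deprecated at v1.15, with kubectl create deployment as the suggested replacement. A sketch of the equivalent invocation, shelling out the way the framework does (kubeconfig, namespace, name and image copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Replacement for the deprecated "kubectl run --generator=deployment/apps.v1"
        // call logged above.
        out, err := exec.Command(
            "kubectl", "--kubeconfig=/root/.kube/config",
            "create", "deployment", "e2e-test-nginx-deployment",
            "--image=docker.io/library/nginx:1.14-alpine",
            "--namespace=kubectl-763",
        ).CombinedOutput()
        fmt.Println(string(out), err)
    }
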
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:40:00.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 15 14:40:30.499: INFO: Container started at 2020-02-15 14:40:06 +0000 UTC, pod became ready at 2020-02-15 14:40:28 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:40:30.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-894" for this suite.
Feb 15 14:40:52.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:40:52.654: INFO: namespace container-probe-894 deletion completed in 22.147032346s

• [SLOW TEST:52.328 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
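
The timestamps in the log carry the assertion: the container started at 14:40:06 but the pod only turned Ready at 14:40:28, i.e. not before the probe's initial delay, and the restart count stayed at zero because readiness failures, unlike liveness failures, never restart a container. A sketch of such a probe, with the 20s delay and the image as assumptions:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // readinessPod sketches a pod gated by an HTTP readiness probe with an
    // initial delay: the pod must not report Ready before the delay elapses.
    func readinessPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "test-webserver",
                    Image: "nginx:1.14-alpine",
                    ReadinessProbe: &corev1.Probe{
                        // v1.15 API: the embedded field is named Handler;
                        // newer releases renamed it to ProbeHandler.
                        Handler: corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
                        },
                        InitialDelaySeconds: 20,
                        PeriodSeconds:       5,
                    },
                }},
            },
        }
    }

    func main() { _ = readinessPod() }
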
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:40:52.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-0a42a54c-3a4e-4627-990f-4cce4a1d1129
STEP: Creating a pod to test consume configMaps
Feb 15 14:40:52.845: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1d0a8290-9dd0-4bfc-9c38-5dde0eeb8d47" in namespace "projected-8690" to be "success or failure"
Feb 15 14:40:52.859: INFO: Pod "pod-projected-configmaps-1d0a8290-9dd0-4bfc-9c38-5dde0eeb8d47": Phase="Pending", Reason="", readiness=false. Elapsed: 13.824577ms
Feb 15 14:40:54.869: INFO: Pod "pod-projected-configmaps-1d0a8290-9dd0-4bfc-9c38-5dde0eeb8d47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023953526s
Feb 15 14:40:56.884: INFO: Pod "pod-projected-configmaps-1d0a8290-9dd0-4bfc-9c38-5dde0eeb8d47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039183697s
Feb 15 14:40:58.896: INFO: Pod "pod-projected-configmaps-1d0a8290-9dd0-4bfc-9c38-5dde0eeb8d47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051154642s
Feb 15 14:41:00.914: INFO: Pod "pod-projected-configmaps-1d0a8290-9dd0-4bfc-9c38-5dde0eeb8d47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068614042s
STEP: Saw pod success
Feb 15 14:41:00.914: INFO: Pod "pod-projected-configmaps-1d0a8290-9dd0-4bfc-9c38-5dde0eeb8d47" satisfied condition "success or failure"
Feb 15 14:41:00.922: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1d0a8290-9dd0-4bfc-9c38-5dde0eeb8d47 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 14:41:01.033: INFO: Waiting for pod pod-projected-configmaps-1d0a8290-9dd0-4bfc-9c38-5dde0eeb8d47 to disappear
Feb 15 14:41:01.039: INFO: Pod pod-projected-configmaps-1d0a8290-9dd0-4bfc-9c38-5dde0eeb8d47 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:41:01.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8690" for this suite.
Feb 15 14:41:07.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:41:07.205: INFO: namespace projected-8690 deletion completed in 6.156321647s

• [SLOW TEST:14.550 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
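
"With mappings" means the ConfigMap key is remapped to a different file path via Items, instead of being projected under its own name. A sketch of just the volume, with the key and path literals as illustrative assumptions:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // projectedVolume sketches the mapping: Items remaps a ConfigMap key to a
    // different relative path inside the volume, so the container reads the
    // value at the mapped path rather than under the key's own name.
    func projectedVolume(configMapName string) corev1.Volume {
        return corev1.Volume{
            Name: "projected-configmap-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",
                                Path: "path/to/data-2",
                            }},
                        },
                    }},
                },
            },
        }
    }

    func main() {
        _ = projectedVolume("projected-configmap-test-volume-map-0a42a54c-3a4e-4627-990f-4cce4a1d1129")
    }
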
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:41:07.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb 15 14:41:07.322: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:41:07.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8594" for this suite.
Feb 15 14:41:13.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:41:13.678: INFO: namespace kubectl-8594 deletion completed in 6.178139692s

• [SLOW TEST:6.473 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
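
-p 0 (i.e. --port=0) tells kubectl proxy to bind an ephemeral port, so the test has to learn the chosen port from the proxy's startup output before curling /api/ through it. A sketch of that scrape (--disable-filter, which the suite also passes, is omitted here):

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
    )

    func main() {
        // "-p 0" asks the proxy to bind an ephemeral port, as in the test above.
        cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "proxy", "-p", "0")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        // The chosen port only shows up in the startup line, which looks like
        // "Starting to serve on 127.0.0.1:<port>".
        line, _ := bufio.NewReader(stdout).ReadString('\n')
        fmt.Print(line)
        _ = cmd.Process.Kill()
    }
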
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:41:13.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-fc1d4f74-f9ff-4e99-ab75-fb1259ae7c6e
STEP: Creating a pod to test consume configMaps
Feb 15 14:41:13.842: INFO: Waiting up to 5m0s for pod "pod-configmaps-e3f6df39-5063-4a1c-9ce9-0222503c3d2a" in namespace "configmap-3742" to be "success or failure"
Feb 15 14:41:13.883: INFO: Pod "pod-configmaps-e3f6df39-5063-4a1c-9ce9-0222503c3d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.472853ms
Feb 15 14:41:15.893: INFO: Pod "pod-configmaps-e3f6df39-5063-4a1c-9ce9-0222503c3d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050494634s
Feb 15 14:41:17.901: INFO: Pod "pod-configmaps-e3f6df39-5063-4a1c-9ce9-0222503c3d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05786512s
Feb 15 14:41:19.915: INFO: Pod "pod-configmaps-e3f6df39-5063-4a1c-9ce9-0222503c3d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071625094s
Feb 15 14:41:21.923: INFO: Pod "pod-configmaps-e3f6df39-5063-4a1c-9ce9-0222503c3d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080433303s
Feb 15 14:41:23.941: INFO: Pod "pod-configmaps-e3f6df39-5063-4a1c-9ce9-0222503c3d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.098236545s
Feb 15 14:41:25.951: INFO: Pod "pod-configmaps-e3f6df39-5063-4a1c-9ce9-0222503c3d2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.108056022s
STEP: Saw pod success
Feb 15 14:41:25.951: INFO: Pod "pod-configmaps-e3f6df39-5063-4a1c-9ce9-0222503c3d2a" satisfied condition "success or failure"
Feb 15 14:41:25.959: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e3f6df39-5063-4a1c-9ce9-0222503c3d2a container configmap-volume-test: 
STEP: delete the pod
Feb 15 14:41:26.088: INFO: Waiting for pod pod-configmaps-e3f6df39-5063-4a1c-9ce9-0222503c3d2a to disappear
Feb 15 14:41:26.097: INFO: Pod pod-configmaps-e3f6df39-5063-4a1c-9ce9-0222503c3d2a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:41:26.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3742" for this suite.
Feb 15 14:41:32.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:41:32.275: INFO: namespace configmap-3742 deletion completed in 6.173464354s

• [SLOW TEST:18.597 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
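
The "as non-root" variant adds a pod-level SecurityContext that forces a non-zero UID, and the file projected from the ConfigMap must still be readable by that user. A sketch with UID 1000, the image and the key name all assumed:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // nonRootConfigMapPod sketches the non-root consumer: RunAsUser makes the
    // container run as an unprivileged UID, and the projected file must still
    // be readable by it.
    func nonRootConfigMapPod(configMapName string) *corev1.Pod {
        uid := int64(1000)
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                RestartPolicy:   corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "busybox",
                    Command: []string{"cat", "/etc/configmap-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                    }},
                }},
            },
        }
    }

    func main() { _ = nonRootConfigMapPod("configmap-test-volume-fc1d4f74-f9ff-4e99-ab75-fb1259ae7c6e") }
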
SSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:41:32.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-sj65d in namespace proxy-4757
I0215 14:41:32.515038       8 runners.go:180] Created replication controller with name: proxy-service-sj65d, namespace: proxy-4757, replica count: 1
I0215 14:41:33.567052       8 runners.go:180] proxy-service-sj65d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 14:41:34.568274       8 runners.go:180] proxy-service-sj65d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 14:41:35.569417       8 runners.go:180] proxy-service-sj65d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 14:41:36.570907       8 runners.go:180] proxy-service-sj65d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 14:41:37.571620       8 runners.go:180] proxy-service-sj65d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 14:41:38.572285       8 runners.go:180] proxy-service-sj65d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 14:41:39.572855       8 runners.go:180] proxy-service-sj65d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0215 14:41:40.573917       8 runners.go:180] proxy-service-sj65d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0215 14:41:41.574889       8 runners.go:180] proxy-service-sj65d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0215 14:41:42.576270       8 runners.go:180] proxy-service-sj65d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0215 14:41:43.576843       8 runners.go:180] proxy-service-sj65d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0215 14:41:44.577873       8 runners.go:180] proxy-service-sj65d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0215 14:41:45.578629       8 runners.go:180] proxy-service-sj65d Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 15 14:41:45.590: INFO: setup took 13.251253632s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 15 14:41:45.613: INFO: (0) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 22.90535ms)
Feb 15 14:41:45.616: INFO: (0) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 25.924539ms)
Feb 15 14:41:45.616: INFO: (0) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 26.024217ms)
Feb 15 14:41:45.617: INFO: (0) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 26.620527ms)
Feb 15 14:41:45.621: INFO: (0) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 30.865699ms)
Feb 15 14:41:45.627: INFO: (0) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 36.912619ms)
Feb 15 14:41:45.629: INFO: (0) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 38.700451ms)
Feb 15 14:41:45.629: INFO: (0) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl/proxy/: test (200; 39.183585ms)
Feb 15 14:41:45.630: INFO: (0) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 40.513786ms)
Feb 15 14:41:45.631: INFO: (0) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 40.76094ms)
Feb 15 14:41:45.633: INFO: (0) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 43.170167ms)
Feb 15 14:41:45.636: INFO: (0) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: ... (200; 14.557496ms)
Feb 15 14:41:45.658: INFO: (1) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 15.125274ms)
Feb 15 14:41:45.658: INFO: (1) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 14.565818ms)
Feb 15 14:41:45.659: INFO: (1) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl/proxy/: test (200; 14.684778ms)
Feb 15 14:41:45.661: INFO: (1) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 18.032848ms)
Feb 15 14:41:45.661: INFO: (1) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 18.156552ms)
Feb 15 14:41:45.662: INFO: (1) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 18.169191ms)
Feb 15 14:41:45.662: INFO: (1) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 18.792698ms)
Feb 15 14:41:45.664: INFO: (1) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 20.352113ms)
Feb 15 14:41:45.667: INFO: (1) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 22.771279ms)
Feb 15 14:41:45.668: INFO: (1) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 25.148305ms)
Feb 15 14:41:45.669: INFO: (1) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 25.90326ms)
Feb 15 14:41:45.670: INFO: (1) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 26.687118ms)
Feb 15 14:41:45.670: INFO: (1) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 27.168292ms)
Feb 15 14:41:45.670: INFO: (1) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 26.880152ms)
Feb 15 14:41:45.682: INFO: (2) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 11.42784ms)
Feb 15 14:41:45.683: INFO: (2) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 12.251966ms)
Feb 15 14:41:45.683: INFO: (2) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 12.316944ms)
Feb 15 14:41:45.683: INFO: (2) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 12.533713ms)
Feb 15 14:41:45.690: INFO: (2) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 19.617891ms)
Feb 15 14:41:45.690: INFO: (2) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 19.79437ms)
Feb 15 14:41:45.690: INFO: (2) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 19.581697ms)
Feb 15 14:41:45.690: INFO: (2) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl/proxy/: test (200; 19.842484ms)
Feb 15 14:41:45.690: INFO: (2) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 19.71748ms)
Feb 15 14:41:45.691: INFO: (2) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 19.922317ms)
Feb 15 14:41:45.691: INFO: (2) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 19.838777ms)
Feb 15 14:41:45.691: INFO: (2) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 19.949005ms)
Feb 15 14:41:45.691: INFO: (2) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: test<... (200; 12.028524ms)
Feb 15 14:41:45.704: INFO: (3) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 11.320147ms)
Feb 15 14:41:45.704: INFO: (3) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 13.021382ms)
Feb 15 14:41:45.704: INFO: (3) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 12.138668ms)
Feb 15 14:41:45.705: INFO: (3) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 13.542876ms)
Feb 15 14:41:45.705: INFO: (3) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 13.02977ms)
Feb 15 14:41:45.705: INFO: (3) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 12.245062ms)
Feb 15 14:41:45.705: INFO: (3) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl/proxy/: test (200; 12.493651ms)
Feb 15 14:41:45.706: INFO: (3) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 13.409367ms)
Feb 15 14:41:45.706: INFO: (3) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 14.62875ms)
Feb 15 14:41:45.707: INFO: (3) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 16.04823ms)
Feb 15 14:41:45.707: INFO: (3) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 15.010502ms)
Feb 15 14:41:45.708: INFO: (3) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 15.218891ms)
Feb 15 14:41:45.708: INFO: (3) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 15.162127ms)
Feb 15 14:41:45.716: INFO: (4) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 8.131658ms)
Feb 15 14:41:45.716: INFO: (4) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 8.523003ms)
Feb 15 14:41:45.716: INFO: (4) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 8.317262ms)
Feb 15 14:41:45.716: INFO: (4) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl/proxy/: test (200; 8.464213ms)
Feb 15 14:41:45.717: INFO: (4) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 8.804372ms)
Feb 15 14:41:45.721: INFO: (4) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 12.909778ms)
Feb 15 14:41:45.721: INFO: (4) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 13.036068ms)
Feb 15 14:41:45.722: INFO: (4) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 14.335103ms)
Feb 15 14:41:45.722: INFO: (4) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: ... (200; 15.047313ms)
Feb 15 14:41:45.723: INFO: (4) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 15.227886ms)
Feb 15 14:41:45.724: INFO: (4) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 15.928156ms)
Feb 15 14:41:45.724: INFO: (4) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 16.455965ms)
Feb 15 14:41:45.726: INFO: (4) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 17.941604ms)
Feb 15 14:41:45.727: INFO: (4) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 19.136509ms)
Feb 15 14:41:45.737: INFO: (5) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 9.490548ms)
Feb 15 14:41:45.737: INFO: (5) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 9.598063ms)
Feb 15 14:41:45.737: INFO: (5) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 9.577161ms)
Feb 15 14:41:45.738: INFO: (5) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: test (200; 10.97041ms)
Feb 15 14:41:45.739: INFO: (5) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 11.697132ms)
Feb 15 14:41:45.739: INFO: (5) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 11.751206ms)
Feb 15 14:41:45.740: INFO: (5) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 12.741399ms)
Feb 15 14:41:45.746: INFO: (5) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 18.680219ms)
Feb 15 14:41:45.746: INFO: (5) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 18.904549ms)
Feb 15 14:41:45.746: INFO: (5) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 19.219059ms)
Feb 15 14:41:45.747: INFO: (5) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 19.255917ms)
Feb 15 14:41:45.747: INFO: (5) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 19.358125ms)
Feb 15 14:41:45.749: INFO: (5) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 21.701626ms)
Feb 15 14:41:45.756: INFO: (6) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 6.368648ms)
Feb 15 14:41:45.756: INFO: (6) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl/proxy/: test (200; 6.962855ms)
Feb 15 14:41:45.757: INFO: (6) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 7.615391ms)
Feb 15 14:41:45.762: INFO: (6) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 12.560741ms)
Feb 15 14:41:45.762: INFO: (6) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 12.818959ms)
Feb 15 14:41:45.762: INFO: (6) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 13.269051ms)
Feb 15 14:41:45.764: INFO: (6) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 14.076814ms)
Feb 15 14:41:45.764: INFO: (6) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: test<... (200; 14.937629ms)
Feb 15 14:41:45.769: INFO: (6) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 19.688467ms)
Feb 15 14:41:45.773: INFO: (6) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 23.778894ms)
Feb 15 14:41:45.781: INFO: (6) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 32.07108ms)
Feb 15 14:41:45.781: INFO: (6) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 31.886455ms)
Feb 15 14:41:45.782: INFO: (6) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 32.416807ms)
Feb 15 14:41:45.782: INFO: (6) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 32.523492ms)
Feb 15 14:41:45.837: INFO: (7) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 54.984372ms)
Feb 15 14:41:45.837: INFO: (7) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 55.257603ms)
Feb 15 14:41:45.842: INFO: (7) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl/proxy/: test (200; 59.809301ms)
Feb 15 14:41:45.842: INFO: (7) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 60.006184ms)
Feb 15 14:41:45.843: INFO: (7) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 60.278241ms)
Feb 15 14:41:45.843: INFO: (7) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 60.593898ms)
Feb 15 14:41:45.844: INFO: (7) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: test (200; 10.975867ms)
Feb 15 14:41:45.861: INFO: (8) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 13.37049ms)
Feb 15 14:41:45.862: INFO: (8) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 13.986629ms)
Feb 15 14:41:45.862: INFO: (8) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 14.469281ms)
Feb 15 14:41:45.862: INFO: (8) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 15.004908ms)
Feb 15 14:41:45.863: INFO: (8) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 14.753672ms)
Feb 15 14:41:45.863: INFO: (8) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 14.69059ms)
Feb 15 14:41:45.863: INFO: (8) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 15.622898ms)
Feb 15 14:41:45.863: INFO: (8) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 15.64254ms)
Feb 15 14:41:45.864: INFO: (8) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: test (200; 7.54591ms)
Feb 15 14:41:45.874: INFO: (9) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 8.700227ms)
Feb 15 14:41:45.874: INFO: (9) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 8.974014ms)
Feb 15 14:41:45.874: INFO: (9) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 8.867699ms)
Feb 15 14:41:45.877: INFO: (9) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 11.414845ms)
Feb 15 14:41:45.878: INFO: (9) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 12.61448ms)
Feb 15 14:41:45.879: INFO: (9) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 13.716979ms)
Feb 15 14:41:45.879: INFO: (9) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 13.95336ms)
Feb 15 14:41:45.879: INFO: (9) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: test (200; 8.143918ms)
Feb 15 14:41:45.890: INFO: (10) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: ... (200; 10.392701ms)
Feb 15 14:41:45.893: INFO: (10) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 10.544648ms)
Feb 15 14:41:45.893: INFO: (10) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 11.146421ms)
Feb 15 14:41:45.894: INFO: (10) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 11.434703ms)
Feb 15 14:41:45.894: INFO: (10) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 11.579499ms)
Feb 15 14:41:45.894: INFO: (10) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 11.500107ms)
Feb 15 14:41:45.894: INFO: (10) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 11.486639ms)
Feb 15 14:41:45.895: INFO: (10) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 12.694565ms)
Feb 15 14:41:45.915: INFO: (11) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 20.164718ms)
Feb 15 14:41:45.915: INFO: (11) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 20.124292ms)
Feb 15 14:41:45.916: INFO: (11) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 20.402624ms)
Feb 15 14:41:45.916: INFO: (11) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 20.518432ms)
Feb 15 14:41:45.916: INFO: (11) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 20.808112ms)
Feb 15 14:41:45.916: INFO: (11) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 20.856653ms)
Feb 15 14:41:45.916: INFO: (11) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 20.950411ms)
Feb 15 14:41:45.916: INFO: (11) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 20.894284ms)
Feb 15 14:41:45.916: INFO: (11) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: ... (200; 21.765716ms)
Feb 15 14:41:45.917: INFO: (11) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 21.90421ms)
Feb 15 14:41:45.918: INFO: (11) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 22.557896ms)
Feb 15 14:41:45.918: INFO: (11) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl/proxy/: test (200; 22.458383ms)
Feb 15 14:41:45.918: INFO: (11) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 22.569911ms)
Feb 15 14:41:45.918: INFO: (11) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 22.573998ms)
Feb 15 14:41:45.918: INFO: (11) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 22.963967ms)
Feb 15 14:41:45.928: INFO: (12) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 8.892097ms)
Feb 15 14:41:45.928: INFO: (12) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 8.61765ms)
Feb 15 14:41:45.928: INFO: (12) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 9.063409ms)
Feb 15 14:41:45.929: INFO: (12) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 9.337639ms)
Feb 15 14:41:45.929: INFO: (12) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: test<... (200; 10.751379ms)
Feb 15 14:41:45.931: INFO: (12) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 11.212736ms)
Feb 15 14:41:45.931: INFO: (12) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 10.944686ms)
Feb 15 14:41:45.931: INFO: (12) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 11.55166ms)
Feb 15 14:41:45.931: INFO: (12) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl/proxy/: test (200; 12.496435ms)
Feb 15 14:41:45.933: INFO: (12) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 14.027277ms)
Feb 15 14:41:45.933: INFO: (12) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 13.417225ms)
Feb 15 14:41:45.933: INFO: (12) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 13.655779ms)
Feb 15 14:41:45.945: INFO: (13) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 11.586992ms)
Feb 15 14:41:45.945: INFO: (13) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 11.678081ms)
Feb 15 14:41:45.946: INFO: (13) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl/proxy/: test (200; 11.831048ms)
Feb 15 14:41:45.946: INFO: (13) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 12.193033ms)
Feb 15 14:41:45.947: INFO: (13) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 12.716528ms)
Feb 15 14:41:45.947: INFO: (13) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 12.851305ms)
Feb 15 14:41:45.947: INFO: (13) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 13.132228ms)
Feb 15 14:41:45.947: INFO: (13) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 12.822224ms)
Feb 15 14:41:45.947: INFO: (13) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 13.075506ms)
Feb 15 14:41:45.947: INFO: (13) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: test (200; 15.358706ms)
Feb 15 14:41:45.971: INFO: (14) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 16.720331ms)
Feb 15 14:41:45.971: INFO: (14) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 16.795365ms)
Feb 15 14:41:45.971: INFO: (14) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: test<... (200; 18.246291ms)
Feb 15 14:41:45.973: INFO: (14) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 18.305735ms)
Feb 15 14:41:45.973: INFO: (14) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 18.384565ms)
Feb 15 14:41:45.973: INFO: (14) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 18.851541ms)
Feb 15 14:41:45.973: INFO: (14) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 18.806754ms)
Feb 15 14:41:45.973: INFO: (14) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 19.364699ms)
Feb 15 14:41:45.980: INFO: (15) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 6.455918ms)
Feb 15 14:41:45.984: INFO: (15) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 10.237664ms)
Feb 15 14:41:45.994: INFO: (15) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 20.752807ms)
Feb 15 14:41:45.995: INFO: (15) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 21.00269ms)
Feb 15 14:41:45.995: INFO: (15) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 21.077962ms)
Feb 15 14:41:45.995: INFO: (15) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl/proxy/: test (200; 21.320525ms)
Feb 15 14:41:45.995: INFO: (15) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 21.062653ms)
Feb 15 14:41:45.995: INFO: (15) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 21.977661ms)
Feb 15 14:41:45.996: INFO: (15) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 22.483666ms)
Feb 15 14:41:45.996: INFO: (15) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 22.627817ms)
Feb 15 14:41:45.997: INFO: (15) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 22.919572ms)
Feb 15 14:41:45.997: INFO: (15) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 23.402473ms)
Feb 15 14:41:45.997: INFO: (15) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: ... (200; 23.249569ms)
Feb 15 14:41:45.998: INFO: (15) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 24.125683ms)
Feb 15 14:41:45.998: INFO: (15) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 23.996425ms)
Feb 15 14:41:46.017: INFO: (16) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 17.491477ms)
Feb 15 14:41:46.017: INFO: (16) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 18.261369ms)
Feb 15 14:41:46.018: INFO: (16) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 19.739488ms)
Feb 15 14:41:46.018: INFO: (16) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 18.720519ms)
Feb 15 14:41:46.018: INFO: (16) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: test (200; 17.658725ms)
Feb 15 14:41:46.019: INFO: (16) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 18.973663ms)
Feb 15 14:41:46.019: INFO: (16) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 20.091443ms)
Feb 15 14:41:46.019: INFO: (16) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 18.569405ms)
Feb 15 14:41:46.019: INFO: (16) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 19.559995ms)
Feb 15 14:41:46.019: INFO: (16) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 19.926114ms)
Feb 15 14:41:46.020: INFO: (16) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 19.308066ms)
Feb 15 14:41:46.020: INFO: (16) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 19.864944ms)
Feb 15 14:41:46.020: INFO: (16) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 21.127233ms)
Feb 15 14:41:46.020: INFO: (16) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 20.820911ms)
Feb 15 14:41:46.020: INFO: (16) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 20.990763ms)
Feb 15 14:41:46.038: INFO: (17) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 17.121072ms)
Feb 15 14:41:46.038: INFO: (17) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl/proxy/: test (200; 17.12699ms)
Feb 15 14:41:46.045: INFO: (17) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 23.567265ms)
Feb 15 14:41:46.045: INFO: (17) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 24.264233ms)
Feb 15 14:41:46.046: INFO: (17) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 24.807268ms)
Feb 15 14:41:46.046: INFO: (17) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 25.272288ms)
Feb 15 14:41:46.048: INFO: (17) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 26.980789ms)
Feb 15 14:41:46.049: INFO: (17) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 28.263973ms)
Feb 15 14:41:46.049: INFO: (17) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 28.398692ms)
Feb 15 14:41:46.049: INFO: (17) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 28.489397ms)
Feb 15 14:41:46.050: INFO: (17) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 28.923639ms)
Feb 15 14:41:46.050: INFO: (17) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 28.997964ms)
Feb 15 14:41:46.050: INFO: (17) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: test (200; 8.301431ms)
Feb 15 14:41:46.060: INFO: (18) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 8.824097ms)
Feb 15 14:41:46.060: INFO: (18) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 8.727821ms)
Feb 15 14:41:46.060: INFO: (18) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:1080/proxy/: ... (200; 8.485032ms)
Feb 15 14:41:46.063: INFO: (18) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 10.886988ms)
Feb 15 14:41:46.063: INFO: (18) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 11.603432ms)
Feb 15 14:41:46.063: INFO: (18) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 11.532724ms)
Feb 15 14:41:46.063: INFO: (18) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 10.910751ms)
Feb 15 14:41:46.063: INFO: (18) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 11.754529ms)
Feb 15 14:41:46.063: INFO: (18) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 11.180844ms)
Feb 15 14:41:46.063: INFO: (18) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 11.434992ms)
Feb 15 14:41:46.065: INFO: (18) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 12.105938ms)
Feb 15 14:41:46.065: INFO: (18) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 12.760965ms)
Feb 15 14:41:46.066: INFO: (18) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 13.46712ms)
Feb 15 14:41:46.074: INFO: (19) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:460/proxy/: tls baz (200; 7.204585ms)
Feb 15 14:41:46.074: INFO: (19) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:1080/proxy/: test<... (200; 7.596641ms)
Feb 15 14:41:46.076: INFO: (19) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:443/proxy/: ... (200; 10.2029ms)
Feb 15 14:41:46.077: INFO: (19) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl/proxy/: test (200; 10.511713ms)
Feb 15 14:41:46.078: INFO: (19) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 11.01172ms)
Feb 15 14:41:46.078: INFO: (19) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname1/proxy/: tls baz (200; 11.271493ms)
Feb 15 14:41:46.079: INFO: (19) /api/v1/namespaces/proxy-4757/pods/proxy-service-sj65d-8gwzl:160/proxy/: foo (200; 11.930633ms)
Feb 15 14:41:46.079: INFO: (19) /api/v1/namespaces/proxy-4757/pods/http:proxy-service-sj65d-8gwzl:162/proxy/: bar (200; 12.162413ms)
Feb 15 14:41:46.079: INFO: (19) /api/v1/namespaces/proxy-4757/pods/https:proxy-service-sj65d-8gwzl:462/proxy/: tls qux (200; 12.443361ms)
Feb 15 14:41:46.083: INFO: (19) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname2/proxy/: bar (200; 16.302598ms)
Feb 15 14:41:46.083: INFO: (19) /api/v1/namespaces/proxy-4757/services/proxy-service-sj65d:portname1/proxy/: foo (200; 16.292368ms)
Feb 15 14:41:46.083: INFO: (19) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname1/proxy/: foo (200; 16.551341ms)
Feb 15 14:41:46.084: INFO: (19) /api/v1/namespaces/proxy-4757/services/http:proxy-service-sj65d:portname2/proxy/: bar (200; 17.146125ms)
Feb 15 14:41:46.084: INFO: (19) /api/v1/namespaces/proxy-4757/services/https:proxy-service-sj65d:tlsportname2/proxy/: tls qux (200; 17.39363ms)
STEP: deleting ReplicationController proxy-service-sj65d in namespace proxy-4757, will wait for the garbage collector to delete the pods
Feb 15 14:41:46.162: INFO: Deleting ReplicationController proxy-service-sj65d took: 22.369176ms
Feb 15 14:41:48.163: INFO: Terminating ReplicationController proxy-service-sj65d pods took: 2.000990702s
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:41:53.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4757" for this suite.
Feb 15 14:41:59.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:41:59.443: INFO: namespace proxy-4757 deletion completed in 6.17238392s

• [SLOW TEST:27.167 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
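The paths exercised above follow the apiserver proxy subresource pattern /api/v1/namespaces/<ns>/services/[<scheme>:]<name>:<port-name>/proxy/ and .../pods/[<scheme>:]<pod>:<port>/proxy/. A Service shaped like the one behind these requests might look like the minimal sketch below; the selector and port/targetPort pairings are assumptions (only the service name, the port names portname1/tlsportname1, and the pod ports 160/460 appear in this run):

apiVersion: v1
kind: Service
metadata:
  name: proxy-service-sj65d
spec:
  selector:
    app: proxy-test              # assumed label; the run does not print the selector
  ports:
  - name: portname1              # reached via .../services/proxy-service-sj65d:portname1/proxy/
    port: 80
    targetPort: 160              # 160 is one of the pod ports probed above
  - name: tlsportname1           # reached via .../services/https:proxy-service-sj65d:tlsportname1/proxy/
    port: 443
    targetPort: 460              # 460 answers with TLS in the pod probes above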
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:41:59.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 15 14:41:59.572: INFO: Waiting up to 5m0s for pod "pod-f7346e11-f51d-47b0-b421-0d1cb49b2029" in namespace "emptydir-9049" to be "success or failure"
Feb 15 14:41:59.582: INFO: Pod "pod-f7346e11-f51d-47b0-b421-0d1cb49b2029": Phase="Pending", Reason="", readiness=false. Elapsed: 9.75243ms
Feb 15 14:42:01.591: INFO: Pod "pod-f7346e11-f51d-47b0-b421-0d1cb49b2029": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018766765s
Feb 15 14:42:03.601: INFO: Pod "pod-f7346e11-f51d-47b0-b421-0d1cb49b2029": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027973403s
Feb 15 14:42:05.608: INFO: Pod "pod-f7346e11-f51d-47b0-b421-0d1cb49b2029": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034961506s
Feb 15 14:42:07.614: INFO: Pod "pod-f7346e11-f51d-47b0-b421-0d1cb49b2029": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04106326s
STEP: Saw pod success
Feb 15 14:42:07.614: INFO: Pod "pod-f7346e11-f51d-47b0-b421-0d1cb49b2029" satisfied condition "success or failure"
Feb 15 14:42:07.617: INFO: Trying to get logs from node iruya-node pod pod-f7346e11-f51d-47b0-b421-0d1cb49b2029 container test-container: 
STEP: delete the pod
Feb 15 14:42:07.715: INFO: Waiting for pod pod-f7346e11-f51d-47b0-b421-0d1cb49b2029 to disappear
Feb 15 14:42:07.725: INFO: Pod pod-f7346e11-f51d-47b0-b421-0d1cb49b2029 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:42:07.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9049" for this suite.
Feb 15 14:42:13.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:42:13.930: INFO: namespace emptydir-9049 deletion completed in 6.193453461s

• [SLOW TEST:14.485 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
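A pod of the kind this spec creates, a tmpfs-backed emptyDir checked for root-owned 0777 permissions, might look like the minimal sketch below; the pod name, image, and command are illustrative assumptions (the suite uses its own mounttest image):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs       # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox               # assumed image
    command: ["sh", "-c", "stat -c '%U %a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs, per the "(root,0777,tmpfs)" spec name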
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:42:13.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-8fdd6266-8127-4d7c-8567-ca0fcdf2a9ee
STEP: Creating a pod to test consume secrets
Feb 15 14:42:14.095: INFO: Waiting up to 5m0s for pod "pod-secrets-4f5241ce-752b-47a5-b4aa-40e3af94e1ef" in namespace "secrets-9994" to be "success or failure"
Feb 15 14:42:14.105: INFO: Pod "pod-secrets-4f5241ce-752b-47a5-b4aa-40e3af94e1ef": Phase="Pending", Reason="", readiness=false. Elapsed: 10.293043ms
Feb 15 14:42:16.117: INFO: Pod "pod-secrets-4f5241ce-752b-47a5-b4aa-40e3af94e1ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021872221s
Feb 15 14:42:18.123: INFO: Pod "pod-secrets-4f5241ce-752b-47a5-b4aa-40e3af94e1ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028051156s
Feb 15 14:42:20.133: INFO: Pod "pod-secrets-4f5241ce-752b-47a5-b4aa-40e3af94e1ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038284527s
Feb 15 14:42:22.142: INFO: Pod "pod-secrets-4f5241ce-752b-47a5-b4aa-40e3af94e1ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046971991s
STEP: Saw pod success
Feb 15 14:42:22.142: INFO: Pod "pod-secrets-4f5241ce-752b-47a5-b4aa-40e3af94e1ef" satisfied condition "success or failure"
Feb 15 14:42:22.146: INFO: Trying to get logs from node iruya-node pod pod-secrets-4f5241ce-752b-47a5-b4aa-40e3af94e1ef container secret-volume-test: 
STEP: delete the pod
Feb 15 14:42:22.289: INFO: Waiting for pod pod-secrets-4f5241ce-752b-47a5-b4aa-40e3af94e1ef to disappear
Feb 15 14:42:22.304: INFO: Pod pod-secrets-4f5241ce-752b-47a5-b4aa-40e3af94e1ef no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:42:22.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9994" for this suite.
Feb 15 14:42:28.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:42:28.478: INFO: namespace secrets-9994 deletion completed in 6.154695823s

• [SLOW TEST:14.548 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
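A pod consuming the secret with a key-to-path mapping and a per-item file mode, as this spec does, might look like the sketch below; the key name, paths, image, and mode are assumptions (only the secret name comes from this run):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapped            # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                    # assumed image
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-8fdd6266-8127-4d7c-8567-ca0fcdf2a9ee
      items:
      - key: data-1                   # assumed key; the "mapping" remaps it to a new path
        path: new-path-data-1
        mode: 0400                    # "Item Mode set": per-file mode instead of defaultMode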
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:42:28.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb 15 14:42:28.567: INFO: Waiting up to 5m0s for pod "var-expansion-03bad749-323c-402c-bdee-d9aae698c88a" in namespace "var-expansion-5697" to be "success or failure"
Feb 15 14:42:28.582: INFO: Pod "var-expansion-03bad749-323c-402c-bdee-d9aae698c88a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.933915ms
Feb 15 14:42:30.589: INFO: Pod "var-expansion-03bad749-323c-402c-bdee-d9aae698c88a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022185641s
Feb 15 14:42:32.610: INFO: Pod "var-expansion-03bad749-323c-402c-bdee-d9aae698c88a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042730298s
Feb 15 14:42:34.621: INFO: Pod "var-expansion-03bad749-323c-402c-bdee-d9aae698c88a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053363589s
Feb 15 14:42:36.637: INFO: Pod "var-expansion-03bad749-323c-402c-bdee-d9aae698c88a": Phase="Running", Reason="", readiness=true. Elapsed: 8.069967636s
Feb 15 14:42:38.648: INFO: Pod "var-expansion-03bad749-323c-402c-bdee-d9aae698c88a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080558784s
STEP: Saw pod success
Feb 15 14:42:38.648: INFO: Pod "var-expansion-03bad749-323c-402c-bdee-d9aae698c88a" satisfied condition "success or failure"
Feb 15 14:42:38.654: INFO: Trying to get logs from node iruya-node pod var-expansion-03bad749-323c-402c-bdee-d9aae698c88a container dapi-container: 
STEP: delete the pod
Feb 15 14:42:38.712: INFO: Waiting for pod var-expansion-03bad749-323c-402c-bdee-d9aae698c88a to disappear
Feb 15 14:42:38.740: INFO: Pod var-expansion-03bad749-323c-402c-bdee-d9aae698c88a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:42:38.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5697" for this suite.
Feb 15 14:42:44.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:42:44.870: INFO: namespace var-expansion-5697 deletion completed in 6.120915629s

• [SLOW TEST:16.392 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
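Substituting values in a container's command uses the kubelet's $(VAR) expansion rather than a shell; a minimal pod demonstrating it could look like this sketch (name, image, and variable are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo       # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox               # assumed image
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["sh", "-c", "echo $(MESSAGE)"]   # $(MESSAGE) is expanded by the kubelet before exec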
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:42:44.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 15 14:42:44.979: INFO: Waiting up to 5m0s for pod "downward-api-3815df2b-e796-4936-85f6-3c7dfeedb12b" in namespace "downward-api-5163" to be "success or failure"
Feb 15 14:42:45.020: INFO: Pod "downward-api-3815df2b-e796-4936-85f6-3c7dfeedb12b": Phase="Pending", Reason="", readiness=false. Elapsed: 41.502395ms
Feb 15 14:42:47.063: INFO: Pod "downward-api-3815df2b-e796-4936-85f6-3c7dfeedb12b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08411116s
Feb 15 14:42:49.074: INFO: Pod "downward-api-3815df2b-e796-4936-85f6-3c7dfeedb12b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094736263s
Feb 15 14:42:51.091: INFO: Pod "downward-api-3815df2b-e796-4936-85f6-3c7dfeedb12b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112587059s
Feb 15 14:42:53.102: INFO: Pod "downward-api-3815df2b-e796-4936-85f6-3c7dfeedb12b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.123470153s
STEP: Saw pod success
Feb 15 14:42:53.102: INFO: Pod "downward-api-3815df2b-e796-4936-85f6-3c7dfeedb12b" satisfied condition "success or failure"
Feb 15 14:42:53.105: INFO: Trying to get logs from node iruya-node pod downward-api-3815df2b-e796-4936-85f6-3c7dfeedb12b container dapi-container: 
STEP: delete the pod
Feb 15 14:42:53.173: INFO: Waiting for pod downward-api-3815df2b-e796-4936-85f6-3c7dfeedb12b to disappear
Feb 15 14:42:53.177: INFO: Pod downward-api-3815df2b-e796-4936-85f6-3c7dfeedb12b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:42:53.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5163" for this suite.
Feb 15 14:43:00.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:43:01.189: INFO: namespace downward-api-5163 deletion completed in 8.007340284s

• [SLOW TEST:16.317 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
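The downward API exposes pod name, namespace, and IP as environment variables via fieldRef; a pod of the kind this spec creates might look like the sketch below (name, image, and variable names are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env         # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox               # assumed image
    command: ["sh", "-c", "env | grep '^POD_'"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP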
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:43:01.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 15 14:43:01.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30261d71-0abb-4f69-9a2d-6610239d86ca" in namespace "downward-api-1481" to be "success or failure"
Feb 15 14:43:01.362: INFO: Pod "downwardapi-volume-30261d71-0abb-4f69-9a2d-6610239d86ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.742185ms
Feb 15 14:43:03.373: INFO: Pod "downwardapi-volume-30261d71-0abb-4f69-9a2d-6610239d86ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015855201s
Feb 15 14:43:05.380: INFO: Pod "downwardapi-volume-30261d71-0abb-4f69-9a2d-6610239d86ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023298214s
Feb 15 14:43:07.391: INFO: Pod "downwardapi-volume-30261d71-0abb-4f69-9a2d-6610239d86ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033792329s
Feb 15 14:43:09.402: INFO: Pod "downwardapi-volume-30261d71-0abb-4f69-9a2d-6610239d86ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044715782s
STEP: Saw pod success
Feb 15 14:43:09.402: INFO: Pod "downwardapi-volume-30261d71-0abb-4f69-9a2d-6610239d86ca" satisfied condition "success or failure"
Feb 15 14:43:09.406: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-30261d71-0abb-4f69-9a2d-6610239d86ca container client-container: 
STEP: delete the pod
Feb 15 14:43:09.460: INFO: Waiting for pod downwardapi-volume-30261d71-0abb-4f69-9a2d-6610239d86ca to disappear
Feb 15 14:43:09.464: INFO: Pod downwardapi-volume-30261d71-0abb-4f69-9a2d-6610239d86ca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:43:09.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1481" for this suite.
Feb 15 14:43:15.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:43:15.711: INFO: namespace downward-api-1481 deletion completed in 6.190815065s

• [SLOW TEST:14.522 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
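Exposing a container's CPU limit through a downwardAPI volume uses resourceFieldRef with a divisor; a sketch of such a pod follows (name, image, limit value, and file path are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo  # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox               # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m                # assumed limit; with divisor 1m the file reads "500"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m            # report the limit in millicores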
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:43:15.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3773/configmap-test-358f3cad-4fde-4564-bb8d-4ee9440edd21
STEP: Creating a pod to test consume configMaps
Feb 15 14:43:15.836: INFO: Waiting up to 5m0s for pod "pod-configmaps-5fa7e922-21c7-46d4-95d7-a2bdd2c92301" in namespace "configmap-3773" to be "success or failure"
Feb 15 14:43:15.865: INFO: Pod "pod-configmaps-5fa7e922-21c7-46d4-95d7-a2bdd2c92301": Phase="Pending", Reason="", readiness=false. Elapsed: 28.256434ms
Feb 15 14:43:17.874: INFO: Pod "pod-configmaps-5fa7e922-21c7-46d4-95d7-a2bdd2c92301": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037585214s
Feb 15 14:43:19.884: INFO: Pod "pod-configmaps-5fa7e922-21c7-46d4-95d7-a2bdd2c92301": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047281938s
Feb 15 14:43:21.892: INFO: Pod "pod-configmaps-5fa7e922-21c7-46d4-95d7-a2bdd2c92301": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055254835s
Feb 15 14:43:23.913: INFO: Pod "pod-configmaps-5fa7e922-21c7-46d4-95d7-a2bdd2c92301": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077094324s
Feb 15 14:43:25.921: INFO: Pod "pod-configmaps-5fa7e922-21c7-46d4-95d7-a2bdd2c92301": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084971425s
STEP: Saw pod success
Feb 15 14:43:25.922: INFO: Pod "pod-configmaps-5fa7e922-21c7-46d4-95d7-a2bdd2c92301" satisfied condition "success or failure"
Feb 15 14:43:25.925: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5fa7e922-21c7-46d4-95d7-a2bdd2c92301 container env-test: 
STEP: delete the pod
Feb 15 14:43:27.195: INFO: Waiting for pod pod-configmaps-5fa7e922-21c7-46d4-95d7-a2bdd2c92301 to disappear
Feb 15 14:43:27.200: INFO: Pod pod-configmaps-5fa7e922-21c7-46d4-95d7-a2bdd2c92301 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:43:27.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3773" for this suite.
Feb 15 14:43:33.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:43:33.398: INFO: namespace configmap-3773 deletion completed in 6.1910939s

• [SLOW TEST:17.686 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
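Consuming a ConfigMap through the environment, as this spec does, pairs a ConfigMap with a configMapKeyRef; the sketch below is illustrative (the ConfigMap name, key, value, and pod details are all assumptions, not values from this run):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-env-demo       # assumed name
data:
  data-1: value-1                # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env        # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox               # assumed image
    command: ["sh", "-c", "echo $CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1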
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:43:33.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 15 14:43:33.530: INFO: Waiting up to 5m0s for pod "pod-4a33f596-fccd-45e4-b0ad-f8f55ff2c151" in namespace "emptydir-9781" to be "success or failure"
Feb 15 14:43:33.559: INFO: Pod "pod-4a33f596-fccd-45e4-b0ad-f8f55ff2c151": Phase="Pending", Reason="", readiness=false. Elapsed: 28.622784ms
Feb 15 14:43:35.566: INFO: Pod "pod-4a33f596-fccd-45e4-b0ad-f8f55ff2c151": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03529029s
Feb 15 14:43:37.609: INFO: Pod "pod-4a33f596-fccd-45e4-b0ad-f8f55ff2c151": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078733151s
Feb 15 14:43:39.617: INFO: Pod "pod-4a33f596-fccd-45e4-b0ad-f8f55ff2c151": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086819427s
Feb 15 14:43:41.625: INFO: Pod "pod-4a33f596-fccd-45e4-b0ad-f8f55ff2c151": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094797883s
STEP: Saw pod success
Feb 15 14:43:41.626: INFO: Pod "pod-4a33f596-fccd-45e4-b0ad-f8f55ff2c151" satisfied condition "success or failure"
Feb 15 14:43:41.629: INFO: Trying to get logs from node iruya-node pod pod-4a33f596-fccd-45e4-b0ad-f8f55ff2c151 container test-container: 
STEP: delete the pod
Feb 15 14:43:41.728: INFO: Waiting for pod pod-4a33f596-fccd-45e4-b0ad-f8f55ff2c151 to disappear
Feb 15 14:43:41.734: INFO: Pod pod-4a33f596-fccd-45e4-b0ad-f8f55ff2c151 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:43:41.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9781" for this suite.
Feb 15 14:43:47.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:43:47.982: INFO: namespace emptydir-9781 deletion completed in 6.243148757s

• [SLOW TEST:14.585 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
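This variant differs from the tmpfs case above only in the medium: an empty emptyDir stanza is backed by node-local storage. A minimal sketch (names, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-default     # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox               # assumed image
    command: ["sh", "-c", "touch /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium: node disk rather than tmpfs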
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:43:47.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-4d6e71ca-d1cc-4999-a189-f3bbc35b365b
STEP: Creating a pod to test consume secrets
Feb 15 14:43:48.121: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c23fc159-f00c-43f3-bdb0-820ad81fbb44" in namespace "projected-3355" to be "success or failure"
Feb 15 14:43:48.128: INFO: Pod "pod-projected-secrets-c23fc159-f00c-43f3-bdb0-820ad81fbb44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425243ms
Feb 15 14:43:50.136: INFO: Pod "pod-projected-secrets-c23fc159-f00c-43f3-bdb0-820ad81fbb44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014298511s
Feb 15 14:43:52.148: INFO: Pod "pod-projected-secrets-c23fc159-f00c-43f3-bdb0-820ad81fbb44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02704695s
Feb 15 14:43:54.162: INFO: Pod "pod-projected-secrets-c23fc159-f00c-43f3-bdb0-820ad81fbb44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040560814s
Feb 15 14:43:56.173: INFO: Pod "pod-projected-secrets-c23fc159-f00c-43f3-bdb0-820ad81fbb44": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05155334s
Feb 15 14:43:58.656: INFO: Pod "pod-projected-secrets-c23fc159-f00c-43f3-bdb0-820ad81fbb44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.535154061s
STEP: Saw pod success
Feb 15 14:43:58.657: INFO: Pod "pod-projected-secrets-c23fc159-f00c-43f3-bdb0-820ad81fbb44" satisfied condition "success or failure"
Feb 15 14:43:58.662: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c23fc159-f00c-43f3-bdb0-820ad81fbb44 container projected-secret-volume-test: 
STEP: delete the pod
Feb 15 14:43:58.740: INFO: Waiting for pod pod-projected-secrets-c23fc159-f00c-43f3-bdb0-820ad81fbb44 to disappear
Feb 15 14:43:58.743: INFO: Pod pod-projected-secrets-c23fc159-f00c-43f3-bdb0-820ad81fbb44 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:43:58.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3355" for this suite.
Feb 15 14:44:04.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:44:04.970: INFO: namespace projected-3355 deletion completed in 6.222741043s

• [SLOW TEST:16.987 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
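The projected variant wraps the same key-to-path mapping and per-item mode inside a projected volume's sources list; a sketch follows (the key, paths, image, and mode are assumptions; the secret name comes from this run):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secret     # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox               # assumed image
    command: ["sh", "-c", "stat -c '%a' /etc/projected/new-path"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-4d6e71ca-d1cc-4999-a189-f3bbc35b365b
          items:
          - key: data-1          # assumed key
            path: new-path
            mode: 0400           # per-item mode, as in the non-projected test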
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:44:04.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb 15 14:44:05.029: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 15 14:44:05.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2603'
Feb 15 14:44:05.503: INFO: stderr: ""
Feb 15 14:44:05.503: INFO: stdout: "service/redis-slave created\n"
Feb 15 14:44:05.504: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 15 14:44:05.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2603'
Feb 15 14:44:05.987: INFO: stderr: ""
Feb 15 14:44:05.988: INFO: stdout: "service/redis-master created\n"
Feb 15 14:44:05.988: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 15 14:44:05.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2603'
Feb 15 14:44:06.381: INFO: stderr: ""
Feb 15 14:44:06.381: INFO: stdout: "service/frontend created\n"
Feb 15 14:44:06.383: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 15 14:44:06.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2603'
Feb 15 14:44:06.670: INFO: stderr: ""
Feb 15 14:44:06.670: INFO: stdout: "deployment.apps/frontend created\n"
Feb 15 14:44:06.670: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 15 14:44:06.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2603'
Feb 15 14:44:07.282: INFO: stderr: ""
Feb 15 14:44:07.283: INFO: stdout: "deployment.apps/redis-master created\n"
Feb 15 14:44:07.284: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 15 14:44:07.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2603'
Feb 15 14:44:08.711: INFO: stderr: ""
Feb 15 14:44:08.711: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb 15 14:44:08.711: INFO: Waiting for all frontend pods to be Running.
Feb 15 14:44:28.765: INFO: Waiting for frontend to serve content.
Feb 15 14:44:28.902: INFO: Trying to add a new entry to the guestbook.
Feb 15 14:44:29.128: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 15 14:44:29.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2603'
Feb 15 14:44:29.388: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 15 14:44:29.388: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 15 14:44:29.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2603'
Feb 15 14:44:29.581: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 15 14:44:29.581: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 15 14:44:29.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2603'
Feb 15 14:44:29.785: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 15 14:44:29.786: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 15 14:44:29.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2603'
Feb 15 14:44:29.901: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 15 14:44:29.901: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 15 14:44:29.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2603'
Feb 15 14:44:30.122: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 15 14:44:30.122: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 15 14:44:30.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2603'
Feb 15 14:44:30.414: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 15 14:44:30.415: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:44:30.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2603" for this suite.
Feb 15 14:45:10.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:45:10.761: INFO: namespace kubectl-2603 deletion completed in 40.33180677s

• [SLOW TEST:65.790 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:45:10.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 15 14:45:10.825: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 15 14:45:10.859: INFO: Waiting for terminating namespaces to be deleted...
Feb 15 14:45:10.861: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Feb 15 14:45:10.926: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 15 14:45:10.927: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 15 14:45:10.927: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 15 14:45:10.927: INFO: 	Container weave ready: true, restart count 0
Feb 15 14:45:10.927: INFO: 	Container weave-npc ready: true, restart count 0
Feb 15 14:45:10.927: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 15 14:45:10.927: INFO: 	Container kube-bench ready: false, restart count 0
Feb 15 14:45:10.927: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 15 14:45:10.941: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 15 14:45:10.941: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 15 14:45:10.941: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 15 14:45:10.941: INFO: 	Container coredns ready: true, restart count 0
Feb 15 14:45:10.941: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 15 14:45:10.941: INFO: 	Container etcd ready: true, restart count 0
Feb 15 14:45:10.941: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 15 14:45:10.941: INFO: 	Container weave ready: true, restart count 0
Feb 15 14:45:10.941: INFO: 	Container weave-npc ready: true, restart count 0
Feb 15 14:45:10.941: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 15 14:45:10.941: INFO: 	Container coredns ready: true, restart count 0
Feb 15 14:45:10.941: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 15 14:45:10.941: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 15 14:45:10.941: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 15 14:45:10.941: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 15 14:45:10.941: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 15 14:45:10.941: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f8dddfa3-761c-44d4-8da2-9586c4b4b2b6 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-f8dddfa3-761c-44d4-8da2-9586c4b4b2b6 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f8dddfa3-761c-44d4-8da2-9586c4b4b2b6
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:45:29.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2172" for this suite.
Feb 15 14:45:49.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:45:49.334: INFO: namespace sched-pred-2172 deletion completed in 20.18996944s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:38.573 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
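The spec labels a node and relaunches the pod with a matching nodeSelector; the relaunched pod might look like the sketch below (the label key and value are taken from this run; the pod name and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: with-labels              # assumed name
spec:
  nodeSelector:
    kubernetes.io/e2e-f8dddfa3-761c-44d4-8da2-9586c4b4b2b6: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1  # assumed image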
SS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:45:49.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 15 14:45:49.405: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:46:06.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7543" for this suite.
Feb 15 14:46:12.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:46:12.893: INFO: namespace pods-7543 deletion completed in 6.262435145s

• [SLOW TEST:23.559 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
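The pod itself is unremarkable; the spec's substance is watching creation and graceful deletion through the API. A sketch of such a pod with an explicit grace period (all names and values are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove        # assumed name
  labels:
    test: submit-remove          # assumed label, handy for a watch with a label selector
spec:
  terminationGracePeriodSeconds: 30   # window the kubelet honors before force-killing
  containers:
  - name: nginx
    image: nginx                 # assumed image
# A graceful delete (e.g. kubectl delete pod pod-submit-remove) first sets a
# deletionTimestamp; the object disappears once the kubelet confirms shutdown.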
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:46:12.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-2vzj
STEP: Creating a pod to test atomic-volume-subpath
Feb 15 14:46:12.999: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2vzj" in namespace "subpath-6733" to be "success or failure"
Feb 15 14:46:13.059: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Pending", Reason="", readiness=false. Elapsed: 60.087698ms
Feb 15 14:46:15.071: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071653415s
Feb 15 14:46:17.082: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082250618s
Feb 15 14:46:19.088: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089008124s
Feb 15 14:46:21.095: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Running", Reason="", readiness=true. Elapsed: 8.095333261s
Feb 15 14:46:23.117: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Running", Reason="", readiness=true. Elapsed: 10.117632926s
Feb 15 14:46:25.146: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Running", Reason="", readiness=true. Elapsed: 12.146348654s
Feb 15 14:46:27.156: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Running", Reason="", readiness=true. Elapsed: 14.15702188s
Feb 15 14:46:29.167: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Running", Reason="", readiness=true. Elapsed: 16.167549519s
Feb 15 14:46:31.180: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Running", Reason="", readiness=true. Elapsed: 18.18115782s
Feb 15 14:46:33.192: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Running", Reason="", readiness=true. Elapsed: 20.193087686s
Feb 15 14:46:35.200: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Running", Reason="", readiness=true. Elapsed: 22.200215633s
Feb 15 14:46:37.207: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Running", Reason="", readiness=true. Elapsed: 24.20730296s
Feb 15 14:46:39.215: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Running", Reason="", readiness=true. Elapsed: 26.215816168s
Feb 15 14:46:41.230: INFO: Pod "pod-subpath-test-secret-2vzj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.230483564s
STEP: Saw pod success
Feb 15 14:46:41.230: INFO: Pod "pod-subpath-test-secret-2vzj" satisfied condition "success or failure"
Feb 15 14:46:41.234: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-2vzj container test-container-subpath-secret-2vzj: 
STEP: delete the pod
Feb 15 14:46:41.320: INFO: Waiting for pod pod-subpath-test-secret-2vzj to disappear
Feb 15 14:46:41.345: INFO: Pod pod-subpath-test-secret-2vzj no longer exists
STEP: Deleting pod pod-subpath-test-secret-2vzj
Feb 15 14:46:41.345: INFO: Deleting pod "pod-subpath-test-secret-2vzj" in namespace "subpath-6733"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:46:41.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6733" for this suite.
Feb 15 14:46:47.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:46:47.611: INFO: namespace subpath-6733 deletion completed in 6.251650084s

• [SLOW TEST:34.717 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
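Mounting a single entry of a secret volume uses subPath on the volumeMount; a minimal sketch (the key, mount path, image, and secret name are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-secret       # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox               # assumed image
    command: ["sh", "-c", "cat /test/file.txt"]
    volumeMounts:
    - name: secret-vol
      mountPath: /test/file.txt
      subPath: data-1            # mount just this key of the secret volume
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret      # assumed secret name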
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:46:47.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb 15 14:46:47.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 15 14:46:47.996: INFO: stderr: ""
Feb 15 14:46:47.996: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:46:47.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-452" for this suite.
Feb 15 14:46:54.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:46:54.149: INFO: namespace kubectl-452 deletion completed in 6.131568831s

• [SLOW TEST:6.538 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
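Editor's note: the check above reduces to scanning the api-versions list for the core group. A minimal stand-alone equivalent, assuming the same kubeconfig, is:

kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1   # exits non-zero if "v1" is not served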
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:46:54.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 15 14:46:54.276: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a69797b3-e8cb-4d2f-83b8-cc33bc58b79b" in namespace "downward-api-3296" to be "success or failure"
Feb 15 14:46:54.299: INFO: Pod "downwardapi-volume-a69797b3-e8cb-4d2f-83b8-cc33bc58b79b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.26413ms
Feb 15 14:46:56.310: INFO: Pod "downwardapi-volume-a69797b3-e8cb-4d2f-83b8-cc33bc58b79b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033569485s
Feb 15 14:46:58.325: INFO: Pod "downwardapi-volume-a69797b3-e8cb-4d2f-83b8-cc33bc58b79b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048107934s
Feb 15 14:47:00.335: INFO: Pod "downwardapi-volume-a69797b3-e8cb-4d2f-83b8-cc33bc58b79b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058075266s
Feb 15 14:47:02.348: INFO: Pod "downwardapi-volume-a69797b3-e8cb-4d2f-83b8-cc33bc58b79b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071149346s
STEP: Saw pod success
Feb 15 14:47:02.348: INFO: Pod "downwardapi-volume-a69797b3-e8cb-4d2f-83b8-cc33bc58b79b" satisfied condition "success or failure"
Feb 15 14:47:02.354: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a69797b3-e8cb-4d2f-83b8-cc33bc58b79b container client-container: 
STEP: delete the pod
Feb 15 14:47:02.505: INFO: Waiting for pod downwardapi-volume-a69797b3-e8cb-4d2f-83b8-cc33bc58b79b to disappear
Feb 15 14:47:02.514: INFO: Pod downwardapi-volume-a69797b3-e8cb-4d2f-83b8-cc33bc58b79b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:47:02.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3296" for this suite.
Feb 15 14:47:08.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:47:08.725: INFO: namespace downward-api-3296 deletion completed in 6.202546724s

• [SLOW TEST:14.576 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
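Editor's note: the feature under test is a per-item file mode on a downward API volume. A minimal sketch (pod name, image, and the 0400 mode are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400        # per-item mode; the test asserts the file ends up with this mode
EOF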
S
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:47:08.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5784.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5784.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 15 14:47:24.233: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5784/dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b: the server could not find the requested resource (get pods dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b)
Feb 15 14:47:24.237: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5784/dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b: the server could not find the requested resource (get pods dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b)
Feb 15 14:47:24.246: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-5784/dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b: the server could not find the requested resource (get pods dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b)
Feb 15 14:47:24.252: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-5784/dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b: the server could not find the requested resource (get pods dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b)
Feb 15 14:47:24.258: INFO: Unable to read jessie_udp@PodARecord from pod dns-5784/dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b: the server could not find the requested resource (get pods dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b)
Feb 15 14:47:24.262: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5784/dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b: the server could not find the requested resource (get pods dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b)
Feb 15 14:47:24.262: INFO: Lookups using dns-5784/dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 15 14:47:29.345: INFO: DNS probes using dns-5784/dns-test-39899391-c036-4a2b-8093-3cb0bcc9d83b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:47:29.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5784" for this suite.
Feb 15 14:47:35.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:47:35.650: INFO: namespace dns-5784 deletion completed in 6.160851788s

• [SLOW TEST:26.925 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
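Editor's note: the probe pods above loop dig over UDP and TCP and write OK marker files. A rough one-off equivalent from outside the harness, assuming busybox's nslookup is an acceptable stand-in for dig, is:

kubectl run dns-check --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local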
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:47:35.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 15 14:47:35.749: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb62748a-c5f2-44a8-a732-ed471b19ed7e" in namespace "downward-api-5536" to be "success or failure"
Feb 15 14:47:35.816: INFO: Pod "downwardapi-volume-fb62748a-c5f2-44a8-a732-ed471b19ed7e": Phase="Pending", Reason="", readiness=false. Elapsed: 66.501461ms
Feb 15 14:47:37.829: INFO: Pod "downwardapi-volume-fb62748a-c5f2-44a8-a732-ed471b19ed7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07928092s
Feb 15 14:47:39.839: INFO: Pod "downwardapi-volume-fb62748a-c5f2-44a8-a732-ed471b19ed7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089727213s
Feb 15 14:47:41.858: INFO: Pod "downwardapi-volume-fb62748a-c5f2-44a8-a732-ed471b19ed7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108560552s
Feb 15 14:47:43.877: INFO: Pod "downwardapi-volume-fb62748a-c5f2-44a8-a732-ed471b19ed7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.127343311s
STEP: Saw pod success
Feb 15 14:47:43.877: INFO: Pod "downwardapi-volume-fb62748a-c5f2-44a8-a732-ed471b19ed7e" satisfied condition "success or failure"
Feb 15 14:47:43.895: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-fb62748a-c5f2-44a8-a732-ed471b19ed7e container client-container: 
STEP: delete the pod
Feb 15 14:47:43.969: INFO: Waiting for pod downwardapi-volume-fb62748a-c5f2-44a8-a732-ed471b19ed7e to disappear
Feb 15 14:47:43.975: INFO: Pod downwardapi-volume-fb62748a-c5f2-44a8-a732-ed471b19ed7e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:47:43.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5536" for this suite.
Feb 15 14:47:50.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:47:50.259: INFO: namespace downward-api-5536 deletion completed in 6.276706161s

• [SLOW TEST:14.609 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
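Editor's note: the mechanism exercised here is a downwardAPI volume item with a resourceFieldRef pointing at the container's memory limit. A minimal sketch (names and the 64Mi limit are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # surfaces the container's memory limit as file content
EOF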
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:47:50.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 15 14:47:50.551: INFO: Number of nodes with available pods: 0
Feb 15 14:47:50.551: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:47:52.048: INFO: Number of nodes with available pods: 0
Feb 15 14:47:52.048: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:47:52.593: INFO: Number of nodes with available pods: 0
Feb 15 14:47:52.593: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:47:56.032: INFO: Number of nodes with available pods: 0
Feb 15 14:47:56.032: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:47:56.572: INFO: Number of nodes with available pods: 0
Feb 15 14:47:56.572: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:47:57.569: INFO: Number of nodes with available pods: 0
Feb 15 14:47:57.569: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:47:59.039: INFO: Number of nodes with available pods: 0
Feb 15 14:47:59.039: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:47:59.586: INFO: Number of nodes with available pods: 0
Feb 15 14:47:59.586: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:00.598: INFO: Number of nodes with available pods: 0
Feb 15 14:48:00.598: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:01.567: INFO: Number of nodes with available pods: 0
Feb 15 14:48:01.567: INFO: Node iruya-node is not yet running exactly one daemon pod

Feb 15 14:48:02.578: INFO: Number of nodes with available pods: 2
Feb 15 14:48:02.578: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 15 14:48:02.633: INFO: Number of nodes with available pods: 1
Feb 15 14:48:02.633: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:03.661: INFO: Number of nodes with available pods: 1
Feb 15 14:48:03.661: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:04.650: INFO: Number of nodes with available pods: 1
Feb 15 14:48:04.650: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:05.654: INFO: Number of nodes with available pods: 1
Feb 15 14:48:05.654: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:06.658: INFO: Number of nodes with available pods: 1
Feb 15 14:48:06.659: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:07.652: INFO: Number of nodes with available pods: 1
Feb 15 14:48:07.652: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:08.655: INFO: Number of nodes with available pods: 1
Feb 15 14:48:08.655: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:09.650: INFO: Number of nodes with available pods: 1
Feb 15 14:48:09.650: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:10.656: INFO: Number of nodes with available pods: 1
Feb 15 14:48:10.656: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:11.646: INFO: Number of nodes with available pods: 1
Feb 15 14:48:11.646: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:12.649: INFO: Number of nodes with available pods: 1
Feb 15 14:48:12.649: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:13.673: INFO: Number of nodes with available pods: 1
Feb 15 14:48:13.673: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:14.650: INFO: Number of nodes with available pods: 1
Feb 15 14:48:14.650: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:15.650: INFO: Number of nodes with available pods: 1
Feb 15 14:48:15.651: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:16.666: INFO: Number of nodes with available pods: 1
Feb 15 14:48:16.666: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:17.653: INFO: Number of nodes with available pods: 1
Feb 15 14:48:17.653: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:18.655: INFO: Number of nodes with available pods: 1
Feb 15 14:48:18.656: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:19.653: INFO: Number of nodes with available pods: 1
Feb 15 14:48:19.653: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:23.073: INFO: Number of nodes with available pods: 1
Feb 15 14:48:23.073: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:23.654: INFO: Number of nodes with available pods: 1
Feb 15 14:48:23.654: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:24.652: INFO: Number of nodes with available pods: 1
Feb 15 14:48:24.652: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:25.655: INFO: Number of nodes with available pods: 1
Feb 15 14:48:25.655: INFO: Node iruya-node is not yet running exactly one daemon pod
Feb 15 14:48:26.671: INFO: Number of nodes with available pods: 2
Feb 15 14:48:26.671: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5798, will wait for the garbage collector to delete the pods
Feb 15 14:48:26.759: INFO: Deleting DaemonSet.extensions daemon-set took: 16.70673ms
Feb 15 14:48:27.059: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.702233ms
Feb 15 14:48:37.892: INFO: Number of nodes with available pods: 0
Feb 15 14:48:37.892: INFO: Number of running nodes: 0, number of available pods: 0
Feb 15 14:48:37.898: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5798/daemonsets","resourceVersion":"24462243"},"items":null}

Feb 15 14:48:37.902: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5798/pods","resourceVersion":"24462243"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:48:37.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5798" for this suite.
Feb 15 14:48:43.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:48:44.033: INFO: namespace daemonsets-5798 deletion completed in 6.108461158s

• [SLOW TEST:53.773 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
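Editor's note: the DaemonSet itself is simple; what the test adds is deleting one daemon pod and verifying the controller revives it. A minimal sketch (name and label are illustrative, the image matches the era of this run):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Delete one daemon pod; the controller should recreate it on the same node:
kubectl delete pod <one-of-the-daemon-set-pods>   # placeholder; pick a pod from "kubectl get pods -l app=daemon-set-demo"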
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:48:44.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 15 14:48:44.169: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"990eab2c-f7c2-4371-aad7-7140c3f09358", Controller:(*bool)(0xc0030ac832), BlockOwnerDeletion:(*bool)(0xc0030ac833)}}
Feb 15 14:48:44.210: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c605c1e5-a163-40f5-979e-1ad346f67449", Controller:(*bool)(0xc0030ac9ca), BlockOwnerDeletion:(*bool)(0xc0030ac9cb)}}
Feb 15 14:48:44.277: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"42f5b026-e943-4510-945f-340abcdc3377", Controller:(*bool)(0xc00326e4ba), BlockOwnerDeletion:(*bool)(0xc00326e4bb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:48:49.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7543" for this suite.
Feb 15 14:48:55.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:48:55.549: INFO: namespace gc-7543 deletion completed in 6.178882698s

• [SLOW TEST:11.515 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
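Editor's note: the circle above (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) is built from ordinary ownerReferences. One hedged way to reproduce a single link from the command line, assuming pods pod1 and pod3 already exist:

POD3_UID=$(kubectl get pod pod3 -o jsonpath='{.metadata.uid}')
kubectl patch pod pod1 --type=merge \
  -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"${POD3_UID}\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
# Repeating this for pod2 (owner pod1) and pod3 (owner pod2) closes the circle;
# the test then verifies the garbage collector still deletes all three instead of deadlocking.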
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:48:55.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8454
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 15 14:48:55.685: INFO: Found 0 stateful pods, waiting for 3
Feb 15 14:49:05.697: INFO: Found 2 stateful pods, waiting for 3
Feb 15 14:49:15.703: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 14:49:15.703: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 14:49:15.703: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 15 14:49:25.697: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 14:49:25.697: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 14:49:25.697: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 15 14:49:25.740: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 15 14:49:35.796: INFO: Updating stateful set ss2
Feb 15 14:49:36.409: INFO: Waiting for Pod statefulset-8454/ss2-2 (current revision ss2-6c5cd755cd) to reach update revision ss2-7c9b54fd4c
Feb 15 14:49:46.425: INFO: Waiting for Pod statefulset-8454/ss2-2 (current revision ss2-6c5cd755cd) to reach update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 15 14:49:56.774: INFO: Found 2 stateful pods, waiting for 3
Feb 15 14:50:06.803: INFO: Found 2 stateful pods, waiting for 3
Feb 15 14:50:16.805: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 14:50:16.805: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 14:50:16.805: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 15 14:50:26.802: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 14:50:26.803: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 14:50:26.803: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 15 14:50:26.884: INFO: Updating stateful set ss2
Feb 15 14:50:27.010: INFO: Waiting for Pod statefulset-8454/ss2-1 (current revision ss2-6c5cd755cd) to reach update revision ss2-7c9b54fd4c
Feb 15 14:50:37.511: INFO: Updating stateful set ss2
Feb 15 14:50:37.541: INFO: Waiting for StatefulSet statefulset-8454/ss2 to complete update
Feb 15 14:50:37.542: INFO: Waiting for Pod statefulset-8454/ss2-0 (current revision ss2-6c5cd755cd) to reach update revision ss2-7c9b54fd4c
Feb 15 14:50:47.558: INFO: Waiting for StatefulSet statefulset-8454/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 15 14:50:57.559: INFO: Deleting all statefulset in ns statefulset-8454
Feb 15 14:50:57.565: INFO: Scaling statefulset ss2 to 0
Feb 15 14:51:17.612: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 14:51:17.617: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:51:17.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8454" for this suite.
Feb 15 14:51:25.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:51:25.863: INFO: namespace statefulset-8454 deletion completed in 8.198035469s

• [SLOW TEST:150.313 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
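Editor's note: both the canary and the phased roll-out above are driven by the RollingUpdate partition: pods with ordinal >= partition get the new revision, the rest stay on the old one. Assuming the StatefulSet from this run still existed, the equivalent manual steps would be roughly:

# Canary: only the highest ordinal (ss2-2) picks up the new template
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
# Phased roll-out: lower the partition step by step until it reaches 0
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'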
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:51:25.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 15 14:51:25.986: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 15 14:51:25.996: INFO: Waiting for terminating namespaces to be deleted...
Feb 15 14:51:26.002: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb 15 14:51:26.020: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container status recorded)
Feb 15 14:51:26.020: INFO: 	Container kube-bench ready: false, restart count 0
Feb 15 14:51:26.020: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb 15 14:51:26.020: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 15 14:51:26.020: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 15 14:51:26.020: INFO: 	Container weave ready: true, restart count 0
Feb 15 14:51:26.020: INFO: 	Container weave-npc ready: true, restart count 0
Feb 15 14:51:26.020: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb 15 14:51:26.032: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 15 14:51:26.032: INFO: 	Container weave ready: true, restart count 0
Feb 15 14:51:26.032: INFO: 	Container weave-npc ready: true, restart count 0
Feb 15 14:51:26.032: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb 15 14:51:26.032: INFO: 	Container coredns ready: true, restart count 0
Feb 15 14:51:26.032: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb 15 14:51:26.032: INFO: 	Container etcd ready: true, restart count 0
Feb 15 14:51:26.032: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Feb 15 14:51:26.032: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 15 14:51:26.032: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb 15 14:51:26.032: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 15 14:51:26.032: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Feb 15 14:51:26.032: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 15 14:51:26.032: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb 15 14:51:26.032: INFO: 	Container coredns ready: true, restart count 0
Feb 15 14:51:26.032: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb 15 14:51:26.032: INFO: 	Container kube-scheduler ready: true, restart count 13
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f39af875d5072a], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:51:27.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1958" for this suite.
Feb 15 14:51:33.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:51:33.183: INFO: namespace sched-pred-1958 deletion completed in 6.12047064s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.319 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
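Editor's note: the negative case is easy to reproduce by hand: any pod whose nodeSelector matches no node label stays Pending with a FailedScheduling event like the one above. A minimal sketch (label and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    no-such-label: "42"    # matches no node, so the pod can never schedule
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl describe pod restricted-pod-demo   # shows the FailedScheduling warning event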
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:51:33.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 15 14:51:33.284: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ebddd4e-e8c4-4c89-bbe5-5dcd63b0d91c" in namespace "projected-7562" to be "success or failure"
Feb 15 14:51:33.292: INFO: Pod "downwardapi-volume-4ebddd4e-e8c4-4c89-bbe5-5dcd63b0d91c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537608ms
Feb 15 14:51:35.363: INFO: Pod "downwardapi-volume-4ebddd4e-e8c4-4c89-bbe5-5dcd63b0d91c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079768568s
Feb 15 14:51:37.384: INFO: Pod "downwardapi-volume-4ebddd4e-e8c4-4c89-bbe5-5dcd63b0d91c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1001518s
Feb 15 14:51:39.394: INFO: Pod "downwardapi-volume-4ebddd4e-e8c4-4c89-bbe5-5dcd63b0d91c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110141709s
Feb 15 14:51:41.405: INFO: Pod "downwardapi-volume-4ebddd4e-e8c4-4c89-bbe5-5dcd63b0d91c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121764218s
Feb 15 14:51:43.417: INFO: Pod "downwardapi-volume-4ebddd4e-e8c4-4c89-bbe5-5dcd63b0d91c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.132881704s
STEP: Saw pod success
Feb 15 14:51:43.417: INFO: Pod "downwardapi-volume-4ebddd4e-e8c4-4c89-bbe5-5dcd63b0d91c" satisfied condition "success or failure"
Feb 15 14:51:43.424: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4ebddd4e-e8c4-4c89-bbe5-5dcd63b0d91c container client-container: 
STEP: delete the pod
Feb 15 14:51:43.504: INFO: Waiting for pod downwardapi-volume-4ebddd4e-e8c4-4c89-bbe5-5dcd63b0d91c to disappear
Feb 15 14:51:43.552: INFO: Pod downwardapi-volume-4ebddd4e-e8c4-4c89-bbe5-5dcd63b0d91c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:51:43.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7562" for this suite.
Feb 15 14:51:49.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:51:49.763: INFO: namespace projected-7562 deletion completed in 6.198011686s

• [SLOW TEST:16.578 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
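Editor's note: here the container deliberately sets no CPU limit, so the projected downwardAPI item falls back to the node's allocatable CPU, which is what the test asserts. A minimal sketch (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpulimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]   # no resources.limits.cpu set on purpose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu   # with no limit set, this reports node allocatable CPU
EOF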
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:51:49.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 15 14:51:49.927: INFO: Waiting up to 5m0s for pod "downwardapi-volume-723fb898-1021-49c0-9a32-9eac69619687" in namespace "projected-406" to be "success or failure"
Feb 15 14:51:49.956: INFO: Pod "downwardapi-volume-723fb898-1021-49c0-9a32-9eac69619687": Phase="Pending", Reason="", readiness=false. Elapsed: 29.325756ms
Feb 15 14:51:51.974: INFO: Pod "downwardapi-volume-723fb898-1021-49c0-9a32-9eac69619687": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047207849s
Feb 15 14:51:53.994: INFO: Pod "downwardapi-volume-723fb898-1021-49c0-9a32-9eac69619687": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067640231s
Feb 15 14:51:56.003: INFO: Pod "downwardapi-volume-723fb898-1021-49c0-9a32-9eac69619687": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075928687s
Feb 15 14:51:58.011: INFO: Pod "downwardapi-volume-723fb898-1021-49c0-9a32-9eac69619687": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084364888s
Feb 15 14:52:00.017: INFO: Pod "downwardapi-volume-723fb898-1021-49c0-9a32-9eac69619687": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090401365s
STEP: Saw pod success
Feb 15 14:52:00.017: INFO: Pod "downwardapi-volume-723fb898-1021-49c0-9a32-9eac69619687" satisfied condition "success or failure"
Feb 15 14:52:00.021: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-723fb898-1021-49c0-9a32-9eac69619687 container client-container: 
STEP: delete the pod
Feb 15 14:52:00.063: INFO: Waiting for pod downwardapi-volume-723fb898-1021-49c0-9a32-9eac69619687 to disappear
Feb 15 14:52:00.070: INFO: Pod downwardapi-volume-723fb898-1021-49c0-9a32-9eac69619687 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:52:00.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-406" for this suite.
Feb 15 14:52:06.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:52:06.245: INFO: namespace projected-406 deletion completed in 6.167876395s

• [SLOW TEST:16.482 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
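Editor's note: defaultMode differs from the per-item mode tested earlier in this run; it applies to every file in the volume that does not override it. A minimal sketch (the 0400 value is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400    # applied to every projected file without its own mode
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF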
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:52:06.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-5b283e68-e6b2-416f-bbe9-c1a5a2b4f0d6
STEP: Creating a pod to test consume configMaps
Feb 15 14:52:06.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d36e260-71e1-4896-89d8-51265c755954" in namespace "configmap-312" to be "success or failure"
Feb 15 14:52:06.386: INFO: Pod "pod-configmaps-1d36e260-71e1-4896-89d8-51265c755954": Phase="Pending", Reason="", readiness=false. Elapsed: 11.98542ms
Feb 15 14:52:08.401: INFO: Pod "pod-configmaps-1d36e260-71e1-4896-89d8-51265c755954": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026150132s
Feb 15 14:52:10.425: INFO: Pod "pod-configmaps-1d36e260-71e1-4896-89d8-51265c755954": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050798253s
Feb 15 14:52:12.432: INFO: Pod "pod-configmaps-1d36e260-71e1-4896-89d8-51265c755954": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057983428s
Feb 15 14:52:14.442: INFO: Pod "pod-configmaps-1d36e260-71e1-4896-89d8-51265c755954": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06782092s
Feb 15 14:52:16.454: INFO: Pod "pod-configmaps-1d36e260-71e1-4896-89d8-51265c755954": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079182415s
STEP: Saw pod success
Feb 15 14:52:16.454: INFO: Pod "pod-configmaps-1d36e260-71e1-4896-89d8-51265c755954" satisfied condition "success or failure"
Feb 15 14:52:16.460: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1d36e260-71e1-4896-89d8-51265c755954 container configmap-volume-test: 
STEP: delete the pod
Feb 15 14:52:16.811: INFO: Waiting for pod pod-configmaps-1d36e260-71e1-4896-89d8-51265c755954 to disappear
Feb 15 14:52:16.818: INFO: Pod pod-configmaps-1d36e260-71e1-4896-89d8-51265c755954 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:52:16.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-312" for this suite.
Feb 15 14:52:22.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:52:23.010: INFO: namespace configmap-312 deletion completed in 6.183750938s

• [SLOW TEST:16.764 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
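Editor's note: "mappings" here means the items list, which renames ConfigMap keys to arbitrary relative paths inside the volume; the pod additionally runs as a non-root UID. A minimal sketch (names, UID, and values are illustrative):

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # non-root, as in the test
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
      items:
      - key: data-1
        path: path/to/data-1   # the mapping: key data-1 appears at this relative path
EOF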
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:52:23.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb 15 14:52:23.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6075'
Feb 15 14:52:25.813: INFO: stderr: ""
Feb 15 14:52:25.813: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 15 14:52:25.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6075'
Feb 15 14:52:26.089: INFO: stderr: ""
Feb 15 14:52:26.089: INFO: stdout: "update-demo-nautilus-7jm8z update-demo-nautilus-sn25x "
Feb 15 14:52:26.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jm8z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6075'
Feb 15 14:52:26.253: INFO: stderr: ""
Feb 15 14:52:26.253: INFO: stdout: ""
Feb 15 14:52:26.253: INFO: update-demo-nautilus-7jm8z is created but not running
Feb 15 14:52:31.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6075'
Feb 15 14:52:31.853: INFO: stderr: ""
Feb 15 14:52:31.854: INFO: stdout: "update-demo-nautilus-7jm8z update-demo-nautilus-sn25x "
Feb 15 14:52:31.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jm8z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6075'
Feb 15 14:52:32.515: INFO: stderr: ""
Feb 15 14:52:32.515: INFO: stdout: ""
Feb 15 14:52:32.515: INFO: update-demo-nautilus-7jm8z is created but not running
Feb 15 14:52:37.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6075'
Feb 15 14:52:37.645: INFO: stderr: ""
Feb 15 14:52:37.645: INFO: stdout: "update-demo-nautilus-7jm8z update-demo-nautilus-sn25x "
Feb 15 14:52:37.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jm8z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6075'
Feb 15 14:52:37.774: INFO: stderr: ""
Feb 15 14:52:37.774: INFO: stdout: ""
Feb 15 14:52:37.774: INFO: update-demo-nautilus-7jm8z is created but not running
Feb 15 14:52:42.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6075'
Feb 15 14:52:42.925: INFO: stderr: ""
Feb 15 14:52:42.925: INFO: stdout: "update-demo-nautilus-7jm8z update-demo-nautilus-sn25x "
Feb 15 14:52:42.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jm8z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6075'
Feb 15 14:52:43.034: INFO: stderr: ""
Feb 15 14:52:43.035: INFO: stdout: "true"
Feb 15 14:52:43.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jm8z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6075'
Feb 15 14:52:43.154: INFO: stderr: ""
Feb 15 14:52:43.154: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 15 14:52:43.154: INFO: validating pod update-demo-nautilus-7jm8z
Feb 15 14:52:43.166: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 15 14:52:43.166: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 15 14:52:43.166: INFO: update-demo-nautilus-7jm8z is verified up and running
Feb 15 14:52:43.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sn25x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6075'
Feb 15 14:52:43.236: INFO: stderr: ""
Feb 15 14:52:43.237: INFO: stdout: "true"
Feb 15 14:52:43.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sn25x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6075'
Feb 15 14:52:43.312: INFO: stderr: ""
Feb 15 14:52:43.312: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 15 14:52:43.312: INFO: validating pod update-demo-nautilus-sn25x
Feb 15 14:52:43.320: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 15 14:52:43.320: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 15 14:52:43.320: INFO: update-demo-nautilus-sn25x is verified up and running
STEP: rolling-update to new replication controller
Feb 15 14:52:43.322: INFO: scanned /root for discovery docs: 
Feb 15 14:52:43.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6075'
Feb 15 14:53:15.304: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 15 14:53:15.304: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 15 14:53:15.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6075'
Feb 15 14:53:15.430: INFO: stderr: ""
Feb 15 14:53:15.430: INFO: stdout: "update-demo-kitten-vnmww update-demo-kitten-zkgvg update-demo-nautilus-sn25x "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb 15 14:53:20.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6075'
Feb 15 14:53:20.581: INFO: stderr: ""
Feb 15 14:53:20.581: INFO: stdout: "update-demo-kitten-vnmww update-demo-kitten-zkgvg "
Feb 15 14:53:20.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vnmww -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6075'
Feb 15 14:53:20.725: INFO: stderr: ""
Feb 15 14:53:20.726: INFO: stdout: "true"
Feb 15 14:53:20.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vnmww -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6075'
Feb 15 14:53:20.826: INFO: stderr: ""
Feb 15 14:53:20.826: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 15 14:53:20.826: INFO: validating pod update-demo-kitten-vnmww
Feb 15 14:53:20.849: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 15 14:53:20.849: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb 15 14:53:20.849: INFO: update-demo-kitten-vnmww is verified up and running
Feb 15 14:53:20.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zkgvg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6075'
Feb 15 14:53:20.957: INFO: stderr: ""
Feb 15 14:53:20.957: INFO: stdout: "true"
Feb 15 14:53:20.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zkgvg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6075'
Feb 15 14:53:21.101: INFO: stderr: ""
Feb 15 14:53:21.101: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 15 14:53:21.101: INFO: validating pod update-demo-kitten-zkgvg
Feb 15 14:53:21.124: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 15 14:53:21.124: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb 15 14:53:21.124: INFO: update-demo-kitten-zkgvg is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:53:21.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6075" for this suite.
Feb 15 14:53:47.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:53:47.273: INFO: namespace kubectl-6075 deletion completed in 26.144821526s

• [SLOW TEST:84.263 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
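Editor's note: as the stderr above says, kubectl rolling-update is deprecated and only works for replication controllers. With a Deployment (the deployment and container names here are hypothetical, not from this run), the modern equivalent of the image swap plus wait would be roughly:

kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo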
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:53:47.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 15 14:54:05.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:05.788: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:07.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:09.288: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:09.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:09.801: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:11.789: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:11.804: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:13.789: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:13.809: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:15.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:15.806: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:17.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:17.806: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:19.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:19.803: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:21.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:21.815: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:23.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:23.797: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:25.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:25.804: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:27.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:27.798: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:29.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:29.797: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:31.789: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:31.817: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:33.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:33.808: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:35.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:35.803: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 15 14:54:37.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 15 14:54:37.801: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:54:37.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8224" for this suite.
Feb 15 14:54:59.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:55:00.023: INFO: namespace container-lifecycle-hook-8224 deletion completed in 22.152256169s

• [SLOW TEST:72.749 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
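The disappear-poll above is graceful deletion at work: the pod object stays visible until the preStop hook has run and the container has exited. A minimal sketch of a pod with a preStop exec hook, assuming a generic busybox image and an illustrative hook command (the suite uses its own test image plus a separate handler pod, created earlier, to record that the hook fired):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop ran"]
EOF

Deleting the pod runs the hook before the container is signaled to stop; the poststart spec later in this log uses the same shape with lifecycle.postStart in place of lifecycle.preStop.
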
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:55:00.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-3b4fb0d8-bdd1-4b41-a921-95e47fd89354
STEP: Creating a pod to test consume configMaps
Feb 15 14:55:00.200: INFO: Waiting up to 5m0s for pod "pod-configmaps-22520d38-5797-44b1-976c-837feb953cfb" in namespace "configmap-282" to be "success or failure"
Feb 15 14:55:00.210: INFO: Pod "pod-configmaps-22520d38-5797-44b1-976c-837feb953cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.616094ms
Feb 15 14:55:02.216: INFO: Pod "pod-configmaps-22520d38-5797-44b1-976c-837feb953cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015491836s
Feb 15 14:55:04.222: INFO: Pod "pod-configmaps-22520d38-5797-44b1-976c-837feb953cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021814522s
Feb 15 14:55:06.234: INFO: Pod "pod-configmaps-22520d38-5797-44b1-976c-837feb953cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033947129s
Feb 15 14:55:08.243: INFO: Pod "pod-configmaps-22520d38-5797-44b1-976c-837feb953cfb": Phase="Running", Reason="", readiness=true. Elapsed: 8.043073703s
Feb 15 14:55:10.252: INFO: Pod "pod-configmaps-22520d38-5797-44b1-976c-837feb953cfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.0515715s
STEP: Saw pod success
Feb 15 14:55:10.252: INFO: Pod "pod-configmaps-22520d38-5797-44b1-976c-837feb953cfb" satisfied condition "success or failure"
Feb 15 14:55:10.256: INFO: Trying to get logs from node iruya-node pod pod-configmaps-22520d38-5797-44b1-976c-837feb953cfb container configmap-volume-test: 
STEP: delete the pod
Feb 15 14:55:10.363: INFO: Waiting for pod pod-configmaps-22520d38-5797-44b1-976c-837feb953cfb to disappear
Feb 15 14:55:10.370: INFO: Pod pod-configmaps-22520d38-5797-44b1-976c-837feb953cfb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:55:10.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-282" for this suite.
Feb 15 14:55:16.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:55:16.734: INFO: namespace configmap-282 deletion completed in 6.357167071s

• [SLOW TEST:16.711 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
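Mounting one ConfigMap through two volumes in a single pod needs no special support, just two volume entries that reference the same ConfigMap. A minimal sketch with hypothetical names in place of the generated ones above (busybox standing in for the suite's test image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-cm
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-cm-pod
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: cm-vol-1
      mountPath: /etc/configmap-volume-1
    - name: cm-vol-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: cm-vol-1
    configMap:
      name: demo-cm
  - name: cm-vol-2
    configMap:
      name: demo-cm
EOF
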
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:55:16.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 15 14:55:16.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1309'
Feb 15 14:55:17.010: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 15 14:55:17.010: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 15 14:55:17.033: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-w8v9f]
Feb 15 14:55:17.034: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-w8v9f" in namespace "kubectl-1309" to be "running and ready"
Feb 15 14:55:17.041: INFO: Pod "e2e-test-nginx-rc-w8v9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.962461ms
Feb 15 14:55:19.051: INFO: Pod "e2e-test-nginx-rc-w8v9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0175462s
Feb 15 14:55:21.063: INFO: Pod "e2e-test-nginx-rc-w8v9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029867074s
Feb 15 14:55:23.073: INFO: Pod "e2e-test-nginx-rc-w8v9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039732558s
Feb 15 14:55:25.079: INFO: Pod "e2e-test-nginx-rc-w8v9f": Phase="Running", Reason="", readiness=true. Elapsed: 8.045301408s
Feb 15 14:55:25.079: INFO: Pod "e2e-test-nginx-rc-w8v9f" satisfied condition "running and ready"
Feb 15 14:55:25.079: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-w8v9f]
Feb 15 14:55:25.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-1309'
Feb 15 14:55:25.266: INFO: stderr: ""
Feb 15 14:55:25.266: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb 15 14:55:25.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1309'
Feb 15 14:55:25.368: INFO: stderr: ""
Feb 15 14:55:25.368: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:55:25.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1309" for this suite.
Feb 15 14:55:47.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:55:47.606: INFO: namespace kubectl-1309 deletion completed in 22.232057635s

• [SLOW TEST:30.871 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
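The empty stdout from kubectl logs rc/e2e-test-nginx-rc is unsurprising here: the nginx image emits access-log lines to stdout only once requests arrive, and none have yet. Stripped of the harness, the flow above is three commands (taken verbatim from this run, minus the --kubeconfig flag):

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
  --generator=run/v1 --namespace=kubectl-1309
kubectl logs rc/e2e-test-nginx-rc --namespace=kubectl-1309
kubectl delete rc e2e-test-nginx-rc --namespace=kubectl-1309

As the deprecation warning notes, --generator=run/v1 (which makes kubectl run produce a ReplicationController rather than a bare pod) was removed in later kubectl releases.
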
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:55:47.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-dabf12af-6cff-4841-bc26-ece267144d29
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:55:47.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3217" for this suite.
Feb 15 14:55:53.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:55:53.868: INFO: namespace secrets-3217 deletion completed in 6.180506836s

• [SLOW TEST:6.261 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
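This spec passes by being rejected: Secret keys are validated server-side (they must be non-empty and consist of alphanumerics, '-', '_' or '.'), so the create call fails immediately and no pod is ever involved, which is why the whole spec is little more than namespace setup and teardown. A sketch that reproduces the rejection, with a hypothetical name and an arbitrary base64 payload:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test
data:
  "": dGVzdA==
EOF

The API server answers with an Invalid error for the empty key.
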
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:55:53.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-e8da8187-9270-4ef6-a019-e0c79db81f71
STEP: Creating a pod to test consume configMaps
Feb 15 14:55:53.968: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-afef10e9-1d76-4055-8b3c-ac7a406581c2" in namespace "projected-8199" to be "success or failure"
Feb 15 14:55:53.979: INFO: Pod "pod-projected-configmaps-afef10e9-1d76-4055-8b3c-ac7a406581c2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.364951ms
Feb 15 14:55:55.987: INFO: Pod "pod-projected-configmaps-afef10e9-1d76-4055-8b3c-ac7a406581c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018604866s
Feb 15 14:55:57.996: INFO: Pod "pod-projected-configmaps-afef10e9-1d76-4055-8b3c-ac7a406581c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027220234s
Feb 15 14:56:00.012: INFO: Pod "pod-projected-configmaps-afef10e9-1d76-4055-8b3c-ac7a406581c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043284614s
Feb 15 14:56:02.035: INFO: Pod "pod-projected-configmaps-afef10e9-1d76-4055-8b3c-ac7a406581c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066802375s
STEP: Saw pod success
Feb 15 14:56:02.036: INFO: Pod "pod-projected-configmaps-afef10e9-1d76-4055-8b3c-ac7a406581c2" satisfied condition "success or failure"
Feb 15 14:56:02.042: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-afef10e9-1d76-4055-8b3c-ac7a406581c2 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 15 14:56:02.123: INFO: Waiting for pod pod-projected-configmaps-afef10e9-1d76-4055-8b3c-ac7a406581c2 to disappear
Feb 15 14:56:02.158: INFO: Pod pod-projected-configmaps-afef10e9-1d76-4055-8b3c-ac7a406581c2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:56:02.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8199" for this suite.
Feb 15 14:56:08.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:56:08.377: INFO: namespace projected-8199 deletion completed in 6.207515799s

• [SLOW TEST:14.509 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
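In this spec's name, "mappings" refers to the items list of a projected configMap source, which re-exposes a key at a chosen path, and "non-root" comes from the pod-level securityContext. A minimal self-contained sketch with hypothetical names (busybox standing in for the suite's image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-cm-nonroot
data:
  data-2: value-2
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-projected-pod
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root, per the spec name
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: demo-cm-nonroot
          items:
          - key: data-2
            path: path/to/data-2   # the mapping: key re-exposed at this path
EOF
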
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:56:08.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 15 14:56:08.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8154'
Feb 15 14:56:08.648: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 15 14:56:08.648: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb 15 14:56:08.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8154'
Feb 15 14:56:08.959: INFO: stderr: ""
Feb 15 14:56:08.959: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:56:08.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8154" for this suite.
Feb 15 14:56:15.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:56:15.189: INFO: namespace kubectl-8154 deletion completed in 6.224745356s

• [SLOW TEST:6.811 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
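Stripped of the harness, this spec reduces to the flow below; the run and delete commands are verbatim from above (minus --kubeconfig), while the middle check is a hypothetical stand-in for the API-level verification step:

kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8154
kubectl get jobs e2e-test-nginx-job --namespace=kubectl-8154
kubectl delete jobs e2e-test-nginx-job --namespace=kubectl-8154

With generator-era kubectl run, --restart=OnFailure is what selects a Job rather than a Deployment (--restart=Always) or a bare pod (--restart=Never).
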
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:56:15.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 15 14:56:15.301: INFO: Waiting up to 5m0s for pod "pod-b262078f-f90b-4e30-8250-e336429b9ff0" in namespace "emptydir-4362" to be "success or failure"
Feb 15 14:56:15.308: INFO: Pod "pod-b262078f-f90b-4e30-8250-e336429b9ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.497913ms
Feb 15 14:56:17.316: INFO: Pod "pod-b262078f-f90b-4e30-8250-e336429b9ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014012559s
Feb 15 14:56:19.324: INFO: Pod "pod-b262078f-f90b-4e30-8250-e336429b9ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022490855s
Feb 15 14:56:21.337: INFO: Pod "pod-b262078f-f90b-4e30-8250-e336429b9ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035600852s
Feb 15 14:56:23.362: INFO: Pod "pod-b262078f-f90b-4e30-8250-e336429b9ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059835282s
Feb 15 14:56:25.373: INFO: Pod "pod-b262078f-f90b-4e30-8250-e336429b9ff0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071004638s
STEP: Saw pod success
Feb 15 14:56:25.373: INFO: Pod "pod-b262078f-f90b-4e30-8250-e336429b9ff0" satisfied condition "success or failure"
Feb 15 14:56:25.377: INFO: Trying to get logs from node iruya-node pod pod-b262078f-f90b-4e30-8250-e336429b9ff0 container test-container: 
STEP: delete the pod
Feb 15 14:56:25.507: INFO: Waiting for pod pod-b262078f-f90b-4e30-8250-e336429b9ff0 to disappear
Feb 15 14:56:25.513: INFO: Pod pod-b262078f-f90b-4e30-8250-e336429b9ff0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 14:56:25.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4362" for this suite.
Feb 15 14:56:31.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 14:56:31.701: INFO: namespace emptydir-4362 deletion completed in 6.182021775s

• [SLOW TEST:16.512 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
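The (non-root,0644,tmpfs) triple in the spec name decodes as: run as a non-root UID, create a file with 0644 permissions, on an emptyDir backed by memory (medium: Memory gives a tmpfs mount). A minimal sketch, assuming busybox and UID 1000:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-emptydir-pod
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs-backed
EOF
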
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 14:56:31.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 15 14:59:33.015: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:33.039: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 14:59:35.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:35.050: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 14:59:37.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:37.058: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 14:59:39.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:39.049: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 14:59:41.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:41.052: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 14:59:43.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:43.049: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 14:59:45.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:45.047: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 14:59:47.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:47.047: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 14:59:49.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:49.048: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 14:59:51.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:51.045: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 14:59:53.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:53.046: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 14:59:55.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:55.052: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 14:59:57.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:57.057: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 14:59:59.040: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 14:59:59.049: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:01.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:01.049: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:03.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:03.047: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:05.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:05.046: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:07.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:07.046: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:09.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:09.047: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:11.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:11.050: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:13.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:13.056: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:15.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:15.080: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:17.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:17.060: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:19.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:19.047: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:21.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:21.044: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:23.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:23.910: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:25.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:25.047: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:27.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:27.048: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:29.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:29.046: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:31.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:31.049: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:33.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:33.051: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:35.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:35.046: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:37.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:37.054: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:39.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:39.048: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:41.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:41.049: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:43.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:43.048: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:45.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:45.054: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:47.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:47.050: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:49.040: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:49.072: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:51.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:51.050: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:53.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:53.061: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:55.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:55.045: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:57.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:57.055: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:00:59.041: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:00:59.048: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:01:01.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:01:01.055: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:01:03.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:01:03.046: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:01:05.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:01:05.049: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:01:07.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:01:07.554: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:01:09.040: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:01:09.057: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:01:11.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:01:11.048: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 15 15:01:13.039: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 15 15:01:13.090: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:01:13.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3209" for this suite.
Feb 15 15:01:31.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:01:31.186: INFO: namespace container-lifecycle-hook-3209 deletion completed in 18.090423468s

• [SLOW TEST:299.484 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:01:31.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb 15 15:01:31.257: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb 15 15:01:33.063: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 15 15:01:35.310: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375692, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 15:01:37.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375692, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 15:01:39.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375692, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 15:01:41.317: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375692, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 15:01:43.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375693, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717375692, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 15 15:01:51.160: INFO: Waited 5.816042682s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:01:51.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5722" for this suite.
Feb 15 15:01:57.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:01:58.081: INFO: namespace aggregator-5722 deletion completed in 6.165010193s

• [SLOW TEST:26.894 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
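The repeated DeploymentStatus dumps above are the suite polling the sample-apiserver Deployment until it reports Available; once it is, registration with the aggregation layer is just an APIService object that points the kube-apiserver at a Service in this namespace. A hypothetical sketch of such a registration (group and version borrowed from the upstream sample-apiserver; a production registration would set caBundle rather than skip TLS verification):

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io
spec:
  group: wardle.k8s.io
  version: v1alpha1
  service:
    name: sample-api               # hypothetical Service fronting the deployment above
    namespace: aggregator-5722
  insecureSkipTLSVerify: true      # sketch only
  groupPriorityMinimum: 2000
  versionPriority: 200
EOF
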
SSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:01:58.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb 15 15:01:58.718: INFO: created pod pod-service-account-defaultsa
Feb 15 15:01:58.718: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 15 15:01:58.789: INFO: created pod pod-service-account-mountsa
Feb 15 15:01:58.790: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 15 15:01:58.905: INFO: created pod pod-service-account-nomountsa
Feb 15 15:01:58.905: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 15 15:01:58.940: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 15 15:01:58.941: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 15 15:01:58.972: INFO: created pod pod-service-account-mountsa-mountspec
Feb 15 15:01:58.973: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 15 15:01:59.100: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 15 15:01:59.101: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 15 15:01:59.123: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 15 15:01:59.123: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 15 15:02:00.252: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 15 15:02:00.252: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 15 15:02:00.720: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 15 15:02:00.720: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:02:00.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9085" for this suite.
Feb 15 15:02:41.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:02:42.042: INFO: namespace svcaccounts-9085 deletion completed in 41.292876466s

• [SLOW TEST:43.961 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
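The nine-pod matrix above pins down the precedence rule: automountServiceAccountToken exists on both the ServiceAccount and the pod spec, and when both are set the pod spec wins, which is why pod-service-account-mountsa-nomountspec ends up with mount: false while pod-service-account-nomountsa-mountspec ends up with mount: true. A minimal sketch of the opt-out, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # SA-level opt-out
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-nomount-pod
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false # pod-level field; overrides the SA when set
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount 2>&1; sleep 5"]
EOF
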
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:02:42.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 15 15:02:42.184: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 15 15:02:47.193: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:02:48.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6646" for this suite.
Feb 15 15:02:54.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:02:54.419: INFO: namespace replication-controller-6646 deletion completed in 6.178608458s

• [SLOW TEST:12.377 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
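"Released" here means the ReplicationController drops its controllerRef from a pod whose labels no longer match the selector, then creates a replacement to restore the replica count. A sketch of the label flip that triggers the release, assuming the controller selects on a name=pod-release label as the pod names above suggest (the placeholder stands for the generated pod name):

kubectl patch pod <pod-release-xxxxx> --namespace=replication-controller-6646 \
  --type=merge -p '{"metadata":{"labels":{"name":"pod-release-orphaned"}}}'
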
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:02:54.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 15 15:02:54.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:03:05.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6427" for this suite.
Feb 15 15:03:59.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:03:59.273: INFO: namespace pods-6427 deletion completed in 54.254238045s

• [SLOW TEST:64.853 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
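Unlike kubectl exec, which upgrades the connection to SPDY, this spec dials the pod's exec subresource directly over WebSockets. A rough sketch of the endpoint it exercises, with placeholders for the server and pod and the assumption of a valid bearer token and the channel.k8s.io framing subprotocol (wscat shown purely as one way to open such a connection):

wscat --subprotocol channel.k8s.io \
  -H "Authorization: Bearer $TOKEN" \
  -c "wss://<apiserver>/api/v1/namespaces/pods-6427/pods/<pod-name>/exec?command=echo&command=remote-exec&stdout=true"
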
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:03:59.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 15 15:04:07.581: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:04:07.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6298" for this suite.
Feb 15 15:04:13.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:04:14.071: INFO: namespace container-runtime-6298 deletion completed in 6.402196731s

• [SLOW TEST:14.797 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
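The odd-looking assertion Expected: &{} to match Container's Termination Message:  -- is the whole point of the spec: with terminationMessagePolicy: FallbackToLogsOnError, container logs are copied into the termination message only when the container fails, so a successful exit leaves the message empty even though the container logged output. A minimal sketch, assuming busybox:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-termination-msg
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
kubectl get pod demo-termination-msg \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # empty on success
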
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:04:14.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 15 15:04:14.215: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 15 15:04:14.234: INFO: Number of nodes with available pods: 0
Feb 15 15:04:14.235: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 15 15:04:14.385: INFO: Number of nodes with available pods: 0
Feb 15 15:04:14.385: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:15.394: INFO: Number of nodes with available pods: 0
Feb 15 15:04:15.394: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:16.390: INFO: Number of nodes with available pods: 0
Feb 15 15:04:16.390: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:17.398: INFO: Number of nodes with available pods: 0
Feb 15 15:04:17.398: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:18.400: INFO: Number of nodes with available pods: 0
Feb 15 15:04:18.400: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:19.394: INFO: Number of nodes with available pods: 0
Feb 15 15:04:19.394: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:20.398: INFO: Number of nodes with available pods: 0
Feb 15 15:04:20.398: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:21.394: INFO: Number of nodes with available pods: 0
Feb 15 15:04:21.394: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:22.394: INFO: Number of nodes with available pods: 1
Feb 15 15:04:22.395: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 15 15:04:22.444: INFO: Number of nodes with available pods: 1
Feb 15 15:04:22.444: INFO: Number of running nodes: 0, number of available pods: 1
Feb 15 15:04:23.453: INFO: Number of nodes with available pods: 0
Feb 15 15:04:23.454: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 15 15:04:23.489: INFO: Number of nodes with available pods: 0
Feb 15 15:04:23.489: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:24.508: INFO: Number of nodes with available pods: 0
Feb 15 15:04:24.508: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:25.499: INFO: Number of nodes with available pods: 0
Feb 15 15:04:25.499: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:26.509: INFO: Number of nodes with available pods: 0
Feb 15 15:04:26.510: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:27.499: INFO: Number of nodes with available pods: 0
Feb 15 15:04:27.499: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:28.500: INFO: Number of nodes with available pods: 0
Feb 15 15:04:28.500: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:29.497: INFO: Number of nodes with available pods: 0
Feb 15 15:04:29.497: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:30.502: INFO: Number of nodes with available pods: 0
Feb 15 15:04:30.502: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:31.505: INFO: Number of nodes with available pods: 0
Feb 15 15:04:31.505: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:32.500: INFO: Number of nodes with available pods: 0
Feb 15 15:04:32.500: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:33.498: INFO: Number of nodes with available pods: 0
Feb 15 15:04:33.498: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:34.501: INFO: Number of nodes with available pods: 0
Feb 15 15:04:34.501: INFO: Node iruya-node is running more than one daemon pod
Feb 15 15:04:35.500: INFO: Number of nodes with available pods: 1
Feb 15 15:04:35.500: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5429, will wait for the garbage collector to delete the pods
Feb 15 15:04:35.576: INFO: Deleting DaemonSet.extensions daemon-set took: 13.068894ms
Feb 15 15:04:35.877: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.220717ms
Feb 15 15:04:42.982: INFO: Number of nodes with available pods: 0
Feb 15 15:04:42.982: INFO: Number of running nodes: 0, number of available pods: 0
Feb 15 15:04:42.989: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5429/daemonsets","resourceVersion":"24464582"},"items":null}

Feb 15 15:04:42.993: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5429/pods","resourceVersion":"24464582"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:04:43.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5429" for this suite.
Feb 15 15:04:49.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:04:49.246: INFO: namespace daemonsets-5429 deletion completed in 6.164886126s

• [SLOW TEST:35.175 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
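
For reference, the label flip this spec performs can be reproduced by hand. A minimal sketch, assuming the DaemonSet's node selector key is "color" (the key itself is not shown in this log) and using the node and DaemonSet names from this run:

kubectl label node iruya-node color=green --overwrite   # daemon pod no longer matches and is removed
kubectl patch daemonset daemon-set --type=merge \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"},"template":{"spec":{"nodeSelector":{"color":"green"}}}}}'
kubectl get pods -o wide                                # the daemon pod returns on the now-green node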
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:04:49.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:04:49.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3187" for this suite.
Feb 15 15:04:55.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:04:55.589: INFO: namespace services-3187 deletion completed in 6.201335226s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.342 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
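
The check behind this spec is small: the built-in "kubernetes" Service in the default namespace must front the API server over HTTPS. Roughly the same assertion by hand (a sketch, assuming kubectl access to the same cluster):

kubectl get service kubernetes -n default -o jsonpath='{.spec.ports[0].port}'   # expect 443
kubectl get service kubernetes -n default -o jsonpath='{.spec.ports[0].name}'   # expect https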
------------------------------
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:04:55.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-8141/secret-test-e4faf4cc-d2f1-4421-9a13-fcc2c97b91e6
STEP: Creating a pod to test consume secrets
Feb 15 15:04:55.680: INFO: Waiting up to 5m0s for pod "pod-configmaps-56a84bd6-9421-4e6a-b0e3-ba7a63cd1d51" in namespace "secrets-8141" to be "success or failure"
Feb 15 15:04:55.769: INFO: Pod "pod-configmaps-56a84bd6-9421-4e6a-b0e3-ba7a63cd1d51": Phase="Pending", Reason="", readiness=false. Elapsed: 89.392775ms
Feb 15 15:04:57.779: INFO: Pod "pod-configmaps-56a84bd6-9421-4e6a-b0e3-ba7a63cd1d51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098845398s
Feb 15 15:04:59.785: INFO: Pod "pod-configmaps-56a84bd6-9421-4e6a-b0e3-ba7a63cd1d51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105448191s
Feb 15 15:05:01.795: INFO: Pod "pod-configmaps-56a84bd6-9421-4e6a-b0e3-ba7a63cd1d51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115432071s
Feb 15 15:05:03.802: INFO: Pod "pod-configmaps-56a84bd6-9421-4e6a-b0e3-ba7a63cd1d51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122514778s
STEP: Saw pod success
Feb 15 15:05:03.803: INFO: Pod "pod-configmaps-56a84bd6-9421-4e6a-b0e3-ba7a63cd1d51" satisfied condition "success or failure"
Feb 15 15:05:03.806: INFO: Trying to get logs from node iruya-node pod pod-configmaps-56a84bd6-9421-4e6a-b0e3-ba7a63cd1d51 container env-test: 
STEP: delete the pod
Feb 15 15:05:06.582: INFO: Waiting for pod pod-configmaps-56a84bd6-9421-4e6a-b0e3-ba7a63cd1d51 to disappear
Feb 15 15:05:06.728: INFO: Pod pod-configmaps-56a84bd6-9421-4e6a-b0e3-ba7a63cd1d51 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:05:06.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8141" for this suite.
Feb 15 15:05:12.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:05:13.019: INFO: namespace secrets-8141 deletion completed in 6.281453049s

• [SLOW TEST:17.430 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
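
The pattern this spec exercises is a Secret surfaced as an environment variable. A minimal sketch with illustrative names (the run above uses generated names):

kubectl create secret generic test-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-test
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: data-1
EOF
kubectl logs env-test   # once the pod has succeeded: SECRET_DATA=value-1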
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:05:13.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-201bd1ae-0145-4069-972f-8701b229b917
STEP: Creating configMap with name cm-test-opt-upd-7e48bebb-d261-4c63-9dd8-8e2fb727927b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-201bd1ae-0145-4069-972f-8701b229b917
STEP: Updating configmap cm-test-opt-upd-7e48bebb-d261-4c63-9dd8-8e2fb727927b
STEP: Creating configMap with name cm-test-opt-create-e0a85360-1f43-4afd-bad5-e519dd3dd2f8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:06:57.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7205" for this suite.
Feb 15 15:07:19.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:07:19.701: INFO: namespace configmap-7205 deletion completed in 22.169106455s

• [SLOW TEST:126.682 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
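
What this spec verifies: a ConfigMap volume marked optional tolerates the ConfigMap being deleted or not yet existing, and later creates and updates are eventually projected into the mounted files. A sketch of the mechanism, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm/data 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-test-opt
      optional: true        # pod starts even though cm-test-opt does not exist yet
EOF
kubectl create configmap cm-test-opt --from-literal=data=value-1   # value appears in the volume shortly after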
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:07:19.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 15 15:07:19.795: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b973edcc-a12c-46d6-b294-2f5efce10149" in namespace "projected-3569" to be "success or failure"
Feb 15 15:07:19.801: INFO: Pod "downwardapi-volume-b973edcc-a12c-46d6-b294-2f5efce10149": Phase="Pending", Reason="", readiness=false. Elapsed: 5.706779ms
Feb 15 15:07:21.813: INFO: Pod "downwardapi-volume-b973edcc-a12c-46d6-b294-2f5efce10149": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017562956s
Feb 15 15:07:23.826: INFO: Pod "downwardapi-volume-b973edcc-a12c-46d6-b294-2f5efce10149": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030602406s
Feb 15 15:07:25.838: INFO: Pod "downwardapi-volume-b973edcc-a12c-46d6-b294-2f5efce10149": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042670831s
Feb 15 15:07:27.851: INFO: Pod "downwardapi-volume-b973edcc-a12c-46d6-b294-2f5efce10149": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055740741s
Feb 15 15:07:29.867: INFO: Pod "downwardapi-volume-b973edcc-a12c-46d6-b294-2f5efce10149": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071890247s
STEP: Saw pod success
Feb 15 15:07:29.867: INFO: Pod "downwardapi-volume-b973edcc-a12c-46d6-b294-2f5efce10149" satisfied condition "success or failure"
Feb 15 15:07:29.876: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b973edcc-a12c-46d6-b294-2f5efce10149 container client-container: 
STEP: delete the pod
Feb 15 15:07:29.976: INFO: Waiting for pod downwardapi-volume-b973edcc-a12c-46d6-b294-2f5efce10149 to disappear
Feb 15 15:07:30.046: INFO: Pod downwardapi-volume-b973edcc-a12c-46d6-b294-2f5efce10149 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:07:30.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3569" for this suite.
Feb 15 15:07:36.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:07:36.207: INFO: namespace projected-3569 deletion completed in 6.15013765s

• [SLOW TEST:16.505 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
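
The downward API volume used here projects the container's own resource limits into a file. A minimal sketch (illustrative names; with the default divisor the value is rendered in bytes):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downward-demo   # 67108864 (64Mi in bytes)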
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:07:36.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb 15 15:07:36.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7198 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 15 15:07:50.542: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0215 15:07:48.342370    3279 log.go:172] (0xc0002b8580) (0xc0005bcdc0) Create stream\nI0215 15:07:48.342543    3279 log.go:172] (0xc0002b8580) (0xc0005bcdc0) Stream added, broadcasting: 1\nI0215 15:07:48.352806    3279 log.go:172] (0xc0002b8580) Reply frame received for 1\nI0215 15:07:48.352884    3279 log.go:172] (0xc0002b8580) (0xc0007ca000) Create stream\nI0215 15:07:48.352896    3279 log.go:172] (0xc0002b8580) (0xc0007ca000) Stream added, broadcasting: 3\nI0215 15:07:48.355229    3279 log.go:172] (0xc0002b8580) Reply frame received for 3\nI0215 15:07:48.355553    3279 log.go:172] (0xc0002b8580) (0xc000a4c000) Create stream\nI0215 15:07:48.355589    3279 log.go:172] (0xc0002b8580) (0xc000a4c000) Stream added, broadcasting: 5\nI0215 15:07:48.358176    3279 log.go:172] (0xc0002b8580) Reply frame received for 5\nI0215 15:07:48.358272    3279 log.go:172] (0xc0002b8580) (0xc0005bce60) Create stream\nI0215 15:07:48.358313    3279 log.go:172] (0xc0002b8580) (0xc0005bce60) Stream added, broadcasting: 7\nI0215 15:07:48.368809    3279 log.go:172] (0xc0002b8580) Reply frame received for 7\nI0215 15:07:48.369383    3279 log.go:172] (0xc0007ca000) (3) Writing data frame\nI0215 15:07:48.369743    3279 log.go:172] (0xc0007ca000) (3) Writing data frame\nI0215 15:07:48.386517    3279 log.go:172] (0xc0002b8580) Data frame received for 5\nI0215 15:07:48.386590    3279 log.go:172] (0xc000a4c000) (5) Data frame handling\nI0215 15:07:48.386633    3279 log.go:172] (0xc000a4c000) (5) Data frame sent\nI0215 15:07:48.391764    3279 log.go:172] (0xc0002b8580) Data frame received for 5\nI0215 15:07:48.391792    3279 log.go:172] (0xc000a4c000) (5) Data frame handling\nI0215 15:07:48.391817    3279 log.go:172] (0xc000a4c000) (5) Data frame sent\nI0215 15:07:50.478756    3279 log.go:172] (0xc0002b8580) Data frame received for 1\nI0215 15:07:50.478833    3279 log.go:172] (0xc0002b8580) (0xc000a4c000) Stream removed, broadcasting: 5\nI0215 15:07:50.478910    3279 log.go:172] (0xc0005bcdc0) (1) Data frame handling\nI0215 15:07:50.478961    3279 log.go:172] (0xc0002b8580) (0xc0007ca000) Stream removed, broadcasting: 3\nI0215 15:07:50.479124    3279 log.go:172] (0xc0005bcdc0) (1) Data frame sent\nI0215 15:07:50.479175    3279 log.go:172] (0xc0002b8580) (0xc0005bce60) Stream removed, broadcasting: 7\nI0215 15:07:50.479215    3279 log.go:172] (0xc0002b8580) (0xc0005bcdc0) Stream removed, broadcasting: 1\nI0215 15:07:50.479241    3279 log.go:172] (0xc0002b8580) Go away received\nI0215 15:07:50.479439    3279 log.go:172] (0xc0002b8580) (0xc0005bcdc0) Stream removed, broadcasting: 1\nI0215 15:07:50.479460    3279 log.go:172] (0xc0002b8580) (0xc0007ca000) Stream removed, broadcasting: 3\nI0215 15:07:50.479470    3279 log.go:172] (0xc0002b8580) (0xc000a4c000) Stream removed, broadcasting: 5\nI0215 15:07:50.479482    3279 log.go:172] (0xc0002b8580) (0xc0005bce60) Stream removed, broadcasting: 7\n"
Feb 15 15:07:50.542: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:07:52.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7198" for this suite.
Feb 15 15:07:58.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:07:58.720: INFO: namespace kubectl-7198 deletion completed in 6.153368748s

• [SLOW TEST:22.512 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
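
As the stderr above notes, --generator=job/v1 was already deprecated in this release. The same one-shot attach-and-clean-up pattern on current kubectl looks roughly like this (a sketch; it creates a bare pod rather than a Job):

echo abcd1234 | kubectl run e2e-test-rm-busybox --image=docker.io/library/busybox:1.29 \
  --rm --restart=Never -i -- sh -c "cat && echo 'stdin closed'"
# prints the piped stdin followed by "stdin closed", then deletes the pod on exit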
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:07:58.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb 15 15:07:58.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8843'
Feb 15 15:07:59.351: INFO: stderr: ""
Feb 15 15:07:59.351: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb 15 15:08:00.365: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 15:08:00.365: INFO: Found 0 / 1
Feb 15 15:08:01.381: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 15:08:01.381: INFO: Found 0 / 1
Feb 15 15:08:02.359: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 15:08:02.359: INFO: Found 0 / 1
Feb 15 15:08:03.362: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 15:08:03.362: INFO: Found 0 / 1
Feb 15 15:08:04.365: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 15:08:04.365: INFO: Found 0 / 1
Feb 15 15:08:05.361: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 15:08:05.361: INFO: Found 0 / 1
Feb 15 15:08:06.360: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 15:08:06.360: INFO: Found 0 / 1
Feb 15 15:08:07.363: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 15:08:07.363: INFO: Found 0 / 1
Feb 15 15:08:08.361: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 15:08:08.361: INFO: Found 1 / 1
Feb 15 15:08:08.361: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 15 15:08:08.366: INFO: Selector matched 1 pods for map[app:redis]
Feb 15 15:08:08.366: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb 15 15:08:08.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7qw72 redis-master --namespace=kubectl-8843'
Feb 15 15:08:08.688: INFO: stderr: ""
Feb 15 15:08:08.689: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 15 Feb 15:08:06.676 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Feb 15:08:06.677 # Server started, Redis version 3.2.12\n1:M 15 Feb 15:08:06.678 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Feb 15:08:06.678 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 15 15:08:08.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7qw72 redis-master --namespace=kubectl-8843 --tail=1'
Feb 15 15:08:08.905: INFO: stderr: ""
Feb 15 15:08:08.905: INFO: stdout: "1:M 15 Feb 15:08:06.678 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 15 15:08:08.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7qw72 redis-master --namespace=kubectl-8843 --limit-bytes=1'
Feb 15 15:08:09.174: INFO: stderr: ""
Feb 15 15:08:09.174: INFO: stdout: " "
STEP: exposing timestamps
Feb 15 15:08:09.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7qw72 redis-master --namespace=kubectl-8843 --tail=1 --timestamps'
Feb 15 15:08:09.376: INFO: stderr: ""
Feb 15 15:08:09.376: INFO: stdout: "2020-02-15T15:08:06.679378515Z 1:M 15 Feb 15:08:06.678 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 15 15:08:11.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7qw72 redis-master --namespace=kubectl-8843 --since=1s'
Feb 15 15:08:12.166: INFO: stderr: ""
Feb 15 15:08:12.166: INFO: stdout: ""
Feb 15 15:08:12.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7qw72 redis-master --namespace=kubectl-8843 --since=24h'
Feb 15 15:08:12.330: INFO: stderr: ""
Feb 15 15:08:12.330: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 15 Feb 15:08:06.676 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Feb 15:08:06.677 # Server started, Redis version 3.2.12\n1:M 15 Feb 15:08:06.678 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Feb 15:08:06.678 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb 15 15:08:12.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8843'
Feb 15 15:08:12.457: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 15 15:08:12.457: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 15 15:08:12.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-8843'
Feb 15 15:08:12.574: INFO: stderr: "No resources found.\n"
Feb 15 15:08:12.574: INFO: stdout: ""
Feb 15 15:08:12.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-8843 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 15 15:08:12.689: INFO: stderr: ""
Feb 15 15:08:12.690: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:08:12.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8843" for this suite.
Feb 15 15:08:34.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:08:34.945: INFO: namespace kubectl-8843 deletion completed in 22.243133233s

• [SLOW TEST:36.225 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
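
The filtering flags exercised above work against any running pod; using the pod and container names from this run (add --namespace=kubectl-8843 while that namespace existed):

kubectl logs redis-master-7qw72 redis-master --tail=1              # last line only
kubectl logs redis-master-7qw72 redis-master --limit-bytes=1       # first byte only
kubectl logs redis-master-7qw72 redis-master --tail=1 --timestamps # prefix each line with its timestamp
kubectl logs redis-master-7qw72 redis-master --since=1s            # empty unless the pod just logged
kubectl logs redis-master-7qw72 redis-master --since=24h           # effectively the full log here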
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:08:34.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4256
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4256
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4256
Feb 15 15:08:35.129: INFO: Found 0 stateful pods, waiting for 1
Feb 15 15:08:45.141: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 15 15:08:45.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4256 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 15 15:08:45.678: INFO: stderr: "I0215 15:08:45.288538    3501 log.go:172] (0xc0008fe630) (0xc000820a00) Create stream\nI0215 15:08:45.288736    3501 log.go:172] (0xc0008fe630) (0xc000820a00) Stream added, broadcasting: 1\nI0215 15:08:45.297077    3501 log.go:172] (0xc0008fe630) Reply frame received for 1\nI0215 15:08:45.297123    3501 log.go:172] (0xc0008fe630) (0xc000820000) Create stream\nI0215 15:08:45.297133    3501 log.go:172] (0xc0008fe630) (0xc000820000) Stream added, broadcasting: 3\nI0215 15:08:45.298871    3501 log.go:172] (0xc0008fe630) Reply frame received for 3\nI0215 15:08:45.298935    3501 log.go:172] (0xc0008fe630) (0xc00087e000) Create stream\nI0215 15:08:45.298946    3501 log.go:172] (0xc0008fe630) (0xc00087e000) Stream added, broadcasting: 5\nI0215 15:08:45.300177    3501 log.go:172] (0xc0008fe630) Reply frame received for 5\nI0215 15:08:45.446089    3501 log.go:172] (0xc0008fe630) Data frame received for 5\nI0215 15:08:45.446140    3501 log.go:172] (0xc00087e000) (5) Data frame handling\nI0215 15:08:45.446157    3501 log.go:172] (0xc00087e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0215 15:08:45.494969    3501 log.go:172] (0xc0008fe630) Data frame received for 3\nI0215 15:08:45.495020    3501 log.go:172] (0xc000820000) (3) Data frame handling\nI0215 15:08:45.495045    3501 log.go:172] (0xc000820000) (3) Data frame sent\nI0215 15:08:45.666716    3501 log.go:172] (0xc0008fe630) Data frame received for 1\nI0215 15:08:45.666888    3501 log.go:172] (0xc0008fe630) (0xc000820000) Stream removed, broadcasting: 3\nI0215 15:08:45.666976    3501 log.go:172] (0xc000820a00) (1) Data frame handling\nI0215 15:08:45.666994    3501 log.go:172] (0xc000820a00) (1) Data frame sent\nI0215 15:08:45.667421    3501 log.go:172] (0xc0008fe630) (0xc000820a00) Stream removed, broadcasting: 1\nI0215 15:08:45.667618    3501 log.go:172] (0xc0008fe630) (0xc00087e000) Stream removed, broadcasting: 5\nI0215 15:08:45.667675    3501 log.go:172] (0xc0008fe630) Go away received\nI0215 15:08:45.669087    3501 log.go:172] (0xc0008fe630) (0xc000820a00) Stream removed, broadcasting: 1\nI0215 15:08:45.669132    3501 log.go:172] (0xc0008fe630) (0xc000820000) Stream removed, broadcasting: 3\nI0215 15:08:45.669141    3501 log.go:172] (0xc0008fe630) (0xc00087e000) Stream removed, broadcasting: 5\n"
Feb 15 15:08:45.678: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 15 15:08:45.678: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 15 15:08:45.687: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 15 15:08:55.725: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 15 15:08:55.725: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 15:08:55.754: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999384s
Feb 15 15:08:56.765: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990159852s
Feb 15 15:08:57.775: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.978557907s
Feb 15 15:08:58.785: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.968746471s
Feb 15 15:08:59.797: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.959392834s
Feb 15 15:09:00.812: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.946604031s
Feb 15 15:09:01.823: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.931753206s
Feb 15 15:09:02.833: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.921117654s
Feb 15 15:09:03.852: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.911177032s
Feb 15 15:09:04.865: INFO: Verifying statefulset ss doesn't scale past 1 for another 891.819509ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4256
Feb 15 15:09:05.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4256 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 15 15:09:06.406: INFO: stderr: "I0215 15:09:06.075356    3515 log.go:172] (0xc000a746e0) (0xc000aa6aa0) Create stream\nI0215 15:09:06.075493    3515 log.go:172] (0xc000a746e0) (0xc000aa6aa0) Stream added, broadcasting: 1\nI0215 15:09:06.093897    3515 log.go:172] (0xc000a746e0) Reply frame received for 1\nI0215 15:09:06.093949    3515 log.go:172] (0xc000a746e0) (0xc000aa6000) Create stream\nI0215 15:09:06.093959    3515 log.go:172] (0xc000a746e0) (0xc000aa6000) Stream added, broadcasting: 3\nI0215 15:09:06.095658    3515 log.go:172] (0xc000a746e0) Reply frame received for 3\nI0215 15:09:06.095682    3515 log.go:172] (0xc000a746e0) (0xc0001ffa40) Create stream\nI0215 15:09:06.095702    3515 log.go:172] (0xc000a746e0) (0xc0001ffa40) Stream added, broadcasting: 5\nI0215 15:09:06.096937    3515 log.go:172] (0xc000a746e0) Reply frame received for 5\nI0215 15:09:06.222397    3515 log.go:172] (0xc000a746e0) Data frame received for 5\nI0215 15:09:06.222468    3515 log.go:172] (0xc0001ffa40) (5) Data frame handling\nI0215 15:09:06.222487    3515 log.go:172] (0xc0001ffa40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0215 15:09:06.222504    3515 log.go:172] (0xc000a746e0) Data frame received for 3\nI0215 15:09:06.222514    3515 log.go:172] (0xc000aa6000) (3) Data frame handling\nI0215 15:09:06.222521    3515 log.go:172] (0xc000aa6000) (3) Data frame sent\nI0215 15:09:06.391111    3515 log.go:172] (0xc000a746e0) Data frame received for 1\nI0215 15:09:06.391301    3515 log.go:172] (0xc000a746e0) (0xc000aa6000) Stream removed, broadcasting: 3\nI0215 15:09:06.391510    3515 log.go:172] (0xc000aa6aa0) (1) Data frame handling\nI0215 15:09:06.391560    3515 log.go:172] (0xc000aa6aa0) (1) Data frame sent\nI0215 15:09:06.391579    3515 log.go:172] (0xc000a746e0) (0xc000aa6aa0) Stream removed, broadcasting: 1\nI0215 15:09:06.391953    3515 log.go:172] (0xc000a746e0) (0xc0001ffa40) Stream removed, broadcasting: 5\nI0215 15:09:06.392022    3515 log.go:172] (0xc000a746e0) Go away received\nI0215 15:09:06.392865    3515 log.go:172] (0xc000a746e0) (0xc000aa6aa0) Stream removed, broadcasting: 1\nI0215 15:09:06.392911    3515 log.go:172] (0xc000a746e0) (0xc000aa6000) Stream removed, broadcasting: 3\nI0215 15:09:06.392924    3515 log.go:172] (0xc000a746e0) (0xc0001ffa40) Stream removed, broadcasting: 5\n"
Feb 15 15:09:06.407: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 15 15:09:06.407: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 15 15:09:06.533: INFO: Found 1 stateful pods, waiting for 3
Feb 15 15:09:16.550: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 15:09:16.550: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 15:09:16.550: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 15 15:09:26.552: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 15:09:26.552: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 15 15:09:26.552: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 15 15:09:26.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4256 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 15 15:09:27.010: INFO: stderr: "I0215 15:09:26.748627    3535 log.go:172] (0xc000932420) (0xc0007a2640) Create stream\nI0215 15:09:26.748722    3535 log.go:172] (0xc000932420) (0xc0007a2640) Stream added, broadcasting: 1\nI0215 15:09:26.753596    3535 log.go:172] (0xc000932420) Reply frame received for 1\nI0215 15:09:26.753633    3535 log.go:172] (0xc000932420) (0xc000904000) Create stream\nI0215 15:09:26.753642    3535 log.go:172] (0xc000932420) (0xc000904000) Stream added, broadcasting: 3\nI0215 15:09:26.755046    3535 log.go:172] (0xc000932420) Reply frame received for 3\nI0215 15:09:26.755094    3535 log.go:172] (0xc000932420) (0xc0006263c0) Create stream\nI0215 15:09:26.755111    3535 log.go:172] (0xc000932420) (0xc0006263c0) Stream added, broadcasting: 5\nI0215 15:09:26.756545    3535 log.go:172] (0xc000932420) Reply frame received for 5\nI0215 15:09:26.857937    3535 log.go:172] (0xc000932420) Data frame received for 3\nI0215 15:09:26.858054    3535 log.go:172] (0xc000904000) (3) Data frame handling\nI0215 15:09:26.858079    3535 log.go:172] (0xc000904000) (3) Data frame sent\nI0215 15:09:26.858113    3535 log.go:172] (0xc000932420) Data frame received for 5\nI0215 15:09:26.858128    3535 log.go:172] (0xc0006263c0) (5) Data frame handling\nI0215 15:09:26.858147    3535 log.go:172] (0xc0006263c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0215 15:09:27.000376    3535 log.go:172] (0xc000932420) Data frame received for 1\nI0215 15:09:27.000470    3535 log.go:172] (0xc0007a2640) (1) Data frame handling\nI0215 15:09:27.000521    3535 log.go:172] (0xc0007a2640) (1) Data frame sent\nI0215 15:09:27.000559    3535 log.go:172] (0xc000932420) (0xc0007a2640) Stream removed, broadcasting: 1\nI0215 15:09:27.000612    3535 log.go:172] (0xc000932420) (0xc000904000) Stream removed, broadcasting: 3\nI0215 15:09:27.000679    3535 log.go:172] (0xc000932420) (0xc0006263c0) Stream removed, broadcasting: 5\nI0215 15:09:27.000694    3535 log.go:172] (0xc000932420) Go away received\nI0215 15:09:27.001408    3535 log.go:172] (0xc000932420) (0xc0007a2640) Stream removed, broadcasting: 1\nI0215 15:09:27.001428    3535 log.go:172] (0xc000932420) (0xc000904000) Stream removed, broadcasting: 3\nI0215 15:09:27.001448    3535 log.go:172] (0xc000932420) (0xc0006263c0) Stream removed, broadcasting: 5\n"
Feb 15 15:09:27.010: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 15 15:09:27.011: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 15 15:09:27.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4256 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 15 15:09:27.512: INFO: stderr: "I0215 15:09:27.223830    3551 log.go:172] (0xc0009700b0) (0xc00093a640) Create stream\nI0215 15:09:27.224026    3551 log.go:172] (0xc0009700b0) (0xc00093a640) Stream added, broadcasting: 1\nI0215 15:09:27.232180    3551 log.go:172] (0xc0009700b0) Reply frame received for 1\nI0215 15:09:27.232244    3551 log.go:172] (0xc0009700b0) (0xc0004d5ae0) Create stream\nI0215 15:09:27.232265    3551 log.go:172] (0xc0009700b0) (0xc0004d5ae0) Stream added, broadcasting: 3\nI0215 15:09:27.233150    3551 log.go:172] (0xc0009700b0) Reply frame received for 3\nI0215 15:09:27.233172    3551 log.go:172] (0xc0009700b0) (0xc00093a6e0) Create stream\nI0215 15:09:27.233181    3551 log.go:172] (0xc0009700b0) (0xc00093a6e0) Stream added, broadcasting: 5\nI0215 15:09:27.234431    3551 log.go:172] (0xc0009700b0) Reply frame received for 5\nI0215 15:09:27.358041    3551 log.go:172] (0xc0009700b0) Data frame received for 5\nI0215 15:09:27.358176    3551 log.go:172] (0xc00093a6e0) (5) Data frame handling\nI0215 15:09:27.358225    3551 log.go:172] (0xc00093a6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0215 15:09:27.430492    3551 log.go:172] (0xc0009700b0) Data frame received for 3\nI0215 15:09:27.431060    3551 log.go:172] (0xc0004d5ae0) (3) Data frame handling\nI0215 15:09:27.431149    3551 log.go:172] (0xc0004d5ae0) (3) Data frame sent\nI0215 15:09:27.506323    3551 log.go:172] (0xc0009700b0) Data frame received for 1\nI0215 15:09:27.506512    3551 log.go:172] (0xc0009700b0) (0xc0004d5ae0) Stream removed, broadcasting: 3\nI0215 15:09:27.506710    3551 log.go:172] (0xc00093a640) (1) Data frame handling\nI0215 15:09:27.506746    3551 log.go:172] (0xc00093a640) (1) Data frame sent\nI0215 15:09:27.507090    3551 log.go:172] (0xc0009700b0) (0xc00093a6e0) Stream removed, broadcasting: 5\nI0215 15:09:27.507308    3551 log.go:172] (0xc0009700b0) (0xc00093a640) Stream removed, broadcasting: 1\nI0215 15:09:27.507353    3551 log.go:172] (0xc0009700b0) Go away received\nI0215 15:09:27.508157    3551 log.go:172] (0xc0009700b0) (0xc00093a640) Stream removed, broadcasting: 1\nI0215 15:09:27.508175    3551 log.go:172] (0xc0009700b0) (0xc0004d5ae0) Stream removed, broadcasting: 3\nI0215 15:09:27.508190    3551 log.go:172] (0xc0009700b0) (0xc00093a6e0) Stream removed, broadcasting: 5\n"
Feb 15 15:09:27.513: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 15 15:09:27.513: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 15 15:09:27.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4256 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 15 15:09:28.093: INFO: stderr: "I0215 15:09:27.689982    3571 log.go:172] (0xc000117080) (0xc0005d6960) Create stream\nI0215 15:09:27.690125    3571 log.go:172] (0xc000117080) (0xc0005d6960) Stream added, broadcasting: 1\nI0215 15:09:27.699168    3571 log.go:172] (0xc000117080) Reply frame received for 1\nI0215 15:09:27.699228    3571 log.go:172] (0xc000117080) (0xc0005d6a00) Create stream\nI0215 15:09:27.699240    3571 log.go:172] (0xc000117080) (0xc0005d6a00) Stream added, broadcasting: 3\nI0215 15:09:27.701131    3571 log.go:172] (0xc000117080) Reply frame received for 3\nI0215 15:09:27.701169    3571 log.go:172] (0xc000117080) (0xc0007fe000) Create stream\nI0215 15:09:27.701189    3571 log.go:172] (0xc000117080) (0xc0007fe000) Stream added, broadcasting: 5\nI0215 15:09:27.703929    3571 log.go:172] (0xc000117080) Reply frame received for 5\nI0215 15:09:27.844454    3571 log.go:172] (0xc000117080) Data frame received for 5\nI0215 15:09:27.844607    3571 log.go:172] (0xc0007fe000) (5) Data frame handling\nI0215 15:09:27.844713    3571 log.go:172] (0xc0007fe000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0215 15:09:27.906277    3571 log.go:172] (0xc000117080) Data frame received for 3\nI0215 15:09:27.906397    3571 log.go:172] (0xc0005d6a00) (3) Data frame handling\nI0215 15:09:27.906462    3571 log.go:172] (0xc0005d6a00) (3) Data frame sent\nI0215 15:09:28.079596    3571 log.go:172] (0xc000117080) Data frame received for 1\nI0215 15:09:28.079795    3571 log.go:172] (0xc0005d6960) (1) Data frame handling\nI0215 15:09:28.079948    3571 log.go:172] (0xc0005d6960) (1) Data frame sent\nI0215 15:09:28.081028    3571 log.go:172] (0xc000117080) (0xc0005d6960) Stream removed, broadcasting: 1\nI0215 15:09:28.081409    3571 log.go:172] (0xc000117080) (0xc0005d6a00) Stream removed, broadcasting: 3\nI0215 15:09:28.081847    3571 log.go:172] (0xc000117080) (0xc0007fe000) Stream removed, broadcasting: 5\nI0215 15:09:28.081890    3571 log.go:172] (0xc000117080) Go away received\nI0215 15:09:28.082434    3571 log.go:172] (0xc000117080) (0xc0005d6960) Stream removed, broadcasting: 1\nI0215 15:09:28.082444    3571 log.go:172] (0xc000117080) (0xc0005d6a00) Stream removed, broadcasting: 3\nI0215 15:09:28.082454    3571 log.go:172] (0xc000117080) (0xc0007fe000) Stream removed, broadcasting: 5\n"
Feb 15 15:09:28.093: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 15 15:09:28.093: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 15 15:09:28.093: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 15:09:28.100: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 15 15:09:38.348: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 15 15:09:38.348: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 15 15:09:38.348: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 15 15:09:38.525: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999982059s
Feb 15 15:09:39.531: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984980546s
Feb 15 15:09:41.274: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.978770672s
Feb 15 15:09:42.286: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.236205782s
Feb 15 15:09:43.298: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.224226304s
Feb 15 15:09:44.315: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.211749838s
Feb 15 15:09:45.325: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.195343356s
Feb 15 15:09:46.336: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.18469894s
Feb 15 15:09:47.346: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.174423853s
Feb 15 15:09:48.363: INFO: Verifying statefulset ss doesn't scale past 3 for another 163.817126ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4256
Feb 15 15:09:49.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4256 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 15 15:09:49.942: INFO: stderr: "I0215 15:09:49.597361    3591 log.go:172] (0xc000a2e0b0) (0xc0009a81e0) Create stream\nI0215 15:09:49.597602    3591 log.go:172] (0xc000a2e0b0) (0xc0009a81e0) Stream added, broadcasting: 1\nI0215 15:09:49.608694    3591 log.go:172] (0xc000a2e0b0) Reply frame received for 1\nI0215 15:09:49.608766    3591 log.go:172] (0xc000a2e0b0) (0xc0009a8280) Create stream\nI0215 15:09:49.608779    3591 log.go:172] (0xc000a2e0b0) (0xc0009a8280) Stream added, broadcasting: 3\nI0215 15:09:49.610416    3591 log.go:172] (0xc000a2e0b0) Reply frame received for 3\nI0215 15:09:49.610470    3591 log.go:172] (0xc000a2e0b0) (0xc0006ba320) Create stream\nI0215 15:09:49.610483    3591 log.go:172] (0xc000a2e0b0) (0xc0006ba320) Stream added, broadcasting: 5\nI0215 15:09:49.611868    3591 log.go:172] (0xc000a2e0b0) Reply frame received for 5\nI0215 15:09:49.723589    3591 log.go:172] (0xc000a2e0b0) Data frame received for 5\nI0215 15:09:49.723737    3591 log.go:172] (0xc000a2e0b0) Data frame received for 3\nI0215 15:09:49.723787    3591 log.go:172] (0xc0009a8280) (3) Data frame handling\nI0215 15:09:49.723826    3591 log.go:172] (0xc0009a8280) (3) Data frame sent\nI0215 15:09:49.723864    3591 log.go:172] (0xc0006ba320) (5) Data frame handling\nI0215 15:09:49.723886    3591 log.go:172] (0xc0006ba320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0215 15:09:49.925509    3591 log.go:172] (0xc000a2e0b0) Data frame received for 1\nI0215 15:09:49.925638    3591 log.go:172] (0xc0009a81e0) (1) Data frame handling\nI0215 15:09:49.925755    3591 log.go:172] (0xc0009a81e0) (1) Data frame sent\nI0215 15:09:49.926027    3591 log.go:172] (0xc000a2e0b0) (0xc0009a81e0) Stream removed, broadcasting: 1\nI0215 15:09:49.928001    3591 log.go:172] (0xc000a2e0b0) (0xc0009a8280) Stream removed, broadcasting: 3\nI0215 15:09:49.928109    3591 log.go:172] (0xc000a2e0b0) (0xc0006ba320) Stream removed, broadcasting: 5\nI0215 15:09:49.928234    3591 log.go:172] (0xc000a2e0b0) (0xc0009a81e0) Stream removed, broadcasting: 1\nI0215 15:09:49.928261    3591 log.go:172] (0xc000a2e0b0) (0xc0009a8280) Stream removed, broadcasting: 3\nI0215 15:09:49.928286    3591 log.go:172] (0xc000a2e0b0) (0xc0006ba320) Stream removed, broadcasting: 5\n"
Feb 15 15:09:49.942: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 15 15:09:49.942: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 15 15:09:49.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4256 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 15 15:09:50.321: INFO: stderr: "I0215 15:09:50.125770    3612 log.go:172] (0xc00013ae70) (0xc00064e6e0) Create stream\nI0215 15:09:50.125918    3612 log.go:172] (0xc00013ae70) (0xc00064e6e0) Stream added, broadcasting: 1\nI0215 15:09:50.129030    3612 log.go:172] (0xc00013ae70) Reply frame received for 1\nI0215 15:09:50.129177    3612 log.go:172] (0xc00013ae70) (0xc000984000) Create stream\nI0215 15:09:50.129209    3612 log.go:172] (0xc00013ae70) (0xc000984000) Stream added, broadcasting: 3\nI0215 15:09:50.130490    3612 log.go:172] (0xc00013ae70) Reply frame received for 3\nI0215 15:09:50.130511    3612 log.go:172] (0xc00013ae70) (0xc0009840a0) Create stream\nI0215 15:09:50.130516    3612 log.go:172] (0xc00013ae70) (0xc0009840a0) Stream added, broadcasting: 5\nI0215 15:09:50.131611    3612 log.go:172] (0xc00013ae70) Reply frame received for 5\nI0215 15:09:50.202174    3612 log.go:172] (0xc00013ae70) Data frame received for 3\nI0215 15:09:50.202239    3612 log.go:172] (0xc000984000) (3) Data frame handling\nI0215 15:09:50.202276    3612 log.go:172] (0xc000984000) (3) Data frame sent\nI0215 15:09:50.202497    3612 log.go:172] (0xc00013ae70) Data frame received for 5\nI0215 15:09:50.202517    3612 log.go:172] (0xc0009840a0) (5) Data frame handling\nI0215 15:09:50.202532    3612 log.go:172] (0xc0009840a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0215 15:09:50.313623    3612 log.go:172] (0xc00013ae70) (0xc000984000) Stream removed, broadcasting: 3\nI0215 15:09:50.313938    3612 log.go:172] (0xc00013ae70) Data frame received for 1\nI0215 15:09:50.313978    3612 log.go:172] (0xc00064e6e0) (1) Data frame handling\nI0215 15:09:50.314004    3612 log.go:172] (0xc00064e6e0) (1) Data frame sent\nI0215 15:09:50.314019    3612 log.go:172] (0xc00013ae70) (0xc00064e6e0) Stream removed, broadcasting: 1\nI0215 15:09:50.314183    3612 log.go:172] (0xc00013ae70) (0xc0009840a0) Stream removed, broadcasting: 5\nI0215 15:09:50.314213    3612 log.go:172] (0xc00013ae70) Go away received\nI0215 15:09:50.315072    3612 log.go:172] (0xc00013ae70) (0xc00064e6e0) Stream removed, broadcasting: 1\nI0215 15:09:50.315092    3612 log.go:172] (0xc00013ae70) (0xc000984000) Stream removed, broadcasting: 3\nI0215 15:09:50.315103    3612 log.go:172] (0xc00013ae70) (0xc0009840a0) Stream removed, broadcasting: 5\n"
Feb 15 15:09:50.321: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 15 15:09:50.321: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 15 15:09:50.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4256 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 15 15:09:50.976: INFO: stderr: "I0215 15:09:50.561272    3632 log.go:172] (0xc00098e0b0) (0xc0009706e0) Create stream\nI0215 15:09:50.561603    3632 log.go:172] (0xc00098e0b0) (0xc0009706e0) Stream added, broadcasting: 1\nI0215 15:09:50.568649    3632 log.go:172] (0xc00098e0b0) Reply frame received for 1\nI0215 15:09:50.568713    3632 log.go:172] (0xc00098e0b0) (0xc00050c280) Create stream\nI0215 15:09:50.568723    3632 log.go:172] (0xc00098e0b0) (0xc00050c280) Stream added, broadcasting: 3\nI0215 15:09:50.569970    3632 log.go:172] (0xc00098e0b0) Reply frame received for 3\nI0215 15:09:50.570070    3632 log.go:172] (0xc00098e0b0) (0xc00040a000) Create stream\nI0215 15:09:50.570112    3632 log.go:172] (0xc00098e0b0) (0xc00040a000) Stream added, broadcasting: 5\nI0215 15:09:50.571588    3632 log.go:172] (0xc00098e0b0) Reply frame received for 5\nI0215 15:09:50.704677    3632 log.go:172] (0xc00098e0b0) Data frame received for 3\nI0215 15:09:50.704777    3632 log.go:172] (0xc00050c280) (3) Data frame handling\nI0215 15:09:50.704839    3632 log.go:172] (0xc00050c280) (3) Data frame sent\nI0215 15:09:50.705156    3632 log.go:172] (0xc00098e0b0) Data frame received for 5\nI0215 15:09:50.705226    3632 log.go:172] (0xc00040a000) (5) Data frame handling\nI0215 15:09:50.705294    3632 log.go:172] (0xc00040a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0215 15:09:50.960484    3632 log.go:172] (0xc00098e0b0) Data frame received for 1\nI0215 15:09:50.960686    3632 log.go:172] (0xc0009706e0) (1) Data frame handling\nI0215 15:09:50.960765    3632 log.go:172] (0xc0009706e0) (1) Data frame sent\nI0215 15:09:50.960825    3632 log.go:172] (0xc00098e0b0) (0xc0009706e0) Stream removed, broadcasting: 1\nI0215 15:09:50.961877    3632 log.go:172] (0xc00098e0b0) (0xc00050c280) Stream removed, broadcasting: 3\nI0215 15:09:50.962016    3632 log.go:172] (0xc00098e0b0) (0xc00040a000) Stream removed, broadcasting: 5\nI0215 15:09:50.962176    3632 log.go:172] (0xc00098e0b0) Go away received\nI0215 15:09:50.962624    3632 log.go:172] (0xc00098e0b0) (0xc0009706e0) Stream removed, broadcasting: 1\nI0215 15:09:50.962659    3632 log.go:172] (0xc00098e0b0) (0xc00050c280) Stream removed, broadcasting: 3\nI0215 15:09:50.962679    3632 log.go:172] (0xc00098e0b0) (0xc00040a000) Stream removed, broadcasting: 5\n"
Feb 15 15:09:50.976: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 15 15:09:50.976: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 15 15:09:50.976: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 15 15:10:21.013: INFO: Deleting all statefulset in ns statefulset-4256
Feb 15 15:10:21.020: INFO: Scaling statefulset ss to 0
Feb 15 15:10:21.035: INFO: Waiting for statefulset status.replicas updated to 0
Feb 15 15:10:21.038: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:10:21.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4256" for this suite.
Feb 15 15:10:29.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:10:29.312: INFO: namespace statefulset-4256 deletion completed in 8.195418328s

• [SLOW TEST:114.367 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
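
In short, what this spec asserts: under the default OrderedReady pod management, a StatefulSet scales up strictly in ordinal order (ss-0, ss-1, ss-2), scales down in reverse, and halts in either direction while any replica is unready; the run above forces unreadiness by moving aside the index.html the readiness probe serves. A sketch of the observable part, reusing the selector from this run:

kubectl scale statefulset ss --replicas=3   # pods are created one ordinal at a time
kubectl get pods -l baz=blah,foo=bar -w     # watch the strict ordering
kubectl scale statefulset ss --replicas=0   # pods are deleted highest ordinal first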
------------------------------
SSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 15 15:10:29.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 15 15:10:29.381: INFO: Creating ReplicaSet my-hostname-basic-d876f70f-c37f-42a3-819a-b98e48581b7b
Feb 15 15:10:29.397: INFO: Pod name my-hostname-basic-d876f70f-c37f-42a3-819a-b98e48581b7b: Found 0 pods out of 1
Feb 15 15:10:34.411: INFO: Pod name my-hostname-basic-d876f70f-c37f-42a3-819a-b98e48581b7b: Found 1 pods out of 1
Feb 15 15:10:34.411: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d876f70f-c37f-42a3-819a-b98e48581b7b" is running
Feb 15 15:10:38.425: INFO: Pod "my-hostname-basic-d876f70f-c37f-42a3-819a-b98e48581b7b-jrbl7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 15:10:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 15:10:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d876f70f-c37f-42a3-819a-b98e48581b7b]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 15:10:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d876f70f-c37f-42a3-819a-b98e48581b7b]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-15 15:10:29 +0000 UTC Reason: Message:}])
Feb 15 15:10:38.426: INFO: Trying to dial the pod
Feb 15 15:10:43.471: INFO: Controller my-hostname-basic-d876f70f-c37f-42a3-819a-b98e48581b7b: Got expected result from replica 1 [my-hostname-basic-d876f70f-c37f-42a3-819a-b98e48581b7b-jrbl7]: "my-hostname-basic-d876f70f-c37f-42a3-819a-b98e48581b7b-jrbl7", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 15 15:10:43.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5230" for this suite.
Feb 15 15:10:49.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 15 15:10:49.670: INFO: namespace replicaset-5230 deletion completed in 6.190954253s

• [SLOW TEST:20.358 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
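
The serve-hostname pattern above has each replica answer HTTP with its own pod name, which is what the test dials. A sketch with assumed image and names (agnhost's serve-hostname subcommand answers on :9376; the exact image this run used is not shown in the log):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: registry.k8s.io/e2e-test-images/agnhost:2.39   # assumed image
        args: ["serve-hostname"]
EOF
kubectl get pods -l app=my-hostname-basic -o wide   # curl <pod-ip>:9376 returns the pod name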
------------------------------
SSSSSSSSSSSSSSSSSS
Feb 15 15:10:49.671: INFO: Running AfterSuite actions on all nodes
Feb 15 15:10:49.671: INFO: Running AfterSuite actions on node 1
Feb 15 15:10:49.671: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8079.160 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS