I0110 12:56:08.573385 8 e2e.go:243] Starting e2e run "fdf26298-6274-49fb-a625-32d68d475e0c" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578660967 - Will randomize all specs
Will run 215 of 4412 specs

Jan 10 12:56:08.998: INFO: >>> kubeConfig: /root/.kube/config
Jan 10 12:56:09.002: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 10 12:56:09.025: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 10 12:56:09.057: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 10 12:56:09.057: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 10 12:56:09.057: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 10 12:56:09.069: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 10 12:56:09.069: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 10 12:56:09.069: INFO: e2e test version: v1.15.7
Jan 10 12:56:09.070: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 12:56:09.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Jan 10 12:56:09.148: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
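The downward-API pod this first spec builds can be sketched by hand. The e2e framework constructs the object in Go, so the manifest below is an assumption: the pod name, mount path, and busybox image are illustrative; only the mechanism (a `downwardAPI` volume with a `resourceFieldRef` on `limits.memory`) matches what the test exercises.

```shell
# Hypothetical equivalent of the test pod: a container that reads its own
# memory limit from a file projected by a downwardAPI volume.
cat > downward-memlimit.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"               # the value the file should contain (in bytes)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
# Sanity-check the sketch locally (no cluster needed).
grep -q 'resource: limits.memory' downward-memlimit.yaml && echo ok
```

Applied to a cluster, the pod would run to completion and its log would contain the limit in bytes, which is what the "success or failure" wait below asserts.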
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 12:56:09.169: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2cca7bc-2b16-4dcf-90cb-d492c5758cf8" in namespace "downward-api-2159" to be "success or failure"
Jan 10 12:56:09.180: INFO: Pod "downwardapi-volume-d2cca7bc-2b16-4dcf-90cb-d492c5758cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.304094ms
Jan 10 12:56:11.192: INFO: Pod "downwardapi-volume-d2cca7bc-2b16-4dcf-90cb-d492c5758cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022966434s
Jan 10 12:56:13.203: INFO: Pod "downwardapi-volume-d2cca7bc-2b16-4dcf-90cb-d492c5758cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034072115s
Jan 10 12:56:15.213: INFO: Pod "downwardapi-volume-d2cca7bc-2b16-4dcf-90cb-d492c5758cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043991302s
Jan 10 12:56:17.223: INFO: Pod "downwardapi-volume-d2cca7bc-2b16-4dcf-90cb-d492c5758cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05333582s
Jan 10 12:56:19.231: INFO: Pod "downwardapi-volume-d2cca7bc-2b16-4dcf-90cb-d492c5758cf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061689785s
STEP: Saw pod success
Jan 10 12:56:19.231: INFO: Pod "downwardapi-volume-d2cca7bc-2b16-4dcf-90cb-d492c5758cf8" satisfied condition "success or failure"
Jan 10 12:56:19.235: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d2cca7bc-2b16-4dcf-90cb-d492c5758cf8 container client-container:
STEP: delete the pod
Jan 10 12:56:19.417: INFO: Waiting for pod downwardapi-volume-d2cca7bc-2b16-4dcf-90cb-d492c5758cf8 to disappear
Jan 10 12:56:19.426: INFO: Pod downwardapi-volume-d2cca7bc-2b16-4dcf-90cb-d492c5758cf8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 12:56:19.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2159" for this suite.
Jan 10 12:56:25.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:56:25.598: INFO: namespace downward-api-2159 deletion completed in 6.165364505s

• [SLOW TEST:16.527 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Secrets
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 12:56:25.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-4ee8e01c-5d50-46fa-af1f-629a8a7a9988
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 12:56:25.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6997" for this suite.
Jan 10 12:56:31.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:56:31.957: INFO: namespace secrets-6997 deletion completed in 6.237586199s

• [SLOW TEST:6.360 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 12:56:31.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 10 12:56:32.180: INFO: Waiting up to 5m0s for pod "downward-api-c38e50e8-55bf-4455-8c94-fcb4c77927a0" in namespace "downward-api-311" to be "success or failure"
Jan 10 12:56:32.205: INFO: Pod "downward-api-c38e50e8-55bf-4455-8c94-fcb4c77927a0": Phase="Pending", Reason="", readiness=false. Elapsed: 24.742197ms
Jan 10 12:56:34.215: INFO: Pod "downward-api-c38e50e8-55bf-4455-8c94-fcb4c77927a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034488319s
Jan 10 12:56:36.224: INFO: Pod "downward-api-c38e50e8-55bf-4455-8c94-fcb4c77927a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044261144s
Jan 10 12:56:38.237: INFO: Pod "downward-api-c38e50e8-55bf-4455-8c94-fcb4c77927a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056521595s
Jan 10 12:56:40.247: INFO: Pod "downward-api-c38e50e8-55bf-4455-8c94-fcb4c77927a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066392356s
STEP: Saw pod success
Jan 10 12:56:40.247: INFO: Pod "downward-api-c38e50e8-55bf-4455-8c94-fcb4c77927a0" satisfied condition "success or failure"
Jan 10 12:56:40.253: INFO: Trying to get logs from node iruya-node pod downward-api-c38e50e8-55bf-4455-8c94-fcb4c77927a0 container dapi-container:
STEP: delete the pod
Jan 10 12:56:40.550: INFO: Waiting for pod downward-api-c38e50e8-55bf-4455-8c94-fcb4c77927a0 to disappear
Jan 10 12:56:40.572: INFO: Pod downward-api-c38e50e8-55bf-4455-8c94-fcb4c77927a0 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 12:56:40.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-311" for this suite.
Jan 10 12:56:46.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:56:46.767: INFO: namespace downward-api-311 deletion completed in 6.187362732s

• [SLOW TEST:14.809 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 12:56:46.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-3969
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3969 to expose endpoints map[]
Jan 10 12:56:46.954: INFO: Get endpoints failed (10.097122ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 10 12:56:47.998: INFO: successfully validated that service endpoint-test2 in namespace services-3969 exposes endpoints map[] (1.0536603s elapsed)
STEP: Creating pod pod1 in namespace services-3969
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3969 to expose endpoints map[pod1:[80]]
Jan 10 12:56:52.796: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.779841529s elapsed, will retry)
Jan 10 12:56:55.872: INFO: successfully validated that service endpoint-test2 in namespace services-3969 exposes endpoints map[pod1:[80]] (7.855660762s elapsed)
STEP: Creating pod pod2 in namespace services-3969
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3969 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 10 12:57:00.176: INFO: Unexpected endpoints: found map[79bbc68a-8307-4874-844c-4528cf6a851f:[80]], expected map[pod1:[80] pod2:[80]] (4.280530154s elapsed, will retry)
Jan 10 12:57:03.256: INFO: successfully validated that service endpoint-test2 in namespace services-3969 exposes endpoints map[pod1:[80] pod2:[80]] (7.361024026s elapsed)
STEP: Deleting pod pod1 in namespace services-3969
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3969 to expose endpoints map[pod2:[80]]
Jan 10 12:57:04.332: INFO: successfully validated that service endpoint-test2 in namespace services-3969 exposes endpoints map[pod2:[80]] (1.06722864s elapsed)
STEP: Deleting pod pod2 in namespace services-3969
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3969 to expose endpoints map[]
Jan 10 12:57:05.375: INFO: successfully validated that service endpoint-test2 in namespace services-3969 exposes endpoints map[] (1.022401464s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 12:57:05.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3969" for this suite.
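The endpoint bookkeeping validated above (endpoints appearing as matching pods become Ready, and disappearing when they are deleted) can be reproduced by hand. A minimal, hypothetical equivalent of `endpoint-test2` and `pod1` follows; the names and port come from the log, but the manifest details (selector label, pause image) are assumptions, since the real test builds these objects in Go:

```shell
# Sketch of the Service/Pod pair the Services e2e test creates.
cat > endpoint-test2.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: pod1                      # endpoints appear once a Ready pod matches this
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: pod1
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1     # any container that becomes Ready works
    ports:
    - containerPort: 80
EOF
# Sanity-check the sketch locally: two objects in the file.
grep -c 'kind:' endpoint-test2.yaml
```

Once applied, `kubectl get endpoints endpoint-test2` would move from an empty set to `<pod1-IP>:80`, matching the `map[] → map[pod1:[80]]` transitions the log records.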
Jan 10 12:57:27.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:57:27.955: INFO: namespace services-3969 deletion completed in 22.260560028s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:41.188 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 12:57:27.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 12:57:28.044: INFO: Waiting up to 5m0s for pod "downwardapi-volume-716fc1f6-ffa0-4825-ae5a-6789506fe2b1" in namespace "downward-api-9009" to be "success or failure"
Jan 10 12:57:28.063: INFO: Pod "downwardapi-volume-716fc1f6-ffa0-4825-ae5a-6789506fe2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.567055ms
Jan 10 12:57:30.071: INFO: Pod "downwardapi-volume-716fc1f6-ffa0-4825-ae5a-6789506fe2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026593399s
Jan 10 12:57:32.081: INFO: Pod "downwardapi-volume-716fc1f6-ffa0-4825-ae5a-6789506fe2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036730616s
Jan 10 12:57:34.092: INFO: Pod "downwardapi-volume-716fc1f6-ffa0-4825-ae5a-6789506fe2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047526169s
Jan 10 12:57:36.104: INFO: Pod "downwardapi-volume-716fc1f6-ffa0-4825-ae5a-6789506fe2b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059545778s
STEP: Saw pod success
Jan 10 12:57:36.104: INFO: Pod "downwardapi-volume-716fc1f6-ffa0-4825-ae5a-6789506fe2b1" satisfied condition "success or failure"
Jan 10 12:57:36.109: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-716fc1f6-ffa0-4825-ae5a-6789506fe2b1 container client-container:
STEP: delete the pod
Jan 10 12:57:36.222: INFO: Waiting for pod downwardapi-volume-716fc1f6-ffa0-4825-ae5a-6789506fe2b1 to disappear
Jan 10 12:57:36.237: INFO: Pod downwardapi-volume-716fc1f6-ffa0-4825-ae5a-6789506fe2b1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 12:57:36.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9009" for this suite.
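The DefaultMode behavior this spec checks (every file projected by the volume inheriting one permission mode) can be sketched as a manifest. Everything below is an assumption except the mechanism: the test's actual pod is built in Go, and the mode value, paths, and image here are illustrative:

```shell
# Hypothetical downwardAPI volume with an explicit defaultMode; the projected
# file should come out with exactly these permissions inside the container.
cat > downward-defaultmode.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                # applied to every file in the volume
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
grep -q 'defaultMode: 0400' downward-defaultmode.yaml && echo ok
```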
Jan 10 12:57:42.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:57:42.391: INFO: namespace downward-api-9009 deletion completed in 6.139583718s

• [SLOW TEST:14.436 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 12:57:42.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 12:57:42.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 12:57:50.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5509" for this suite.
Jan 10 12:58:32.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:58:33.102: INFO: namespace pods-5509 deletion completed in 42.19509316s

• [SLOW TEST:50.710 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 12:58:33.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-d4cdd5fb-d92c-4bba-9b5b-c8258fd0af0a in namespace container-probe-9578
Jan 10 12:58:43.256: INFO: Started pod busybox-d4cdd5fb-d92c-4bba-9b5b-c8258fd0af0a in namespace container-probe-9578
STEP: checking the pod's current state and verifying that restartCount is present
Jan 10 12:58:43.261: INFO: Initial restart count of pod busybox-d4cdd5fb-d92c-4bba-9b5b-c8258fd0af0a is 0
STEP: deleting the pod
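The probe configuration this spec watches for roughly four minutes (restartCount must stay 0 because the probed file keeps existing) can be sketched as a manifest. The pod name in the log is generated; everything below is an assumed approximation of the same pattern:

```shell
# Hypothetical pod whose exec liveness probe keeps succeeding: the container
# creates /tmp/health at startup and never removes it, so the kubelet never
# restarts the container.
cat > liveness-exec.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo        # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    args: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # exits 0 while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
grep -q 'livenessProbe' liveness-exec.yaml && echo ok
```

The inverse test (removing `/tmp/health` mid-run) would drive `restartCount` up instead, which is exactly what this spec asserts does not happen here.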
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:02:43.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9578" for this suite.
Jan 10 13:02:49.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:02:49.816: INFO: namespace container-probe-9578 deletion completed in 6.351383333s

• [SLOW TEST:256.712 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:02:49.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 10 13:03:08.062: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 10 13:03:08.075: INFO: Pod pod-with-poststart-http-hook still exists
Jan 10 13:03:10.075: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 10 13:03:10.093: INFO: Pod pod-with-poststart-http-hook still exists
Jan 10 13:03:12.075: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 10 13:03:12.086: INFO: Pod pod-with-poststart-http-hook still exists
Jan 10 13:03:14.075: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 10 13:03:14.087: INFO: Pod pod-with-poststart-http-hook still exists
Jan 10 13:03:16.075: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 10 13:03:16.085: INFO: Pod pod-with-poststart-http-hook still exists
Jan 10 13:03:18.075: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 10 13:03:18.083: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:03:18.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1438" for this suite.
Jan 10 13:03:42.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 10 13:03:42.335: INFO: namespace container-lifecycle-hook-1438 deletion completed in 24.244756122s • [SLOW TEST:52.518 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 10 13:03:42.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 10 13:03:42.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 
--image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-6915' Jan 10 13:03:45.079: INFO: stderr: "" Jan 10 13:03:45.080: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jan 10 13:03:55.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-6915 -o json' Jan 10 13:03:55.267: INFO: stderr: "" Jan 10 13:03:55.267: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-10T13:03:45Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-6915\",\n \"resourceVersion\": \"20022929\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6915/pods/e2e-test-nginx-pod\",\n \"uid\": \"540d9bc1-1e7f-44e7-8fbd-5c71e8f6a391\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-lm5h2\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n 
\"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-lm5h2\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-lm5h2\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-10T13:03:45Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-10T13:03:54Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-10T13:03:54Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-10T13:03:45Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://522ee9de0169a78bc70625cf488aec1e77b12a4867f7efc9aaf0cc03419212f0\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-10T13:03:52Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-10T13:03:45Z\"\n }\n}\n" STEP: replace the image in the pod Jan 10 13:03:55.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6915' Jan 10 13:03:55.830: INFO: stderr: "" Jan 10 13:03:55.831: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Jan 10 13:03:55.844: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6915' Jan 10 13:04:03.735: INFO: stderr: "" Jan 10 13:04:03.736: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 10 13:04:03.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6915" for this suite. Jan 10 13:04:09.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 10 13:04:09.994: INFO: namespace kubectl-6915 deletion completed in 6.243640817s • [SLOW TEST:27.659 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 10 13:04:09.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap 
that has name projected-configmap-test-upd-71a15118-a78e-402f-8146-d9a35c087e17 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-71a15118-a78e-402f-8146-d9a35c087e17 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 10 13:04:21.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5772" for this suite. Jan 10 13:04:43.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 10 13:04:43.696: INFO: namespace projected-5772 deletion completed in 22.159310802s • [SLOW TEST:33.701 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 10 13:04:43.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jan 10 
13:04:43.914: INFO: namespace kubectl-2567 Jan 10 13:04:43.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2567' Jan 10 13:04:44.272: INFO: stderr: "" Jan 10 13:04:44.273: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jan 10 13:04:45.280: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:04:45.280: INFO: Found 0 / 1 Jan 10 13:04:46.285: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:04:46.285: INFO: Found 0 / 1 Jan 10 13:04:47.281: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:04:47.281: INFO: Found 0 / 1 Jan 10 13:04:48.285: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:04:48.285: INFO: Found 0 / 1 Jan 10 13:04:49.282: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:04:49.282: INFO: Found 0 / 1 Jan 10 13:04:50.286: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:04:50.286: INFO: Found 0 / 1 Jan 10 13:04:51.283: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:04:51.284: INFO: Found 0 / 1 Jan 10 13:04:52.287: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:04:52.288: INFO: Found 0 / 1 Jan 10 13:04:53.289: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:04:53.289: INFO: Found 1 / 1 Jan 10 13:04:53.289: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 10 13:04:53.296: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:04:53.296: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 10 13:04:53.296: INFO: wait on redis-master startup in kubectl-2567 Jan 10 13:04:53.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-b76hf redis-master --namespace=kubectl-2567' Jan 10 13:04:53.440: INFO: stderr: "" Jan 10 13:04:53.440: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 10 Jan 13:04:52.774 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Jan 13:04:52.774 # Server started, Redis version 3.2.12\n1:M 10 Jan 13:04:52.775 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Jan 13:04:52.775 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jan 10 13:04:53.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2567' Jan 10 13:04:53.622: INFO: stderr: "" Jan 10 13:04:53.623: INFO: stdout: "service/rm2 exposed\n" Jan 10 13:04:53.636: INFO: Service rm2 in namespace kubectl-2567 found. STEP: exposing service Jan 10 13:04:55.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2567' Jan 10 13:04:56.012: INFO: stderr: "" Jan 10 13:04:56.012: INFO: stdout: "service/rm3 exposed\n" Jan 10 13:04:56.061: INFO: Service rm3 in namespace kubectl-2567 found. 
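The two `kubectl expose` invocations above create plain Services in front of the RC's pods. The first command maps roughly to a manifest like the following (a sketch: the port numbers and names come from the logged flags, while the selector is assumed from the `map[app:redis]` selector the log shows):

```yaml
# Rough equivalent of:
#   kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-2567
spec:
  selector:
    app: redis        # assumed from the map[app:redis] selector in the log
  ports:
  - port: 1234        # service port, from --port
    targetPort: 6379  # container port, from --target-port
```

The second command (`expose service rm2 --name=rm3 --port=2345`) produces the same shape of object again, copying the selector from rm2 and exposing port 2345 against the same target port.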
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 10 13:04:58.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2567" for this suite. Jan 10 13:05:20.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 10 13:05:20.320: INFO: namespace kubectl-2567 deletion completed in 22.230537849s • [SLOW TEST:36.624 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 10 13:05:20.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the 
namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 10 13:05:52.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5022" for this suite. Jan 10 13:05:58.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 10 13:05:59.059: INFO: namespace namespaces-5022 deletion completed in 6.167338472s STEP: Destroying namespace "nsdeletetest-9580" for this suite. Jan 10 13:05:59.062: INFO: Namespace nsdeletetest-9580 was already deleted STEP: Destroying namespace "nsdeletetest-1184" for this suite. Jan 10 13:06:05.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 10 13:06:05.244: INFO: namespace nsdeletetest-1184 deletion completed in 6.1816382s • [SLOW TEST:44.923 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 10 13:06:05.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7369 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jan 10 13:06:05.416: INFO: Found 0 stateful pods, waiting for 3 Jan 10 13:06:15.605: INFO: Found 2 stateful pods, waiting for 3 Jan 10 13:06:25.435: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 10 13:06:25.436: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 10 13:06:25.436: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 10 13:06:35.429: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 10 13:06:35.429: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 10 13:06:35.429: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 10 13:06:35.471: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 10 13:06:45.526: INFO: Updating stateful set ss2 Jan 10 13:06:45.623: INFO: Waiting for Pod statefulset-7369/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c 
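The canary behavior the log describes is driven by the RollingUpdate `partition` field: only pods with an ordinal greater than or equal to the partition are moved to the new revision, which is why only ss2-2 picks up the 1.15-alpine image at first. A partial spec fragment consistent with the logged names (selector and pod labels omitted; replica count and images taken from the log):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test   # the headless service the log creates in statefulset-7369
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2    # canary: only ss2-2 updates; ss2-0 and ss2-1 keep the old revision
  template:
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine
```

The subsequent "phased rolling update" in the log corresponds to lowering `partition` step by step (2 → 1 → 0) so the remaining ordinals roll to the new revision one at a time.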
STEP: Restoring Pods to the correct revision when they are deleted Jan 10 13:06:56.040: INFO: Found 2 stateful pods, waiting for 3 Jan 10 13:07:06.051: INFO: Found 2 stateful pods, waiting for 3 Jan 10 13:07:16.050: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 10 13:07:16.050: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 10 13:07:16.050: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 10 13:07:16.080: INFO: Updating stateful set ss2 Jan 10 13:07:16.112: INFO: Waiting for Pod statefulset-7369/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 10 13:07:26.128: INFO: Waiting for Pod statefulset-7369/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 10 13:07:36.186: INFO: Updating stateful set ss2 Jan 10 13:07:36.226: INFO: Waiting for StatefulSet statefulset-7369/ss2 to complete update Jan 10 13:07:36.226: INFO: Waiting for Pod statefulset-7369/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 10 13:07:46.246: INFO: Waiting for StatefulSet statefulset-7369/ss2 to complete update Jan 10 13:07:46.246: INFO: Waiting for Pod statefulset-7369/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 10 13:07:56.250: INFO: Deleting all statefulset in ns statefulset-7369 Jan 10 13:07:56.256: INFO: Scaling statefulset ss2 to 0 Jan 10 13:08:26.291: INFO: Waiting for statefulset status.replicas updated to 0 Jan 10 13:08:26.300: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 10 13:08:26.333: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "statefulset-7369" for this suite. Jan 10 13:08:34.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 10 13:08:34.510: INFO: namespace statefulset-7369 deletion completed in 8.168980574s • [SLOW TEST:149.265 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 10 13:08:34.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-86c9b484-e913-4269-a6c0-1f5c1da2b499 in namespace container-probe-3316 Jan 10 13:08:44.682: INFO: Started pod liveness-86c9b484-e913-4269-a6c0-1f5c1da2b499 
in namespace container-probe-3316 STEP: checking the pod's current state and verifying that restartCount is present Jan 10 13:08:44.686: INFO: Initial restart count of pod liveness-86c9b484-e913-4269-a6c0-1f5c1da2b499 is 0 Jan 10 13:08:56.769: INFO: Restart count of pod container-probe-3316/liveness-86c9b484-e913-4269-a6c0-1f5c1da2b499 is now 1 (12.0825061s elapsed) Jan 10 13:09:16.918: INFO: Restart count of pod container-probe-3316/liveness-86c9b484-e913-4269-a6c0-1f5c1da2b499 is now 2 (32.231882148s elapsed) Jan 10 13:09:37.026: INFO: Restart count of pod container-probe-3316/liveness-86c9b484-e913-4269-a6c0-1f5c1da2b499 is now 3 (52.339154103s elapsed) Jan 10 13:09:57.148: INFO: Restart count of pod container-probe-3316/liveness-86c9b484-e913-4269-a6c0-1f5c1da2b499 is now 4 (1m12.462108531s elapsed) Jan 10 13:11:03.532: INFO: Restart count of pod container-probe-3316/liveness-86c9b484-e913-4269-a6c0-1f5c1da2b499 is now 5 (2m18.84578018s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 10 13:11:03.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3316" for this suite. 
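The monotonically increasing restart counts above (roughly every 20 seconds, stretching out as the kubelet's restart backoff grows) are what a deliberately failing liveness probe produces. A minimal sketch of a pod with that behavior — the actual probe and image used by the e2e test are not shown in the log, so these are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo   # illustrative name
spec:
  containers:
  - name: liveness
    image: busybox      # assumed image
    # Create the health file, then remove it so the probe starts failing.
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # fails once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5
```

Each probe failure past the failure threshold kills the container, the kubelet restarts it, and `restartCount` ticks up — exactly the sequence the log records.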
Jan 10 13:11:09.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 10 13:11:09.811: INFO: namespace container-probe-3316 deletion completed in 6.209347597s • [SLOW TEST:155.298 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 10 13:11:09.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jan 10 13:11:09.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5948' Jan 10 13:11:10.223: INFO: stderr: "" Jan 10 13:11:10.223: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jan 10 13:11:11.240: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:11:11.240: INFO: Found 0 / 1 Jan 10 13:11:12.244: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:11:12.245: INFO: Found 0 / 1 Jan 10 13:11:13.240: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:11:13.240: INFO: Found 0 / 1 Jan 10 13:11:14.233: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:11:14.234: INFO: Found 0 / 1 Jan 10 13:11:15.233: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:11:15.233: INFO: Found 0 / 1 Jan 10 13:11:16.231: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:11:16.231: INFO: Found 0 / 1 Jan 10 13:11:17.234: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:11:17.234: INFO: Found 1 / 1 Jan 10 13:11:17.234: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 10 13:11:17.251: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:11:17.251: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 10 13:11:17.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-r8ns2 --namespace=kubectl-5948 -p {"metadata":{"annotations":{"x":"y"}}}' Jan 10 13:11:17.391: INFO: stderr: "" Jan 10 13:11:17.391: INFO: stdout: "pod/redis-master-r8ns2 patched\n" STEP: checking annotations Jan 10 13:11:17.401: INFO: Selector matched 1 pods for map[app:redis] Jan 10 13:11:17.402: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 10 13:11:17.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5948" for this suite. 
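The inline `-p {"metadata":{"annotations":{"x":"y"}}}` argument above is a strategic merge patch. The same update can be kept as a patch file; the YAML below is the equivalent content (file name illustrative):

```yaml
# patch.yaml -- apply with:
#   kubectl patch pod <pod-name> --patch-file patch.yaml   # kubectl 1.19+;
# older kubectl (like the 1.15 client in this run) takes the same content inline via -p.
metadata:
  annotations:
    x: "y"
```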
Jan 10 13:11:39.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 10 13:11:39.587: INFO: namespace kubectl-5948 deletion completed in 22.178221668s • [SLOW TEST:29.775 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 10 13:11:39.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-cab931f8-88ec-4359-bade-c12642b05293 in namespace container-probe-4627 Jan 10 13:11:47.692: INFO: Started pod test-webserver-cab931f8-88ec-4359-bade-c12642b05293 in namespace container-probe-4627 STEP: checking the pod's current state and verifying that restartCount is present Jan 10 13:11:47.696: INFO: Initial restart 
count of pod test-webserver-cab931f8-88ec-4359-bade-c12642b05293 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 10 13:15:49.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4627" for this suite. Jan 10 13:15:55.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 10 13:15:55.671: INFO: namespace container-probe-4627 deletion completed in 6.193753913s • [SLOW TEST:256.082 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 10 13:15:55.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one 
of its pods change Jan 10 13:16:04.915: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 10 13:16:05.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-285" for this suite. Jan 10 13:16:28.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 10 13:16:28.165: INFO: namespace replicaset-285 deletion completed in 22.184718037s • [SLOW TEST:32.492 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 10 13:16:28.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-5580 STEP: Waiting for pods to come up. 
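The `"prestop": 1` count the server reports below comes from a container lifecycle `preStop` hook: the tester pod is given a hook that calls back to the server when the tester is deleted. A hedged sketch of that shape — the real e2e manifest, image, and callback URL are not shown in the log, so these are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tester   # matches the tester pod name in the log
spec:
  containers:
  - name: tester
    image: busybox   # assumed image
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          # On deletion, report to the server pod before this container stops.
          # The address is illustrative; the e2e test targets the server pod's IP.
          command: ["wget", "-qO-", "http://server:8080/prestop"]
```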
STEP: Creating tester pod tester in namespace prestop-5580 STEP: Deleting pre-stop pod Jan 10 13:16:49.423: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 10 13:16:49.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5580" for this suite. Jan 10 13:17:27.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 10 13:17:27.697: INFO: namespace prestop-5580 deletion completed in 38.238448948s • [SLOW TEST:59.531 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 10 13:17:27.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller 
STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 10 13:17:27.815: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 10 13:17:31.197: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 10 13:17:31.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1751" for this suite. 
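The quota that triggers the failure condition above can be sketched as a ResourceQuota capping the namespace at two pods (the cap and object name come from the log; everything else is boilerplate):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"   # an RC asking for more replicas than this surfaces a ReplicaFailure condition
```

Once the rc "condition-test" requests more replicas than the quota permits, the replication controller records the failure condition; scaling the RC down within the quota clears it, as the log verifies.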
Jan 10 13:17:43.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 10 13:17:43.569: INFO: namespace replication-controller-1751 deletion completed in 12.301215372s • [SLOW TEST:15.872 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 10 13:17:43.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 10 13:17:43.815: INFO: Waiting up to 5m0s for pod "pod-0b5cd29e-f659-4c2e-9db1-85f2feeac2f0" in namespace "emptydir-8940" to be "success or failure" Jan 10 13:17:43.830: INFO: Pod "pod-0b5cd29e-f659-4c2e-9db1-85f2feeac2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.964906ms Jan 10 13:17:45.849: INFO: Pod "pod-0b5cd29e-f659-4c2e-9db1-85f2feeac2f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033465988s Jan 10 13:17:47.862: INFO: Pod "pod-0b5cd29e-f659-4c2e-9db1-85f2feeac2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046767975s Jan 10 13:17:49.877: INFO: Pod "pod-0b5cd29e-f659-4c2e-9db1-85f2feeac2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061945668s Jan 10 13:17:51.886: INFO: Pod "pod-0b5cd29e-f659-4c2e-9db1-85f2feeac2f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07076366s STEP: Saw pod success Jan 10 13:17:51.886: INFO: Pod "pod-0b5cd29e-f659-4c2e-9db1-85f2feeac2f0" satisfied condition "success or failure" Jan 10 13:17:51.890: INFO: Trying to get logs from node iruya-node pod pod-0b5cd29e-f659-4c2e-9db1-85f2feeac2f0 container test-container: STEP: delete the pod Jan 10 13:17:51.953: INFO: Waiting for pod pod-0b5cd29e-f659-4c2e-9db1-85f2feeac2f0 to disappear Jan 10 13:17:51.956: INFO: Pod pod-0b5cd29e-f659-4c2e-9db1-85f2feeac2f0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 10 13:17:51.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8940" for this suite. 
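The "(non-root,0666,tmpfs)" case above exercises a memory-backed emptyDir mounted into a pod running as a non-root user. A sketch of the volume shape (the UID and image are illustrative; the 0666 file mode is something the test container writes and verifies, not a volume field):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # illustrative name
spec:
  securityContext:
    runAsUser: 1001           # the "non-root" part of the test matrix (UID illustrative)
  containers:
  - name: test-container
    image: busybox            # assumed image
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory          # the "tmpfs" part: RAM-backed emptyDir
```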
Jan 10 13:17:57.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:17:58.116: INFO: namespace emptydir-8940 deletion completed in 6.15627623s

• [SLOW TEST:14.546 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:17:58.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-4qjj
STEP: Creating a pod to test atomic-volume-subpath
Jan 10 13:17:58.318: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4qjj" in namespace "subpath-2775" to be "success or failure"
Jan 10 13:17:58.331: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Pending", Reason="", readiness=false. Elapsed: 13.235916ms
Jan 10 13:18:00.341: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022807237s
Jan 10 13:18:02.351: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033185039s
Jan 10 13:18:04.364: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045952841s
Jan 10 13:18:06.372: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Running", Reason="", readiness=true. Elapsed: 8.054426906s
Jan 10 13:18:08.381: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Running", Reason="", readiness=true. Elapsed: 10.062965682s
Jan 10 13:18:10.394: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Running", Reason="", readiness=true. Elapsed: 12.076005712s
Jan 10 13:18:12.404: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Running", Reason="", readiness=true. Elapsed: 14.085558289s
Jan 10 13:18:14.412: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Running", Reason="", readiness=true. Elapsed: 16.093836686s
Jan 10 13:18:16.421: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Running", Reason="", readiness=true. Elapsed: 18.103237838s
Jan 10 13:18:18.433: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Running", Reason="", readiness=true. Elapsed: 20.115399328s
Jan 10 13:18:20.445: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Running", Reason="", readiness=true. Elapsed: 22.127061289s
Jan 10 13:18:22.455: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Running", Reason="", readiness=true. Elapsed: 24.137330849s
Jan 10 13:18:24.472: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Running", Reason="", readiness=true. Elapsed: 26.154189948s
Jan 10 13:18:26.491: INFO: Pod "pod-subpath-test-configmap-4qjj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.173224975s
STEP: Saw pod success
Jan 10 13:18:26.492: INFO: Pod "pod-subpath-test-configmap-4qjj" satisfied condition "success or failure"
Jan 10 13:18:26.501: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-4qjj container test-container-subpath-configmap-4qjj:
STEP: delete the pod
Jan 10 13:18:26.610: INFO: Waiting for pod pod-subpath-test-configmap-4qjj to disappear
Jan 10 13:18:26.648: INFO: Pod pod-subpath-test-configmap-4qjj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4qjj
Jan 10 13:18:26.648: INFO: Deleting pod "pod-subpath-test-configmap-4qjj" in namespace "subpath-2775"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:18:26.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2775" for this suite.
Jan 10 13:18:32.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:18:32.824: INFO: namespace subpath-2775 deletion completed in 6.159916777s

• [SLOW TEST:34.707 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:18:32.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-b478e248-1e43-474f-938d-5112681fd6e7
STEP: Creating configMap with name cm-test-opt-upd-347c3472-a353-4ecf-8e99-b76bed2ab24e
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b478e248-1e43-474f-938d-5112681fd6e7
STEP: Updating configmap cm-test-opt-upd-347c3472-a353-4ecf-8e99-b76bed2ab24e
STEP: Creating configMap with name cm-test-opt-create-a301f492-2c97-423f-aee6-8f56943f61a0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:18:47.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1950" for this suite.
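The "optional updates should be reflected in volume" test relies on the kubelet's atomic-writer volumes: each update is materialized in a fresh timestamped directory and published with a single symlink swap, so a consumer never sees a half-written mix of old and new keys. The sketch below is a rough approximation of that pattern (the `atomic_update` helper and its exact file layout are simplifications modeled on the kubelet's `..data` convention, not its actual API):

```python
import os
import tempfile

def atomic_update(volume_root: str, files: dict) -> None:
    """Write all files into a fresh payload dir, then publish them at once.

    The new payload goes into a hidden timestamped directory; a single
    rename of the "..data" symlink makes every file visible atomically.
    """
    ts_dir = tempfile.mkdtemp(prefix="..ts_", dir=volume_root)
    for name, content in files.items():
        with open(os.path.join(ts_dir, name), "w") as f:
            f.write(content)
    tmp_link = os.path.join(volume_root, "..data_tmp")
    if os.path.lexists(tmp_link):
        os.remove(tmp_link)
    os.symlink(os.path.basename(ts_dir), tmp_link)
    # os.replace is a rename(2): swapping the "..data" symlink is atomic.
    os.replace(tmp_link, os.path.join(volume_root, "..data"))
    # User-visible names are stable symlinks through "..data".
    for name in files:
        user_path = os.path.join(volume_root, name)
        if not os.path.lexists(user_path):
            os.symlink(os.path.join("..data", name), user_path)

root = tempfile.mkdtemp()
atomic_update(root, {"key": "v1"})
atomic_update(root, {"key": "v2"})
print(open(os.path.join(root, "key")).read())  # v2
```

Because readers only ever traverse the `..data` symlink, an update is either fully visible or not visible at all, which is exactly what the test waits to observe in the volume.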
Jan 10 13:19:09.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:19:09.683: INFO: namespace projected-1950 deletion completed in 22.204618094s

• [SLOW TEST:36.857 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:19:09.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 13:19:09.883: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 10 13:19:14.896: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 10 13:19:18.917: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 10 13:19:19.266: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5189,SelfLink:/apis/apps/v1/namespaces/deployment-5189/deployments/test-cleanup-deployment,UID:29538981-371e-4604-9376-0952a75a2c55,ResourceVersion:20024913,Generation:1,CreationTimestamp:2020-01-10 13:19:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
Jan 10 13:19:19.284: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5189,SelfLink:/apis/apps/v1/namespaces/deployment-5189/replicasets/test-cleanup-deployment-55bbcbc84c,UID:d7f2ae95-50c0-4185-9563-dd4ed3aa4a93,ResourceVersion:20024915,Generation:1,CreationTimestamp:2020-01-10 13:19:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 
29538981-371e-4604-9376-0952a75a2c55 0xc003203f77 0xc003203f78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 10 13:19:19.284: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 10 13:19:19.284: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-5189,SelfLink:/apis/apps/v1/namespaces/deployment-5189/replicasets/test-cleanup-controller,UID:a6c2b53c-4b69-48e2-8adb-38b54f47af14,ResourceVersion:20024914,Generation:1,CreationTimestamp:2020-01-10 13:19:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 29538981-371e-4604-9376-0952a75a2c55 0xc003203da7 0xc003203da8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 10 13:19:19.296: INFO: Pod "test-cleanup-controller-7h6md" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-7h6md,GenerateName:test-cleanup-controller-,Namespace:deployment-5189,SelfLink:/api/v1/namespaces/deployment-5189/pods/test-cleanup-controller-7h6md,UID:f6e94316-e469-405e-a58a-b478155aedb7,ResourceVersion:20024909,Generation:0,CreationTimestamp:2020-01-10 13:19:09 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller a6c2b53c-4b69-48e2-8adb-38b54f47af14 0xc0024ec88f 0xc0024ec8a0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gpjwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gpjwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gpjwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024ec920} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024ec940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:19:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:19:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:19:17 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:19:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-10 13:19:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 13:19:16 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://deffbd1cfe8d93e0bb67817711fbe4d06c251883f2877d200fc94df95ccbb689}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 10 13:19:19.296: INFO: Pod "test-cleanup-deployment-55bbcbc84c-vgqpk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-vgqpk,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5189,SelfLink:/api/v1/namespaces/deployment-5189/pods/test-cleanup-deployment-55bbcbc84c-vgqpk,UID:c56a7184-5435-40e1-9871-3eff18c18542,ResourceVersion:20024921,Generation:0,CreationTimestamp:2020-01-10 13:19:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c d7f2ae95-50c0-4185-9563-dd4ed3aa4a93 0xc000474287 0xc000474288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gpjwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gpjwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-gpjwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000475470} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004754e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:19:19 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:19:19.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5189" for this suite.
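The deployment dump above shows `RevisionHistoryLimit:*0`, which is why this test expects the old `test-cleanup-controller` ReplicaSet to be removed as soon as the new one rolls out. The cleanup policy can be sketched as follows (a simplification: the real controller also skips old ReplicaSets that still have replicas or a pending deletion):

```python
def prune_old_replica_sets(old_replica_sets, revision_history_limit):
    """Keep only the newest `revision_history_limit` old ReplicaSets.

    Each ReplicaSet is a (name, revision) pair; the controller sorts the
    old ReplicaSets by revision and deletes everything beyond the limit.
    """
    by_revision = sorted(old_replica_sets, key=lambda rs: rs[1])
    surplus = max(len(by_revision) - revision_history_limit, 0)
    to_delete = by_revision[:surplus]
    to_keep = by_revision[surplus:]
    return to_keep, to_delete

# With revisionHistoryLimit: 0, every old ReplicaSet is deleted --
# matching the "deployment should delete old replica sets" expectation.
keep, delete = prune_old_replica_sets([("test-cleanup-controller", 1)], 0)
print(keep, delete)  # [] [('test-cleanup-controller', 1)]
```

A limit of 0 is the aggressive end of the spectrum; the apps/v1 default keeps the 10 most recent revisions so rollbacks remain possible.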
Jan 10 13:19:25.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:19:25.593: INFO: namespace deployment-5189 deletion completed in 6.151911848s

• [SLOW TEST:15.910 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:19:25.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 13:19:25.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-032d73ec-1805-40b9-812f-3148a86a7176" in namespace "projected-3012" to be "success or failure"
Jan 10 13:19:25.729: INFO: Pod "downwardapi-volume-032d73ec-1805-40b9-812f-3148a86a7176": Phase="Pending", Reason="", readiness=false. Elapsed: 52.009939ms
Jan 10 13:19:27.738: INFO: Pod "downwardapi-volume-032d73ec-1805-40b9-812f-3148a86a7176": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061542597s
Jan 10 13:19:29.752: INFO: Pod "downwardapi-volume-032d73ec-1805-40b9-812f-3148a86a7176": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075159445s
Jan 10 13:19:31.761: INFO: Pod "downwardapi-volume-032d73ec-1805-40b9-812f-3148a86a7176": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084228914s
Jan 10 13:19:33.789: INFO: Pod "downwardapi-volume-032d73ec-1805-40b9-812f-3148a86a7176": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112349872s
Jan 10 13:19:35.801: INFO: Pod "downwardapi-volume-032d73ec-1805-40b9-812f-3148a86a7176": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.124034386s
STEP: Saw pod success
Jan 10 13:19:35.801: INFO: Pod "downwardapi-volume-032d73ec-1805-40b9-812f-3148a86a7176" satisfied condition "success or failure"
Jan 10 13:19:35.807: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-032d73ec-1805-40b9-812f-3148a86a7176 container client-container:
STEP: delete the pod
Jan 10 13:19:35.911: INFO: Waiting for pod downwardapi-volume-032d73ec-1805-40b9-812f-3148a86a7176 to disappear
Jan 10 13:19:35.919: INFO: Pod downwardapi-volume-032d73ec-1805-40b9-812f-3148a86a7176 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:19:35.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3012" for this suite.
Jan 10 13:19:41.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:19:42.081: INFO: namespace projected-3012 deletion completed in 6.150882872s

• [SLOW TEST:16.486 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:19:42.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-3ed9810c-4488-4033-9e85-f4578e1bffea
STEP: Creating a pod to test consume secrets
Jan 10 13:19:42.230: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c56a6581-b280-4b98-aa7b-03ef575e1f27" in namespace "projected-7997" to be "success or failure"
Jan 10 13:19:42.243: INFO: Pod "pod-projected-secrets-c56a6581-b280-4b98-aa7b-03ef575e1f27": Phase="Pending", Reason="", readiness=false. Elapsed: 13.03994ms
Jan 10 13:19:44.248: INFO: Pod "pod-projected-secrets-c56a6581-b280-4b98-aa7b-03ef575e1f27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018161061s
Jan 10 13:19:46.281: INFO: Pod "pod-projected-secrets-c56a6581-b280-4b98-aa7b-03ef575e1f27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051095263s
Jan 10 13:19:48.291: INFO: Pod "pod-projected-secrets-c56a6581-b280-4b98-aa7b-03ef575e1f27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061719637s
Jan 10 13:19:50.317: INFO: Pod "pod-projected-secrets-c56a6581-b280-4b98-aa7b-03ef575e1f27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087282492s
Jan 10 13:19:52.324: INFO: Pod "pod-projected-secrets-c56a6581-b280-4b98-aa7b-03ef575e1f27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093876341s
STEP: Saw pod success
Jan 10 13:19:52.324: INFO: Pod "pod-projected-secrets-c56a6581-b280-4b98-aa7b-03ef575e1f27" satisfied condition "success or failure"
Jan 10 13:19:52.345: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c56a6581-b280-4b98-aa7b-03ef575e1f27 container projected-secret-volume-test:
STEP: delete the pod
Jan 10 13:19:52.386: INFO: Waiting for pod pod-projected-secrets-c56a6581-b280-4b98-aa7b-03ef575e1f27 to disappear
Jan 10 13:19:52.390: INFO: Pod pod-projected-secrets-c56a6581-b280-4b98-aa7b-03ef575e1f27 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:19:52.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7997" for this suite.
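The downward API volume tests in this run ("memory limit" and "memory request") both check the value the plugin writes into the volume file: the container's resource quantity divided by the `resourceFieldRef` divisor, rounded up to an integer. A sketch of that conversion (the function name is made up; the round-up behavior is the assumed semantics of quantity scaling):

```python
import math

def resource_field_value(quantity_bytes: int, divisor_bytes: int = 1) -> int:
    """Value exposed via a downward API resourceFieldRef: the resource
    quantity divided by the divisor, rounded up to a whole integer."""
    return math.ceil(quantity_bytes / divisor_bytes)

# A 64Mi memory limit exposed with divisor "1Mi" shows up as "64";
# with the default divisor of 1 it shows up as the raw byte count.
print(resource_field_value(64 * 1024 * 1024, 1024 * 1024))  # 64
print(resource_field_value(64 * 1024 * 1024))               # 67108864
```

Choosing a divisor that matches the unit your application expects (e.g. `1Mi` for megabytes) avoids parsing large byte counts inside the container.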
Jan 10 13:19:58.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:19:58.580: INFO: namespace projected-7997 deletion completed in 6.182874074s

• [SLOW TEST:16.500 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:19:58.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 10 13:20:05.844: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:20:05.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9283" for this suite.
Jan 10 13:20:12.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:20:12.103: INFO: namespace container-runtime-9283 deletion completed in 6.121561912s

• [SLOW TEST:13.522 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:20:12.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 10 13:20:12.156: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:20:26.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3547" for this suite.
Jan 10 13:20:48.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:20:49.068: INFO: namespace init-container-3547 deletion completed in 22.200806711s

• [SLOW TEST:36.965 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:20:49.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 13:20:49.183: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 10 13:20:49.196: INFO: Number of nodes with available pods: 0
Jan 10 13:20:49.196: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 10 13:20:49.349: INFO: Number of nodes with available pods: 0
Jan 10 13:20:49.349: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:20:50.361: INFO: Number of nodes with available pods: 0
Jan 10 13:20:50.361: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:20:51.357: INFO: Number of nodes with available pods: 0
Jan 10 13:20:51.357: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:20:52.360: INFO: Number of nodes with available pods: 0
Jan 10 13:20:52.360: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:20:53.360: INFO: Number of nodes with available pods: 0
Jan 10 13:20:53.360: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:20:54.360: INFO: Number of nodes with available pods: 0
Jan 10 13:20:54.360: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:20:55.411: INFO: Number of nodes with available pods: 0
Jan 10 13:20:55.411: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:20:56.364: INFO: Number of nodes with available pods: 0
Jan 10 13:20:56.365: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:20:57.363: INFO: Number of nodes with available pods: 1
Jan 10 13:20:57.363: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 10 13:20:57.506: INFO: Number of nodes with available pods: 1
Jan 10 13:20:57.506: INFO: Number of running nodes: 0, number of available pods: 1
Jan 10 13:20:58.527: INFO: Number of nodes with available pods: 0
Jan 10 13:20:58.527: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 10 13:20:58.546: INFO: Number of nodes with available pods: 0
Jan 10 13:20:58.546: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:20:59.557: INFO: Number of nodes with available pods: 0
Jan 10 13:20:59.557: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:00.567: INFO: Number of nodes with available pods: 0
Jan 10 13:21:00.567: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:01.557: INFO: Number of nodes with available pods: 0
Jan 10 13:21:01.557: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:02.571: INFO: Number of nodes with available pods: 0
Jan 10 13:21:02.571: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:03.558: INFO: Number of nodes with available pods: 0
Jan 10 13:21:03.558: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:04.569: INFO: Number of nodes with available pods: 0
Jan 10 13:21:04.569: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:05.556: INFO: Number of nodes with available pods: 0
Jan 10 13:21:05.557: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:06.622: INFO: Number of nodes with available pods: 0
Jan 10 13:21:06.622: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:07.556: INFO: Number of nodes with available pods: 0
Jan 10 13:21:07.556: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:08.560: INFO: Number of nodes with available pods: 0
Jan 10 13:21:08.560: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:09.556: INFO: Number of nodes with available pods: 0
Jan 10 13:21:09.556: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:10.562: INFO: Number of nodes with available pods: 0
Jan 10 13:21:10.562: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:11.644: INFO: Number of nodes with available pods: 0
Jan 10 13:21:11.645: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:12.581: INFO: Number of nodes with available pods: 0
Jan 10 13:21:12.582: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:21:13.560: INFO: Number of nodes with available pods: 1
Jan 10 13:21:13.560: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4546, will wait for the garbage collector to delete the pods
Jan 10 13:21:13.659: INFO: Deleting DaemonSet.extensions daemon-set took: 33.502813ms
Jan 10 13:21:13.960: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.101127ms
Jan 10 13:21:26.575: INFO: Number of nodes with available pods: 0
Jan 10 13:21:26.575: INFO: Number of running nodes: 0, number of available pods: 0
Jan 10 13:21:26.585: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4546/daemonsets","resourceVersion":"20025279"},"items":null}
Jan 10 13:21:26.589: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4546/pods","resourceVersion":"20025279"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:21:26.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4546" for this suite.
Jan 10 13:21:32.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:21:32.817: INFO: namespace daemonsets-4546 deletion completed in 6.174140741s
• [SLOW TEST:43.748 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:21:32.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-6e7a84de-c133-4279-8df5-19aaf8162037
STEP: Creating a pod to test consume secrets
Jan 10 13:21:32.944: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-897681c1-d07a-45c2-a76e-977292805265" in namespace "projected-5464" to be "success or failure"
Jan 10 13:21:32.958: INFO: Pod "pod-projected-secrets-897681c1-d07a-45c2-a76e-977292805265": Phase="Pending", Reason="", readiness=false. Elapsed: 13.72868ms
Jan 10 13:21:34.966: INFO: Pod "pod-projected-secrets-897681c1-d07a-45c2-a76e-977292805265": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021814305s
Jan 10 13:21:36.974: INFO: Pod "pod-projected-secrets-897681c1-d07a-45c2-a76e-977292805265": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029244747s
Jan 10 13:21:38.996: INFO: Pod "pod-projected-secrets-897681c1-d07a-45c2-a76e-977292805265": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051107115s
Jan 10 13:21:41.006: INFO: Pod "pod-projected-secrets-897681c1-d07a-45c2-a76e-977292805265": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061907914s
STEP: Saw pod success
Jan 10 13:21:41.006: INFO: Pod "pod-projected-secrets-897681c1-d07a-45c2-a76e-977292805265" satisfied condition "success or failure"
Jan 10 13:21:41.009: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-897681c1-d07a-45c2-a76e-977292805265 container projected-secret-volume-test:
STEP: delete the pod
Jan 10 13:21:41.060: INFO: Waiting for pod pod-projected-secrets-897681c1-d07a-45c2-a76e-977292805265 to disappear
Jan 10 13:21:41.073: INFO: Pod pod-projected-secrets-897681c1-d07a-45c2-a76e-977292805265 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:21:41.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5464" for this suite.
Jan 10 13:21:47.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:21:47.240: INFO: namespace projected-5464 deletion completed in 6.159115735s
• [SLOW TEST:14.422 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:21:47.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 13:21:47.339: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca249bc0-4203-4185-8e7a-c19d010bed75" in namespace "projected-581" to be "success or failure"
Jan 10 13:21:47.361: INFO: Pod "downwardapi-volume-ca249bc0-4203-4185-8e7a-c19d010bed75": Phase="Pending", Reason="", readiness=false. Elapsed: 21.422072ms
Jan 10 13:21:49.375: INFO: Pod "downwardapi-volume-ca249bc0-4203-4185-8e7a-c19d010bed75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035313453s
Jan 10 13:21:51.383: INFO: Pod "downwardapi-volume-ca249bc0-4203-4185-8e7a-c19d010bed75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04410548s
Jan 10 13:21:53.392: INFO: Pod "downwardapi-volume-ca249bc0-4203-4185-8e7a-c19d010bed75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052716401s
Jan 10 13:21:55.402: INFO: Pod "downwardapi-volume-ca249bc0-4203-4185-8e7a-c19d010bed75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063200345s
STEP: Saw pod success
Jan 10 13:21:55.403: INFO: Pod "downwardapi-volume-ca249bc0-4203-4185-8e7a-c19d010bed75" satisfied condition "success or failure"
Jan 10 13:21:55.408: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ca249bc0-4203-4185-8e7a-c19d010bed75 container client-container:
STEP: delete the pod
Jan 10 13:21:55.514: INFO: Waiting for pod downwardapi-volume-ca249bc0-4203-4185-8e7a-c19d010bed75 to disappear
Jan 10 13:21:55.574: INFO: Pod downwardapi-volume-ca249bc0-4203-4185-8e7a-c19d010bed75 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:21:55.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-581" for this suite.
Jan 10 13:22:01.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:22:01.833: INFO: namespace projected-581 deletion completed in 6.228127141s
• [SLOW TEST:14.592 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:22:01.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 10 13:22:01.966: INFO: Waiting up to 5m0s for pod "pod-f380c027-205c-4855-b71e-c973f4d7a46e" in namespace "emptydir-3399" to be "success or failure"
Jan 10 13:22:01.976: INFO: Pod "pod-f380c027-205c-4855-b71e-c973f4d7a46e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.369812ms
Jan 10 13:22:03.987: INFO: Pod "pod-f380c027-205c-4855-b71e-c973f4d7a46e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021193831s
Jan 10 13:22:05.999: INFO: Pod "pod-f380c027-205c-4855-b71e-c973f4d7a46e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032576164s
Jan 10 13:22:08.010: INFO: Pod "pod-f380c027-205c-4855-b71e-c973f4d7a46e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043907088s
Jan 10 13:22:10.019: INFO: Pod "pod-f380c027-205c-4855-b71e-c973f4d7a46e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052870172s
STEP: Saw pod success
Jan 10 13:22:10.019: INFO: Pod "pod-f380c027-205c-4855-b71e-c973f4d7a46e" satisfied condition "success or failure"
Jan 10 13:22:10.023: INFO: Trying to get logs from node iruya-node pod pod-f380c027-205c-4855-b71e-c973f4d7a46e container test-container:
STEP: delete the pod
Jan 10 13:22:10.104: INFO: Waiting for pod pod-f380c027-205c-4855-b71e-c973f4d7a46e to disappear
Jan 10 13:22:10.114: INFO: Pod pod-f380c027-205c-4855-b71e-c973f4d7a46e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:22:10.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3399" for this suite.
Jan 10 13:22:16.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:22:16.412: INFO: namespace emptydir-3399 deletion completed in 6.24044813s
• [SLOW TEST:14.577 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:22:16.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6837
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 10 13:22:16.558: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 10 13:22:46.735: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-6837 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 13:22:46.736: INFO: >>> kubeConfig: /root/.kube/config
I0110 13:22:46.817402 8 log.go:172] (0xc00075af20) (0xc000a30500) Create stream
I0110 13:22:46.817554 8 log.go:172] (0xc00075af20) (0xc000a30500) Stream added, broadcasting: 1
I0110 13:22:46.825934 8 log.go:172] (0xc00075af20) Reply frame received for 1
I0110 13:22:46.825975 8 log.go:172] (0xc00075af20) (0xc000a305a0) Create stream
I0110 13:22:46.825985 8 log.go:172] (0xc00075af20) (0xc000a305a0) Stream added, broadcasting: 3
I0110 13:22:46.829531 8 log.go:172] (0xc00075af20) Reply frame received for 3
I0110 13:22:46.829624 8 log.go:172] (0xc00075af20) (0xc000a30640) Create stream
I0110 13:22:46.829640 8 log.go:172] (0xc00075af20) (0xc000a30640) Stream added, broadcasting: 5
I0110 13:22:46.831599 8 log.go:172] (0xc00075af20) Reply frame received for 5
I0110 13:22:47.126265 8 log.go:172] (0xc00075af20) Data frame received for 3
I0110 13:22:47.126418 8 log.go:172] (0xc000a305a0) (3) Data frame handling
I0110 13:22:47.126503 8 log.go:172] (0xc000a305a0) (3) Data frame sent
I0110 13:22:47.269945 8 log.go:172] (0xc00075af20) (0xc000a305a0) Stream removed, broadcasting: 3
I0110 13:22:47.270171 8 log.go:172] (0xc00075af20) (0xc000a30640) Stream removed, broadcasting: 5
I0110 13:22:47.270284 8 log.go:172] (0xc00075af20) Data frame received for 1
I0110 13:22:47.270315 8 log.go:172] (0xc000a30500) (1) Data frame handling
I0110 13:22:47.270363 8 log.go:172] (0xc000a30500) (1) Data frame sent
I0110 13:22:47.270388 8 log.go:172] (0xc00075af20) (0xc000a30500) Stream removed, broadcasting: 1
I0110 13:22:47.270412 8 log.go:172] (0xc00075af20) Go away received
I0110 13:22:47.271159 8 log.go:172] (0xc00075af20) (0xc000a30500) Stream removed, broadcasting: 1
I0110 13:22:47.271184 8 log.go:172] (0xc00075af20) (0xc000a305a0) Stream removed, broadcasting: 3
I0110 13:22:47.271199 8 log.go:172] (0xc00075af20) (0xc000a30640) Stream removed, broadcasting: 5
Jan 10 13:22:47.271: INFO: Waiting for endpoints: map[]
Jan 10 13:22:47.279: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-6837 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 13:22:47.279: INFO: >>> kubeConfig: /root/.kube/config
I0110 13:22:47.348750 8 log.go:172] (0xc000625ce0) (0xc0001b86e0) Create stream
I0110 13:22:47.348825 8 log.go:172] (0xc000625ce0) (0xc0001b86e0) Stream added, broadcasting: 1
I0110 13:22:47.355063 8 log.go:172] (0xc000625ce0) Reply frame received for 1
I0110 13:22:47.355100 8 log.go:172] (0xc000625ce0) (0xc001688460) Create stream
I0110 13:22:47.355111 8 log.go:172] (0xc000625ce0) (0xc001688460) Stream added, broadcasting: 3
I0110 13:22:47.358095 8 log.go:172] (0xc000625ce0) Reply frame received for 3
I0110 13:22:47.358287 8 log.go:172] (0xc000625ce0) (0xc0001b8a00) Create stream
I0110 13:22:47.358308 8 log.go:172] (0xc000625ce0) (0xc0001b8a00) Stream added, broadcasting: 5
I0110 13:22:47.361170 8 log.go:172] (0xc000625ce0) Reply frame received for 5
I0110 13:22:47.485961 8 log.go:172] (0xc000625ce0) Data frame received for 3
I0110 13:22:47.486527 8 log.go:172] (0xc001688460) (3) Data frame handling
I0110 13:22:47.486673 8 log.go:172] (0xc001688460) (3) Data frame sent
I0110 13:22:47.629324 8 log.go:172] (0xc000625ce0) (0xc001688460) Stream removed, broadcasting: 3
I0110 13:22:47.629791 8 log.go:172] (0xc000625ce0) Data frame received for 1
I0110 13:22:47.629885 8 log.go:172] (0xc000625ce0) (0xc0001b8a00) Stream removed, broadcasting: 5
I0110 13:22:47.629990 8 log.go:172] (0xc0001b86e0) (1) Data frame handling
I0110 13:22:47.630035 8 log.go:172] (0xc0001b86e0) (1) Data frame sent
I0110 13:22:47.630057 8 log.go:172] (0xc000625ce0) (0xc0001b86e0) Stream removed, broadcasting: 1
I0110 13:22:47.630091 8 log.go:172] (0xc000625ce0) Go away received
I0110 13:22:47.630621 8 log.go:172] (0xc000625ce0) (0xc0001b86e0) Stream removed, broadcasting: 1
I0110 13:22:47.630637 8 log.go:172] (0xc000625ce0) (0xc001688460) Stream removed, broadcasting: 3
I0110 13:22:47.630655 8 log.go:172] (0xc000625ce0) (0xc0001b8a00) Stream removed, broadcasting: 5
Jan 10 13:22:47.631: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:22:47.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6837" for this suite.
Jan 10 13:23:11.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:23:11.887: INFO: namespace pod-network-test-6837 deletion completed in 24.22229298s
• [SLOW TEST:55.474 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:23:11.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-f3b7c5d5-bd4f-4381-b1db-2e4bcfe575f0
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-f3b7c5d5-bd4f-4381-b1db-2e4bcfe575f0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:23:22.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7515" for this suite.
Jan 10 13:23:44.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:23:44.388: INFO: namespace configmap-7515 deletion completed in 22.115293208s
• [SLOW TEST:32.500 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:23:44.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 13:23:44.441: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b5430b3-c4c3-4ff1-bdeb-293412401c9d" in namespace "projected-1645" to be "success or failure"
Jan 10 13:23:44.478: INFO: Pod "downwardapi-volume-4b5430b3-c4c3-4ff1-bdeb-293412401c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 36.441091ms
Jan 10 13:23:46.490: INFO: Pod "downwardapi-volume-4b5430b3-c4c3-4ff1-bdeb-293412401c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048863453s
Jan 10 13:23:48.503: INFO: Pod "downwardapi-volume-4b5430b3-c4c3-4ff1-bdeb-293412401c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061610584s
Jan 10 13:23:50.524: INFO: Pod "downwardapi-volume-4b5430b3-c4c3-4ff1-bdeb-293412401c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082401018s
Jan 10 13:23:52.542: INFO: Pod "downwardapi-volume-4b5430b3-c4c3-4ff1-bdeb-293412401c9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100106749s
STEP: Saw pod success
Jan 10 13:23:52.542: INFO: Pod "downwardapi-volume-4b5430b3-c4c3-4ff1-bdeb-293412401c9d" satisfied condition "success or failure"
Jan 10 13:23:52.547: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4b5430b3-c4c3-4ff1-bdeb-293412401c9d container client-container:
STEP: delete the pod
Jan 10 13:23:52.651: INFO: Waiting for pod downwardapi-volume-4b5430b3-c4c3-4ff1-bdeb-293412401c9d to disappear
Jan 10 13:23:52.662: INFO: Pod downwardapi-volume-4b5430b3-c4c3-4ff1-bdeb-293412401c9d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:23:52.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1645" for this suite.
Jan 10 13:23:58.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:23:58.784: INFO: namespace projected-1645 deletion completed in 6.116496613s
• [SLOW TEST:14.396 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:23:58.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan 10 13:23:58.961: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix855764434/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:23:59.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5509" for this suite.
Jan 10 13:24:05.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:24:05.254: INFO: namespace kubectl-5509 deletion completed in 6.23170741s
• [SLOW TEST:6.469 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:24:05.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 13:24:05.417: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 23.684047ms)
Jan 10 13:24:05.427: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.539633ms)
Jan 10 13:24:05.434: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.29226ms)
Jan 10 13:24:05.444: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.442305ms)
Jan 10 13:24:05.452: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.053092ms)
Jan 10 13:24:05.458: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.532878ms)
Jan 10 13:24:05.495: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 37.199568ms)
Jan 10 13:24:05.504: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.010767ms)
Jan 10 13:24:05.512: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.496774ms)
Jan 10 13:24:05.519: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.333036ms)
Jan 10 13:24:05.526: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.337109ms)
Jan 10 13:24:05.533: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.914331ms)
Jan 10 13:24:05.539: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.996573ms)
Jan 10 13:24:05.548: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.256075ms)
Jan 10 13:24:05.562: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.881207ms)
Jan 10 13:24:05.570: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.435364ms)
Jan 10 13:24:05.576: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.869666ms)
Jan 10 13:24:05.583: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.31094ms)
Jan 10 13:24:05.590: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.927886ms)
Jan 10 13:24:05.596: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.637885ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:24:05.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4668" for this suite.
Jan 10 13:24:11.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:24:11.857: INFO: namespace proxy-4668 deletion completed in 6.25673284s

• [SLOW TEST:6.601 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:24:11.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3365.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3365.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3365.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3365.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 10 13:24:24.065: INFO: File wheezy_udp@dns-test-service-3.dns-3365.svc.cluster.local from pod  dns-3365/dns-test-9b210f16-38d6-4fe0-aa15-5d57ec04d0a1 contains '' instead of 'foo.example.com.'
Jan 10 13:24:24.070: INFO: File jessie_udp@dns-test-service-3.dns-3365.svc.cluster.local from pod  dns-3365/dns-test-9b210f16-38d6-4fe0-aa15-5d57ec04d0a1 contains '' instead of 'foo.example.com.'
Jan 10 13:24:24.070: INFO: Lookups using dns-3365/dns-test-9b210f16-38d6-4fe0-aa15-5d57ec04d0a1 failed for: [wheezy_udp@dns-test-service-3.dns-3365.svc.cluster.local jessie_udp@dns-test-service-3.dns-3365.svc.cluster.local]

Jan 10 13:24:29.102: INFO: DNS probes using dns-test-9b210f16-38d6-4fe0-aa15-5d57ec04d0a1 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3365.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3365.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3365.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3365.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 10 13:24:41.381: INFO: File wheezy_udp@dns-test-service-3.dns-3365.svc.cluster.local from pod  dns-3365/dns-test-1308fce4-297a-44ee-a7d9-2a6b10cba140 contains '' instead of 'bar.example.com.'
Jan 10 13:24:41.516: INFO: File jessie_udp@dns-test-service-3.dns-3365.svc.cluster.local from pod  dns-3365/dns-test-1308fce4-297a-44ee-a7d9-2a6b10cba140 contains '' instead of 'bar.example.com.'
Jan 10 13:24:41.517: INFO: Lookups using dns-3365/dns-test-1308fce4-297a-44ee-a7d9-2a6b10cba140 failed for: [wheezy_udp@dns-test-service-3.dns-3365.svc.cluster.local jessie_udp@dns-test-service-3.dns-3365.svc.cluster.local]

Jan 10 13:24:46.557: INFO: File wheezy_udp@dns-test-service-3.dns-3365.svc.cluster.local from pod  dns-3365/dns-test-1308fce4-297a-44ee-a7d9-2a6b10cba140 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 10 13:24:46.569: INFO: File jessie_udp@dns-test-service-3.dns-3365.svc.cluster.local from pod  dns-3365/dns-test-1308fce4-297a-44ee-a7d9-2a6b10cba140 contains '' instead of 'bar.example.com.'
Jan 10 13:24:46.569: INFO: Lookups using dns-3365/dns-test-1308fce4-297a-44ee-a7d9-2a6b10cba140 failed for: [wheezy_udp@dns-test-service-3.dns-3365.svc.cluster.local jessie_udp@dns-test-service-3.dns-3365.svc.cluster.local]

Jan 10 13:24:51.533: INFO: File wheezy_udp@dns-test-service-3.dns-3365.svc.cluster.local from pod  dns-3365/dns-test-1308fce4-297a-44ee-a7d9-2a6b10cba140 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 10 13:24:51.543: INFO: File jessie_udp@dns-test-service-3.dns-3365.svc.cluster.local from pod  dns-3365/dns-test-1308fce4-297a-44ee-a7d9-2a6b10cba140 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 10 13:24:51.544: INFO: Lookups using dns-3365/dns-test-1308fce4-297a-44ee-a7d9-2a6b10cba140 failed for: [wheezy_udp@dns-test-service-3.dns-3365.svc.cluster.local jessie_udp@dns-test-service-3.dns-3365.svc.cluster.local]

Jan 10 13:24:56.545: INFO: DNS probes using dns-test-1308fce4-297a-44ee-a7d9-2a6b10cba140 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3365.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3365.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3365.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3365.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 10 13:25:10.973: INFO: File wheezy_udp@dns-test-service-3.dns-3365.svc.cluster.local from pod  dns-3365/dns-test-b3473424-b5f9-4bbc-ab9c-79dd799566f9 contains '' instead of '10.107.133.4'
Jan 10 13:25:10.981: INFO: File jessie_udp@dns-test-service-3.dns-3365.svc.cluster.local from pod  dns-3365/dns-test-b3473424-b5f9-4bbc-ab9c-79dd799566f9 contains '' instead of '10.107.133.4'
Jan 10 13:25:10.981: INFO: Lookups using dns-3365/dns-test-b3473424-b5f9-4bbc-ab9c-79dd799566f9 failed for: [wheezy_udp@dns-test-service-3.dns-3365.svc.cluster.local jessie_udp@dns-test-service-3.dns-3365.svc.cluster.local]

Jan 10 13:25:16.007: INFO: DNS probes using dns-test-b3473424-b5f9-4bbc-ab9c-79dd799566f9 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:25:16.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3365" for this suite.
Jan 10 13:25:22.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:25:22.370: INFO: namespace dns-3365 deletion completed in 6.204868013s

• [SLOW TEST:70.513 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
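Editor's note: the DNS test above creates an ExternalName service, patches its target from foo.example.com to bar.example.com, and finally converts it to type=ClusterIP, polling the CNAME/A record at each step with the dig loops shown in the log. A minimal sketch of such a service, using the names from the log (this is not the exact manifest the e2e framework generates):

```yaml
# Hypothetical reconstruction of the ExternalName service the test creates.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-3365
spec:
  type: ExternalName
  externalName: foo.example.com   # later patched to bar.example.com, then to type=ClusterIP
```

A CNAME lookup for dns-test-service-3.dns-3365.svc.cluster.local should then resolve to foo.example.com., which is what the probe pods poll for; the transient "contains '' " and "contains 'foo.example.com.'" failures in the log are the expected propagation delay after each change.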
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:25:22.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 10 13:25:22.432: INFO: Waiting up to 5m0s for pod "pod-45af0e0b-13fb-4781-9362-b1ebee9f8bd4" in namespace "emptydir-2464" to be "success or failure"
Jan 10 13:25:22.438: INFO: Pod "pod-45af0e0b-13fb-4781-9362-b1ebee9f8bd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414162ms
Jan 10 13:25:24.448: INFO: Pod "pod-45af0e0b-13fb-4781-9362-b1ebee9f8bd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016268726s
Jan 10 13:25:26.457: INFO: Pod "pod-45af0e0b-13fb-4781-9362-b1ebee9f8bd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024480323s
Jan 10 13:25:28.467: INFO: Pod "pod-45af0e0b-13fb-4781-9362-b1ebee9f8bd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034920028s
Jan 10 13:25:30.490: INFO: Pod "pod-45af0e0b-13fb-4781-9362-b1ebee9f8bd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057886977s
STEP: Saw pod success
Jan 10 13:25:30.491: INFO: Pod "pod-45af0e0b-13fb-4781-9362-b1ebee9f8bd4" satisfied condition "success or failure"
Jan 10 13:25:30.504: INFO: Trying to get logs from node iruya-node pod pod-45af0e0b-13fb-4781-9362-b1ebee9f8bd4 container test-container: 
STEP: delete the pod
Jan 10 13:25:30.682: INFO: Waiting for pod pod-45af0e0b-13fb-4781-9362-b1ebee9f8bd4 to disappear
Jan 10 13:25:30.692: INFO: Pod pod-45af0e0b-13fb-4781-9362-b1ebee9f8bd4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:25:30.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2464" for this suite.
Jan 10 13:25:36.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:25:36.890: INFO: namespace emptydir-2464 deletion completed in 6.191526827s

• [SLOW TEST:14.520 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
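Editor's note: the "volume on tmpfs" test above mounts an emptyDir with medium Memory and checks the mount type and mode from inside the container. A sketch of such a pod, with illustrative names (not the test's generated manifest):

```yaml
# Sketch of a pod with a memory-backed emptyDir; medium: Memory is what
# makes the kubelet back the volume with tmpfs instead of node disk.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /data && stat -c %a /data"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir:
      medium: Memory
  restartPolicy: Never
```

With restartPolicy Never, the pod runs to completion and reaches Phase=Succeeded, which is the "success or failure" condition the log waits on.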
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:25:36.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 10 13:25:37.014: INFO: Waiting up to 5m0s for pod "pod-4dd36594-52ee-48c0-9289-48dfb6d8a07c" in namespace "emptydir-2644" to be "success or failure"
Jan 10 13:25:37.022: INFO: Pod "pod-4dd36594-52ee-48c0-9289-48dfb6d8a07c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.676958ms
Jan 10 13:25:39.037: INFO: Pod "pod-4dd36594-52ee-48c0-9289-48dfb6d8a07c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023219526s
Jan 10 13:25:41.047: INFO: Pod "pod-4dd36594-52ee-48c0-9289-48dfb6d8a07c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032830496s
Jan 10 13:25:43.056: INFO: Pod "pod-4dd36594-52ee-48c0-9289-48dfb6d8a07c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041896526s
Jan 10 13:25:45.065: INFO: Pod "pod-4dd36594-52ee-48c0-9289-48dfb6d8a07c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051531522s
STEP: Saw pod success
Jan 10 13:25:45.066: INFO: Pod "pod-4dd36594-52ee-48c0-9289-48dfb6d8a07c" satisfied condition "success or failure"
Jan 10 13:25:45.073: INFO: Trying to get logs from node iruya-node pod pod-4dd36594-52ee-48c0-9289-48dfb6d8a07c container test-container: 
STEP: delete the pod
Jan 10 13:25:45.166: INFO: Waiting for pod pod-4dd36594-52ee-48c0-9289-48dfb6d8a07c to disappear
Jan 10 13:25:45.172: INFO: Pod pod-4dd36594-52ee-48c0-9289-48dfb6d8a07c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:25:45.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2644" for this suite.
Jan 10 13:25:51.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:25:51.330: INFO: namespace emptydir-2644 deletion completed in 6.150107065s

• [SLOW TEST:14.440 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
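Editor's note: the (non-root,0777,default) case above runs the container as a non-root user and verifies a 0777 file on a default-medium emptyDir. An illustrative pod with roughly that shape (names and UID are assumptions, not taken from the log):

```yaml
# Illustrative pod for the (non-root,0777,default) case: non-root UID,
# default-medium emptyDir, file created with mode 0777.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  securityContext:
    runAsUser: 1000            # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /data/f && chmod 0777 /data/f && stat -c %a /data/f"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}               # default medium (node storage)
  restartPolicy: Never
```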
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:25:51.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 10 13:28:50.618: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:28:50.631: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 13:28:52.632: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:28:52.645: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 13:28:54.631: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:28:54.639: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 13:28:56.632: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:28:56.645: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 13:28:58.631: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:28:58.645: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 13:29:00.632: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:29:00.650: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 13:29:02.632: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:29:02.641: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 13:29:04.631: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:29:04.638: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 13:29:06.631: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:29:06.641: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 13:29:08.632: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:29:08.651: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 13:29:10.632: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:29:10.648: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 13:29:12.632: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:29:12.640: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 13:29:14.631: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:29:14.642: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 13:29:16.633: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 13:29:16.655: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:29:16.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2354" for this suite.
Jan 10 13:29:38.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:29:38.838: INFO: namespace container-lifecycle-hook-2354 deletion completed in 22.167530224s

• [SLOW TEST:227.507 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
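Editor's note: the postStart test above creates a pod whose hook fires right after the container starts (in the e2e suite the hook calls out to the helper "handle the HTTPGet hook request" pod created in BeforeEach). A minimal sketch of a pod with a postStart exec hook, with illustrative names:

```yaml
# Sketch of a postStart exec hook: runs immediately after the container
# starts; the container is not marked Running until the hook completes.
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]
```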
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:29:38.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 10 13:29:55.108: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:29:55.122: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:29:57.122: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:29:57.132: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:29:59.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:29:59.132: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:01.122: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:01.135: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:03.122: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:03.134: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:05.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:05.131: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:07.122: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:07.132: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:09.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:09.141: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:11.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:11.144: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:13.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:13.137: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:15.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:15.132: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:17.122: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:17.174: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:19.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:19.129: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:21.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:21.135: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:23.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:23.132: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:25.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:25.133: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 13:30:27.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 13:30:27.132: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:30:27.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2851" for this suite.
Jan 10 13:30:49.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:30:49.329: INFO: namespace container-lifecycle-hook-2851 deletion completed in 22.164590905s

• [SLOW TEST:70.490 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
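Editor's note: the preStop test above deletes the pod and then polls for it to disappear; the repeated "still exists" lines reflect the graceful termination window during which the hook runs. A minimal sketch of a pod with a preStop exec hook, with illustrative names:

```yaml
# Sketch of a preStop exec hook: runs before the container receives
# SIGTERM, within terminationGracePeriodSeconds.
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo stopping > /tmp/prestop"]
```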
SSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:30:49.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:30:49.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5881" for this suite.
Jan 10 13:30:55.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:30:55.659: INFO: namespace services-5881 deletion completed in 6.199489898s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.329 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
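Editor's note: the "secure master service" test above has no STEP output of its own because it only inspects the built-in `kubernetes` service in the `default` namespace and asserts it exposes the API server over HTTPS. The object it checks looks roughly like this (the cluster IP and target port are cluster-specific):

```yaml
# Approximate shape of the default kubernetes service the test inspects.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
spec:
  type: ClusterIP
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443   # cluster-specific; commonly 443 or 6443
```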
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:30:55.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 13:31:04.012: INFO: Waiting up to 5m0s for pod "client-envvars-423e0d0c-77f2-4445-b844-5404ca462cfc" in namespace "pods-7717" to be "success or failure"
Jan 10 13:31:04.033: INFO: Pod "client-envvars-423e0d0c-77f2-4445-b844-5404ca462cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 21.052366ms
Jan 10 13:31:06.045: INFO: Pod "client-envvars-423e0d0c-77f2-4445-b844-5404ca462cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032885424s
Jan 10 13:31:08.056: INFO: Pod "client-envvars-423e0d0c-77f2-4445-b844-5404ca462cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04362312s
Jan 10 13:31:10.063: INFO: Pod "client-envvars-423e0d0c-77f2-4445-b844-5404ca462cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051247636s
Jan 10 13:31:12.074: INFO: Pod "client-envvars-423e0d0c-77f2-4445-b844-5404ca462cfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061850412s
STEP: Saw pod success
Jan 10 13:31:12.074: INFO: Pod "client-envvars-423e0d0c-77f2-4445-b844-5404ca462cfc" satisfied condition "success or failure"
Jan 10 13:31:12.077: INFO: Trying to get logs from node iruya-node pod client-envvars-423e0d0c-77f2-4445-b844-5404ca462cfc container env3cont: 
STEP: delete the pod
Jan 10 13:31:12.183: INFO: Waiting for pod client-envvars-423e0d0c-77f2-4445-b844-5404ca462cfc to disappear
Jan 10 13:31:12.242: INFO: Pod client-envvars-423e0d0c-77f2-4445-b844-5404ca462cfc no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:31:12.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7717" for this suite.
Jan 10 13:31:58.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:31:58.382: INFO: namespace pods-7717 deletion completed in 46.13287068s

• [SLOW TEST:62.723 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
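Editor's note: the Pods test above creates a server pod plus a service, then starts a client pod and checks its environment for the docker-link-style variables derived from the service name. A hypothetical service illustrating the naming rule (uppercase the service name, replace dashes with underscores):

```yaml
# Hypothetical service; pods created after it in the same namespace get
# environment variables such as:
#   FOOSERVICE_SERVICE_HOST=<cluster IP>
#   FOOSERVICE_SERVICE_PORT=8765
apiVersion: v1
kind: Service
metadata:
  name: fooservice
spec:
  selector:
    app: server
  ports:
  - port: 8765
    targetPort: 8080
```

Note the variables are only injected into pods created after the service exists, which is why the test starts the client pod second.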
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:31:58.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 10 13:32:07.106: INFO: Successfully updated pod "annotationupdate5bbe27ff-503d-4bc3-b341-23e43717106d"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:32:09.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3464" for this suite.
Jan 10 13:32:31.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:32:31.391: INFO: namespace projected-3464 deletion completed in 22.193312787s

• [SLOW TEST:33.008 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:32:31.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3953
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 10 13:32:31.458: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 10 13:33:05.752: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-3953 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 13:33:05.752: INFO: >>> kubeConfig: /root/.kube/config
I0110 13:33:05.881378       8 log.go:172] (0xc000625ad0) (0xc0010edcc0) Create stream
I0110 13:33:05.881776       8 log.go:172] (0xc000625ad0) (0xc0010edcc0) Stream added, broadcasting: 1
I0110 13:33:05.894526       8 log.go:172] (0xc000625ad0) Reply frame received for 1
I0110 13:33:05.894688       8 log.go:172] (0xc000625ad0) (0xc000a00c80) Create stream
I0110 13:33:05.894714       8 log.go:172] (0xc000625ad0) (0xc000a00c80) Stream added, broadcasting: 3
I0110 13:33:05.897116       8 log.go:172] (0xc000625ad0) Reply frame received for 3
I0110 13:33:05.897199       8 log.go:172] (0xc000625ad0) (0xc000a6a0a0) Create stream
I0110 13:33:05.897231       8 log.go:172] (0xc000625ad0) (0xc000a6a0a0) Stream added, broadcasting: 5
I0110 13:33:05.899894       8 log.go:172] (0xc000625ad0) Reply frame received for 5
I0110 13:33:06.066232       8 log.go:172] (0xc000625ad0) Data frame received for 3
I0110 13:33:06.066307       8 log.go:172] (0xc000a00c80) (3) Data frame handling
I0110 13:33:06.066330       8 log.go:172] (0xc000a00c80) (3) Data frame sent
I0110 13:33:06.190509       8 log.go:172] (0xc000625ad0) (0xc000a00c80) Stream removed, broadcasting: 3
I0110 13:33:06.190654       8 log.go:172] (0xc000625ad0) Data frame received for 1
I0110 13:33:06.190675       8 log.go:172] (0xc0010edcc0) (1) Data frame handling
I0110 13:33:06.190690       8 log.go:172] (0xc0010edcc0) (1) Data frame sent
I0110 13:33:06.190724       8 log.go:172] (0xc000625ad0) (0xc0010edcc0) Stream removed, broadcasting: 1
I0110 13:33:06.190882       8 log.go:172] (0xc000625ad0) (0xc000a6a0a0) Stream removed, broadcasting: 5
I0110 13:33:06.190903       8 log.go:172] (0xc000625ad0) Go away received
I0110 13:33:06.191265       8 log.go:172] (0xc000625ad0) (0xc0010edcc0) Stream removed, broadcasting: 1
I0110 13:33:06.191283       8 log.go:172] (0xc000625ad0) (0xc000a00c80) Stream removed, broadcasting: 3
I0110 13:33:06.191295       8 log.go:172] (0xc000625ad0) (0xc000a6a0a0) Stream removed, broadcasting: 5
Jan 10 13:33:06.191: INFO: Waiting for endpoints: map[]
Jan 10 13:33:06.199: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-3953 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 13:33:06.200: INFO: >>> kubeConfig: /root/.kube/config
I0110 13:33:06.252608       8 log.go:172] (0xc00075b340) (0xc000a6a5a0) Create stream
I0110 13:33:06.252658       8 log.go:172] (0xc00075b340) (0xc000a6a5a0) Stream added, broadcasting: 1
I0110 13:33:06.257297       8 log.go:172] (0xc00075b340) Reply frame received for 1
I0110 13:33:06.257334       8 log.go:172] (0xc00075b340) (0xc00149c0a0) Create stream
I0110 13:33:06.257344       8 log.go:172] (0xc00075b340) (0xc00149c0a0) Stream added, broadcasting: 3
I0110 13:33:06.260674       8 log.go:172] (0xc00075b340) Reply frame received for 3
I0110 13:33:06.260920       8 log.go:172] (0xc00075b340) (0xc000a00f00) Create stream
I0110 13:33:06.260960       8 log.go:172] (0xc00075b340) (0xc000a00f00) Stream added, broadcasting: 5
I0110 13:33:06.263938       8 log.go:172] (0xc00075b340) Reply frame received for 5
I0110 13:33:06.408107       8 log.go:172] (0xc00075b340) Data frame received for 3
I0110 13:33:06.408169       8 log.go:172] (0xc00149c0a0) (3) Data frame handling
I0110 13:33:06.408185       8 log.go:172] (0xc00149c0a0) (3) Data frame sent
I0110 13:33:06.589739       8 log.go:172] (0xc00075b340) Data frame received for 1
I0110 13:33:06.589869       8 log.go:172] (0xc00075b340) (0xc00149c0a0) Stream removed, broadcasting: 3
I0110 13:33:06.590026       8 log.go:172] (0xc000a6a5a0) (1) Data frame handling
I0110 13:33:06.590109       8 log.go:172] (0xc00075b340) (0xc000a00f00) Stream removed, broadcasting: 5
I0110 13:33:06.590143       8 log.go:172] (0xc000a6a5a0) (1) Data frame sent
I0110 13:33:06.590181       8 log.go:172] (0xc00075b340) (0xc000a6a5a0) Stream removed, broadcasting: 1
I0110 13:33:06.590200       8 log.go:172] (0xc00075b340) Go away received
I0110 13:33:06.590603       8 log.go:172] (0xc00075b340) (0xc000a6a5a0) Stream removed, broadcasting: 1
I0110 13:33:06.590630       8 log.go:172] (0xc00075b340) (0xc00149c0a0) Stream removed, broadcasting: 3
I0110 13:33:06.590647       8 log.go:172] (0xc00075b340) (0xc000a00f00) Stream removed, broadcasting: 5
Jan 10 13:33:06.591: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:33:06.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3953" for this suite.
Jan 10 13:33:18.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:33:18.740: INFO: namespace pod-network-test-3953 deletion completed in 12.135387855s

• [SLOW TEST:47.349 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
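(Editor's note: the Granular Checks spec above verifies intra-pod HTTP connectivity by exec'ing `curl` in a probe pod against the netserver's `/dial` endpoint. A minimal sketch of how that dial URL is assembled, with placeholder pod IPs standing in for the addresses the e2e framework discovers at runtime:)

```shell
#!/bin/sh
# Sketch of the intra-pod HTTP check seen in the log above.
# PROBE_IP and TARGET_IP are placeholders (assumptions); the real test
# reads them from the host-test-container-pod and netserver pod status.
PROBE_IP="10.44.0.2"   # pod running the curl probe (assumption)
TARGET_IP="10.44.0.1"  # netserver pod under test (assumption)
PORT=8080              # standard netexec port used by the e2e images
URL="http://${PROBE_IP}:${PORT}/dial?request=hostName&protocol=http&host=${TARGET_IP}&port=${PORT}&tries=1"
echo "$URL"
# The e2e framework then runs, inside the probe pod:
#   /bin/sh -c curl -g -q -s "$URL"
# and expects a JSON response naming the target pod's hostname.
```

The `-g` flag disables curl's URL globbing so the query string is passed through literally, matching the ExecWithOptions command recorded in the log.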
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:33:18.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 10 13:33:27.452: INFO: Successfully updated pod "labelsupdate89a80f15-4735-4763-a96e-5ce6600a727b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:33:29.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2834" for this suite.
Jan 10 13:33:53.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:33:53.909: INFO: namespace projected-2834 deletion completed in 24.348272736s

• [SLOW TEST:35.166 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:33:53.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 10 13:33:54.041: INFO: Waiting up to 5m0s for pod "pod-27af7579-c389-41b9-86bb-8a5bf4831f7d" in namespace "emptydir-254" to be "success or failure"
Jan 10 13:33:54.051: INFO: Pod "pod-27af7579-c389-41b9-86bb-8a5bf4831f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.255422ms
Jan 10 13:33:56.060: INFO: Pod "pod-27af7579-c389-41b9-86bb-8a5bf4831f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018757739s
Jan 10 13:33:58.084: INFO: Pod "pod-27af7579-c389-41b9-86bb-8a5bf4831f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042800483s
Jan 10 13:34:00.168: INFO: Pod "pod-27af7579-c389-41b9-86bb-8a5bf4831f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126435332s
Jan 10 13:34:02.176: INFO: Pod "pod-27af7579-c389-41b9-86bb-8a5bf4831f7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134862757s
STEP: Saw pod success
Jan 10 13:34:02.176: INFO: Pod "pod-27af7579-c389-41b9-86bb-8a5bf4831f7d" satisfied condition "success or failure"
Jan 10 13:34:02.179: INFO: Trying to get logs from node iruya-node pod pod-27af7579-c389-41b9-86bb-8a5bf4831f7d container test-container: 
STEP: delete the pod
Jan 10 13:34:02.234: INFO: Waiting for pod pod-27af7579-c389-41b9-86bb-8a5bf4831f7d to disappear
Jan 10 13:34:02.243: INFO: Pod pod-27af7579-c389-41b9-86bb-8a5bf4831f7d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:34:02.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-254" for this suite.
Jan 10 13:34:08.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:34:08.429: INFO: namespace emptydir-254 deletion completed in 6.181266525s

• [SLOW TEST:14.519 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:34:08.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-794edcb1-6a40-4364-8a41-49f701731e41
STEP: Creating a pod to test consume secrets
Jan 10 13:34:08.568: INFO: Waiting up to 5m0s for pod "pod-secrets-57c2f03f-c1a8-42d3-be81-a3104b45bf97" in namespace "secrets-9285" to be "success or failure"
Jan 10 13:34:08.586: INFO: Pod "pod-secrets-57c2f03f-c1a8-42d3-be81-a3104b45bf97": Phase="Pending", Reason="", readiness=false. Elapsed: 16.79369ms
Jan 10 13:34:10.593: INFO: Pod "pod-secrets-57c2f03f-c1a8-42d3-be81-a3104b45bf97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023874068s
Jan 10 13:34:12.614: INFO: Pod "pod-secrets-57c2f03f-c1a8-42d3-be81-a3104b45bf97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044952018s
Jan 10 13:34:14.624: INFO: Pod "pod-secrets-57c2f03f-c1a8-42d3-be81-a3104b45bf97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055560571s
Jan 10 13:34:16.631: INFO: Pod "pod-secrets-57c2f03f-c1a8-42d3-be81-a3104b45bf97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062352627s
STEP: Saw pod success
Jan 10 13:34:16.631: INFO: Pod "pod-secrets-57c2f03f-c1a8-42d3-be81-a3104b45bf97" satisfied condition "success or failure"
Jan 10 13:34:16.635: INFO: Trying to get logs from node iruya-node pod pod-secrets-57c2f03f-c1a8-42d3-be81-a3104b45bf97 container secret-volume-test: 
STEP: delete the pod
Jan 10 13:34:16.694: INFO: Waiting for pod pod-secrets-57c2f03f-c1a8-42d3-be81-a3104b45bf97 to disappear
Jan 10 13:34:16.698: INFO: Pod pod-secrets-57c2f03f-c1a8-42d3-be81-a3104b45bf97 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:34:16.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9285" for this suite.
Jan 10 13:34:22.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:34:22.892: INFO: namespace secrets-9285 deletion completed in 6.155993737s

• [SLOW TEST:14.462 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:34:22.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-c07c5045-4c16-4ac2-8fdb-14a15adb0417
STEP: Creating a pod to test consume secrets
Jan 10 13:34:23.019: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee41d669-5b1d-4829-a01d-102929adb25a" in namespace "projected-1965" to be "success or failure"
Jan 10 13:34:23.044: INFO: Pod "pod-projected-secrets-ee41d669-5b1d-4829-a01d-102929adb25a": Phase="Pending", Reason="", readiness=false. Elapsed: 25.016834ms
Jan 10 13:34:25.055: INFO: Pod "pod-projected-secrets-ee41d669-5b1d-4829-a01d-102929adb25a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036175418s
Jan 10 13:34:27.067: INFO: Pod "pod-projected-secrets-ee41d669-5b1d-4829-a01d-102929adb25a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048130748s
Jan 10 13:34:29.081: INFO: Pod "pod-projected-secrets-ee41d669-5b1d-4829-a01d-102929adb25a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061993958s
Jan 10 13:34:31.100: INFO: Pod "pod-projected-secrets-ee41d669-5b1d-4829-a01d-102929adb25a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081164871s
STEP: Saw pod success
Jan 10 13:34:31.101: INFO: Pod "pod-projected-secrets-ee41d669-5b1d-4829-a01d-102929adb25a" satisfied condition "success or failure"
Jan 10 13:34:31.107: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ee41d669-5b1d-4829-a01d-102929adb25a container secret-volume-test: 
STEP: delete the pod
Jan 10 13:34:31.198: INFO: Waiting for pod pod-projected-secrets-ee41d669-5b1d-4829-a01d-102929adb25a to disappear
Jan 10 13:34:31.213: INFO: Pod pod-projected-secrets-ee41d669-5b1d-4829-a01d-102929adb25a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:34:31.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1965" for this suite.
Jan 10 13:34:37.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:34:37.396: INFO: namespace projected-1965 deletion completed in 6.177490153s

• [SLOW TEST:14.503 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:34:37.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 13:34:37.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59c6d494-e9d0-4b13-8138-1bffa9e43b95" in namespace "downward-api-4195" to be "success or failure"
Jan 10 13:34:37.542: INFO: Pod "downwardapi-volume-59c6d494-e9d0-4b13-8138-1bffa9e43b95": Phase="Pending", Reason="", readiness=false. Elapsed: 25.731884ms
Jan 10 13:34:39.556: INFO: Pod "downwardapi-volume-59c6d494-e9d0-4b13-8138-1bffa9e43b95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039349666s
Jan 10 13:34:41.566: INFO: Pod "downwardapi-volume-59c6d494-e9d0-4b13-8138-1bffa9e43b95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04926541s
Jan 10 13:34:43.573: INFO: Pod "downwardapi-volume-59c6d494-e9d0-4b13-8138-1bffa9e43b95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05683449s
Jan 10 13:34:45.584: INFO: Pod "downwardapi-volume-59c6d494-e9d0-4b13-8138-1bffa9e43b95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068044787s
STEP: Saw pod success
Jan 10 13:34:45.585: INFO: Pod "downwardapi-volume-59c6d494-e9d0-4b13-8138-1bffa9e43b95" satisfied condition "success or failure"
Jan 10 13:34:45.593: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-59c6d494-e9d0-4b13-8138-1bffa9e43b95 container client-container: 
STEP: delete the pod
Jan 10 13:34:45.808: INFO: Waiting for pod downwardapi-volume-59c6d494-e9d0-4b13-8138-1bffa9e43b95 to disappear
Jan 10 13:34:45.819: INFO: Pod downwardapi-volume-59c6d494-e9d0-4b13-8138-1bffa9e43b95 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:34:45.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4195" for this suite.
Jan 10 13:34:51.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:34:52.110: INFO: namespace downward-api-4195 deletion completed in 6.279679166s

• [SLOW TEST:14.713 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:34:52.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 13:34:52.349: INFO: Creating deployment "test-recreate-deployment"
Jan 10 13:34:52.374: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 10 13:34:52.422: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 10 13:34:54.440: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 10 13:34:54.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260092, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260092, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260092, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260092, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 13:34:56.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260092, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260092, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260092, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260092, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 13:34:58.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260092, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260092, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260092, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260092, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 13:35:00.711: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 10 13:35:00.723: INFO: Updating deployment test-recreate-deployment
Jan 10 13:35:00.723: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 10 13:35:01.023: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2282,SelfLink:/apis/apps/v1/namespaces/deployment-2282/deployments/test-recreate-deployment,UID:deaa6831-47a6-4467-9b56-c3300b937c89,ResourceVersion:20027148,Generation:2,CreationTimestamp:2020-01-10 13:34:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-10 13:35:00 +0000 UTC 2020-01-10 13:35:00 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-10 13:35:00 +0000 UTC 2020-01-10 13:34:52 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 10 13:35:01.029: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2282,SelfLink:/apis/apps/v1/namespaces/deployment-2282/replicasets/test-recreate-deployment-5c8c9cc69d,UID:abb85eda-381a-4c96-9d80-bb71fe76559e,ResourceVersion:20027146,Generation:1,CreationTimestamp:2020-01-10 13:35:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment deaa6831-47a6-4467-9b56-c3300b937c89 0xc002396347 0xc002396348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 10 13:35:01.029: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 10 13:35:01.030: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2282,SelfLink:/apis/apps/v1/namespaces/deployment-2282/replicasets/test-recreate-deployment-6df85df6b9,UID:7f1c2227-c48b-41bd-8ebf-a781fe043680,ResourceVersion:20027136,Generation:2,CreationTimestamp:2020-01-10 13:34:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment deaa6831-47a6-4467-9b56-c3300b937c89 0xc002396417 0xc002396418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 10 13:35:01.034: INFO: Pod "test-recreate-deployment-5c8c9cc69d-mg2gx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-mg2gx,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2282,SelfLink:/api/v1/namespaces/deployment-2282/pods/test-recreate-deployment-5c8c9cc69d-mg2gx,UID:22da4d88-b7c5-4184-94f9-fadba287fab4,ResourceVersion:20027149,Generation:0,CreationTimestamp:2020-01-10 13:35:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d abb85eda-381a-4c96-9d80-bb71fe76559e 0xc001e85d27 0xc001e85d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rl88n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rl88n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rl88n true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e85da0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e85dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:35:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:35:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:35:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:35:00 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-10 13:35:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:35:01.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2282" for this suite.
Jan 10 13:35:09.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:35:09.166: INFO: namespace deployment-2282 deletion completed in 8.126378756s

• [SLOW TEST:17.054 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:35:09.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 13:35:39.508: INFO: Container started at 2020-01-10 13:35:15 +0000 UTC, pod became ready at 2020-01-10 13:35:38 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:35:39.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1064" for this suite.
Jan 10 13:36:01.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:36:01.695: INFO: namespace container-probe-1064 deletion completed in 22.17952347s

• [SLOW TEST:52.529 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:36:01.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-bb4bf92c-5bb6-41c4-88f7-a41fa8fa1c0f
STEP: Creating a pod to test consume secrets
Jan 10 13:36:01.992: INFO: Waiting up to 5m0s for pod "pod-secrets-04035fc9-0637-4ec2-8c8b-0851622c6bfc" in namespace "secrets-8841" to be "success or failure"
Jan 10 13:36:02.018: INFO: Pod "pod-secrets-04035fc9-0637-4ec2-8c8b-0851622c6bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 24.756766ms
Jan 10 13:36:04.086: INFO: Pod "pod-secrets-04035fc9-0637-4ec2-8c8b-0851622c6bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092898659s
Jan 10 13:36:06.094: INFO: Pod "pod-secrets-04035fc9-0637-4ec2-8c8b-0851622c6bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100862105s
Jan 10 13:36:08.106: INFO: Pod "pod-secrets-04035fc9-0637-4ec2-8c8b-0851622c6bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112807392s
Jan 10 13:36:10.114: INFO: Pod "pod-secrets-04035fc9-0637-4ec2-8c8b-0851622c6bfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121186365s
STEP: Saw pod success
Jan 10 13:36:10.114: INFO: Pod "pod-secrets-04035fc9-0637-4ec2-8c8b-0851622c6bfc" satisfied condition "success or failure"
Jan 10 13:36:10.119: INFO: Trying to get logs from node iruya-node pod pod-secrets-04035fc9-0637-4ec2-8c8b-0851622c6bfc container secret-volume-test: 
STEP: delete the pod
Jan 10 13:36:10.220: INFO: Waiting for pod pod-secrets-04035fc9-0637-4ec2-8c8b-0851622c6bfc to disappear
Jan 10 13:36:10.227: INFO: Pod pod-secrets-04035fc9-0637-4ec2-8c8b-0851622c6bfc no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:36:10.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8841" for this suite.
Jan 10 13:36:16.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:36:16.415: INFO: namespace secrets-8841 deletion completed in 6.167994985s
STEP: Destroying namespace "secret-namespace-8123" for this suite.
Jan 10 13:36:22.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:36:22.568: INFO: namespace secret-namespace-8123 deletion completed in 6.152764713s

• [SLOW TEST:20.873 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:36:22.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 10 13:36:31.324: INFO: Successfully updated pod "pod-update-d7802d0b-df9d-439d-a855-2021fffd494e"
STEP: verifying the updated pod is in kubernetes
Jan 10 13:36:31.352: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:36:31.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6123" for this suite.
Jan 10 13:36:53.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:36:53.573: INFO: namespace pods-6123 deletion completed in 22.211326016s

• [SLOW TEST:31.003 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:36:53.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 13:36:53.694: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f433c1fc-79e1-4f6a-84f4-8b4d609b89ab" in namespace "downward-api-2432" to be "success or failure"
Jan 10 13:36:53.710: INFO: Pod "downwardapi-volume-f433c1fc-79e1-4f6a-84f4-8b4d609b89ab": Phase="Pending", Reason="", readiness=false. Elapsed: 15.338902ms
Jan 10 13:36:55.720: INFO: Pod "downwardapi-volume-f433c1fc-79e1-4f6a-84f4-8b4d609b89ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02536651s
Jan 10 13:36:57.727: INFO: Pod "downwardapi-volume-f433c1fc-79e1-4f6a-84f4-8b4d609b89ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032359235s
Jan 10 13:36:59.737: INFO: Pod "downwardapi-volume-f433c1fc-79e1-4f6a-84f4-8b4d609b89ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041878581s
Jan 10 13:37:01.745: INFO: Pod "downwardapi-volume-f433c1fc-79e1-4f6a-84f4-8b4d609b89ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050029067s
STEP: Saw pod success
Jan 10 13:37:01.745: INFO: Pod "downwardapi-volume-f433c1fc-79e1-4f6a-84f4-8b4d609b89ab" satisfied condition "success or failure"
Jan 10 13:37:01.751: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f433c1fc-79e1-4f6a-84f4-8b4d609b89ab container client-container: 
STEP: delete the pod
Jan 10 13:37:01.932: INFO: Waiting for pod downwardapi-volume-f433c1fc-79e1-4f6a-84f4-8b4d609b89ab to disappear
Jan 10 13:37:01.959: INFO: Pod downwardapi-volume-f433c1fc-79e1-4f6a-84f4-8b4d609b89ab no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:37:01.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2432" for this suite.
Jan 10 13:37:07.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:37:08.634: INFO: namespace downward-api-2432 deletion completed in 6.666845776s

• [SLOW TEST:15.061 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:37:08.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 10 13:37:15.074: INFO: 0 pods remaining
Jan 10 13:37:15.074: INFO: 0 pods has nil DeletionTimestamp
Jan 10 13:37:15.075: INFO: 
STEP: Gathering metrics
W0110 13:37:15.812370       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 10 13:37:15.812: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:37:15.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1454" for this suite.
Jan 10 13:37:27.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:37:27.989: INFO: namespace gc-1454 deletion completed in 12.171123422s

• [SLOW TEST:19.355 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:37:27.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-hslnl in namespace proxy-6519
I0110 13:37:28.172040       8 runners.go:180] Created replication controller with name: proxy-service-hslnl, namespace: proxy-6519, replica count: 1
I0110 13:37:29.224838       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 13:37:30.225281       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 13:37:31.226177       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 13:37:32.227108       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 13:37:33.227953       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 13:37:34.228664       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 13:37:35.229677       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 13:37:36.230120       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 13:37:37.230523       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 13:37:38.230984       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 13:37:39.231442       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 13:37:40.231912       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 13:37:41.232365       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 13:37:42.233079       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 13:37:43.233588       8 runners.go:180] proxy-service-hslnl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 10 13:37:43.241: INFO: setup took 15.124689476s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 10 13:37:43.292: INFO: (0) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 49.345641ms)
Jan 10 13:37:43.292: INFO: (0) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 49.155452ms)
Jan 10 13:37:43.292: INFO: (0) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 50.205213ms)
Jan 10 13:37:43.292: INFO: (0) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 50.391352ms)
Jan 10 13:37:43.292: INFO: (0) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 50.157338ms)
Jan 10 13:37:43.293: INFO: (0) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 50.688212ms)
Jan 10 13:37:43.293: INFO: (0) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 50.344799ms)
Jan 10 13:37:43.293: INFO: (0) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz/proxy/: test (200; 50.957187ms)
Jan 10 13:37:43.293: INFO: (0) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 51.009535ms)
Jan 10 13:37:43.297: INFO: (0) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 54.512075ms)
Jan 10 13:37:43.297: INFO: (0) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 54.614366ms)
Jan 10 13:37:43.324: INFO: (0) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 81.191401ms)
Jan 10 13:37:43.324: INFO: (0) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 81.485596ms)
Jan 10 13:37:43.324: INFO: (0) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 81.63471ms)
Jan 10 13:37:43.326: INFO: (0) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 84.160088ms)
Jan 10 13:37:43.325: INFO: (0) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: test (200; 18.701169ms)
Jan 10 13:37:43.344: INFO: (1) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 17.987297ms)
Jan 10 13:37:43.345: INFO: (1) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 18.10697ms)
Jan 10 13:37:43.346: INFO: (1) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 19.614181ms)
Jan 10 13:37:43.346: INFO: (1) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 19.778577ms)
Jan 10 13:37:43.347: INFO: (1) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 21.042836ms)
Jan 10 13:37:43.347: INFO: (1) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 21.395332ms)
Jan 10 13:37:43.349: INFO: (1) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 22.571993ms)
Jan 10 13:37:43.349: INFO: (1) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 23.02974ms)
Jan 10 13:37:43.350: INFO: (1) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 23.77139ms)
Jan 10 13:37:43.350: INFO: (1) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 23.464689ms)
Jan 10 13:37:43.350: INFO: (1) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 23.977249ms)
Jan 10 13:37:43.351: INFO: (1) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 25.334333ms)
Jan 10 13:37:43.360: INFO: (2) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 8.602699ms)
Jan 10 13:37:43.360: INFO: (2) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 8.773976ms)
Jan 10 13:37:43.364: INFO: (2) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 12.164735ms)
Jan 10 13:37:43.364: INFO: (2) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 12.631777ms)
Jan 10 13:37:43.366: INFO: (2) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 14.510436ms)
Jan 10 13:37:43.366: INFO: (2) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 14.579797ms)
Jan 10 13:37:43.367: INFO: (2) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 14.866889ms)
Jan 10 13:37:43.367: INFO: (2) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: test (200; 14.702141ms)
Jan 10 13:37:43.370: INFO: (2) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 18.039864ms)
Jan 10 13:37:43.370: INFO: (2) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 18.675479ms)
Jan 10 13:37:43.371: INFO: (2) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 18.973179ms)
Jan 10 13:37:43.371: INFO: (2) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 18.879639ms)
Jan 10 13:37:43.371: INFO: (2) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 19.20223ms)
Jan 10 13:37:43.371: INFO: (2) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 18.865915ms)
Jan 10 13:37:43.371: INFO: (2) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 19.62952ms)
Jan 10 13:37:43.378: INFO: (3) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: ... (200; 11.004965ms)
Jan 10 13:37:43.382: INFO: (3) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 11.032153ms)
Jan 10 13:37:43.383: INFO: (3) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz/proxy/: test (200; 11.56898ms)
Jan 10 13:37:43.383: INFO: (3) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 11.430879ms)
Jan 10 13:37:43.383: INFO: (3) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 11.743205ms)
Jan 10 13:37:43.384: INFO: (3) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 12.615057ms)
Jan 10 13:37:43.384: INFO: (3) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 12.811551ms)
Jan 10 13:37:43.384: INFO: (3) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 13.083773ms)
Jan 10 13:37:43.384: INFO: (3) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 12.918407ms)
Jan 10 13:37:43.384: INFO: (3) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 12.935074ms)
Jan 10 13:37:43.385: INFO: (3) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 13.560192ms)
Jan 10 13:37:43.393: INFO: (4) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 8.456547ms)
Jan 10 13:37:43.393: INFO: (4) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz/proxy/: test (200; 8.216092ms)
Jan 10 13:37:43.395: INFO: (4) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: test<... (200; 15.161986ms)
Jan 10 13:37:43.402: INFO: (4) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 16.672023ms)
Jan 10 13:37:43.402: INFO: (4) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 16.853231ms)
Jan 10 13:37:43.403: INFO: (4) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 17.664964ms)
Jan 10 13:37:43.403: INFO: (4) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 17.551849ms)
Jan 10 13:37:43.403: INFO: (4) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 18.25844ms)
Jan 10 13:37:43.403: INFO: (4) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 18.449234ms)
Jan 10 13:37:43.404: INFO: (4) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 18.710301ms)
Jan 10 13:37:43.408: INFO: (4) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 22.779829ms)
Jan 10 13:37:43.409: INFO: (4) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 23.899876ms)
Jan 10 13:37:43.421: INFO: (5) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 11.793062ms)
Jan 10 13:37:43.421: INFO: (5) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: ... (200; 14.921526ms)
Jan 10 13:37:43.424: INFO: (5) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 14.748109ms)
Jan 10 13:37:43.424: INFO: (5) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 14.648916ms)
Jan 10 13:37:43.424: INFO: (5) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 14.906376ms)
Jan 10 13:37:43.427: INFO: (5) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 17.48169ms)
Jan 10 13:37:43.427: INFO: (5) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 18.302101ms)
Jan 10 13:37:43.428: INFO: (5) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 18.713203ms)
Jan 10 13:37:43.428: INFO: (5) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 18.579653ms)
Jan 10 13:37:43.428: INFO: (5) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz/proxy/: test (200; 19.013719ms)
Jan 10 13:37:43.429: INFO: (5) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 19.690547ms)
Jan 10 13:37:43.430: INFO: (5) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 20.28706ms)
Jan 10 13:37:43.439: INFO: (6) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 8.697862ms)
Jan 10 13:37:43.439: INFO: (6) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 8.813629ms)
Jan 10 13:37:43.439: INFO: (6) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 9.09135ms)
Jan 10 13:37:43.439: INFO: (6) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 9.11351ms)
Jan 10 13:37:43.439: INFO: (6) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz/proxy/: test (200; 9.247794ms)
Jan 10 13:37:43.439: INFO: (6) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 9.437068ms)
Jan 10 13:37:43.439: INFO: (6) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 9.876645ms)
Jan 10 13:37:43.440: INFO: (6) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: ... (200; 10.192534ms)
Jan 10 13:37:43.443: INFO: (6) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 13.398507ms)
Jan 10 13:37:43.447: INFO: (6) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 17.247238ms)
Jan 10 13:37:43.447: INFO: (6) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 17.590807ms)
Jan 10 13:37:43.448: INFO: (6) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 18.275384ms)
Jan 10 13:37:43.449: INFO: (6) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 19.030329ms)
Jan 10 13:37:43.449: INFO: (6) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 18.918689ms)
Jan 10 13:37:43.449: INFO: (6) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 19.184751ms)
Jan 10 13:37:43.464: INFO: (7) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 14.767478ms)
Jan 10 13:37:43.464: INFO: (7) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 14.969512ms)
Jan 10 13:37:43.464: INFO: (7) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 14.83957ms)
Jan 10 13:37:43.464: INFO: (7) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz/proxy/: test (200; 15.126554ms)
Jan 10 13:37:43.464: INFO: (7) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 15.158947ms)
Jan 10 13:37:43.464: INFO: (7) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 14.994775ms)
Jan 10 13:37:43.464: INFO: (7) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 15.042544ms)
Jan 10 13:37:43.464: INFO: (7) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 15.313199ms)
Jan 10 13:37:43.466: INFO: (7) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 16.939784ms)
Jan 10 13:37:43.466: INFO: (7) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 17.011164ms)
Jan 10 13:37:43.467: INFO: (7) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: test<... (200; 11.286832ms)
Jan 10 13:37:43.485: INFO: (8) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 12.354557ms)
Jan 10 13:37:43.485: INFO: (8) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 12.492623ms)
Jan 10 13:37:43.485: INFO: (8) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 12.924229ms)
Jan 10 13:37:43.485: INFO: (8) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 12.656071ms)
Jan 10 13:37:43.485: INFO: (8) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz/proxy/: test (200; 12.748189ms)
Jan 10 13:37:43.485: INFO: (8) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 12.605222ms)
Jan 10 13:37:43.485: INFO: (8) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 12.607165ms)
Jan 10 13:37:43.487: INFO: (8) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 14.140635ms)
Jan 10 13:37:43.488: INFO: (8) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 15.4944ms)
Jan 10 13:37:43.505: INFO: (8) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 32.789332ms)
Jan 10 13:37:43.506: INFO: (8) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 33.017937ms)
Jan 10 13:37:43.507: INFO: (8) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 34.287782ms)
Jan 10 13:37:43.507: INFO: (8) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 34.644599ms)
Jan 10 13:37:43.507: INFO: (8) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 34.846031ms)
Jan 10 13:37:43.538: INFO: (9) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 30.458135ms)
Jan 10 13:37:43.538: INFO: (9) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 30.341549ms)
Jan 10 13:37:43.538: INFO: (9) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 30.452598ms)
Jan 10 13:37:43.538: INFO: (9) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 30.316182ms)
Jan 10 13:37:43.538: INFO: (9) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: ... (200; 30.803323ms)
Jan 10 13:37:43.539: INFO: (9) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 30.954992ms)
Jan 10 13:37:43.539: INFO: (9) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 31.322149ms)
Jan 10 13:37:43.539: INFO: (9) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 31.563577ms)
Jan 10 13:37:43.539: INFO: (9) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz/proxy/: test (200; 31.481353ms)
Jan 10 13:37:43.539: INFO: (9) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 31.172126ms)
Jan 10 13:37:43.539: INFO: (9) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 31.898334ms)
Jan 10 13:37:43.540: INFO: (9) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 32.855602ms)
Jan 10 13:37:43.540: INFO: (9) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 32.979939ms)
Jan 10 13:37:43.541: INFO: (9) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 33.244235ms)
Jan 10 13:37:43.563: INFO: (10) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 21.875234ms)
Jan 10 13:37:43.563: INFO: (10) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 22.268715ms)
Jan 10 13:37:43.564: INFO: (10) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: test (200; 23.315736ms)
Jan 10 13:37:43.565: INFO: (10) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 23.517439ms)
Jan 10 13:37:43.565: INFO: (10) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 24.36112ms)
Jan 10 13:37:43.565: INFO: (10) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 24.509111ms)
Jan 10 13:37:43.566: INFO: (10) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 24.593015ms)
Jan 10 13:37:43.566: INFO: (10) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 24.819755ms)
Jan 10 13:37:43.566: INFO: (10) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 25.277916ms)
Jan 10 13:37:43.566: INFO: (10) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 25.351602ms)
Jan 10 13:37:43.567: INFO: (10) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 25.744661ms)
Jan 10 13:37:43.567: INFO: (10) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 25.805377ms)
Jan 10 13:37:43.567: INFO: (10) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 25.695029ms)
Jan 10 13:37:43.576: INFO: (11) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: test<... (200; 8.350423ms)
Jan 10 13:37:43.576: INFO: (11) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 8.730456ms)
Jan 10 13:37:43.576: INFO: (11) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 9.06449ms)
Jan 10 13:37:43.580: INFO: (11) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 12.628289ms)
Jan 10 13:37:43.580: INFO: (11) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 12.910519ms)
Jan 10 13:37:43.580: INFO: (11) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 13.148155ms)
Jan 10 13:37:43.580: INFO: (11) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz/proxy/: test (200; 13.174468ms)
Jan 10 13:37:43.581: INFO: (11) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 13.522233ms)
Jan 10 13:37:43.581: INFO: (11) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 14.270171ms)
Jan 10 13:37:43.584: INFO: (11) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 16.51522ms)
Jan 10 13:37:43.585: INFO: (11) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 17.814558ms)
Jan 10 13:37:43.585: INFO: (11) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 18.03422ms)
Jan 10 13:37:43.586: INFO: (11) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 18.410293ms)
Jan 10 13:37:43.586: INFO: (11) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 18.980953ms)
Jan 10 13:37:43.601: INFO: (12) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 14.350086ms)
Jan 10 13:37:43.603: INFO: (12) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 15.748055ms)
Jan 10 13:37:43.603: INFO: (12) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 16.517108ms)
Jan 10 13:37:43.607: INFO: (12) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 19.905307ms)
Jan 10 13:37:43.607: INFO: (12) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 20.002632ms)
Jan 10 13:37:43.607: INFO: (12) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 20.053589ms)
Jan 10 13:37:43.607: INFO: (12) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 19.834787ms)
Jan 10 13:37:43.607: INFO: (12) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz/proxy/: test (200; 20.106825ms)
Jan 10 13:37:43.607: INFO: (12) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 20.435502ms)
Jan 10 13:37:43.607: INFO: (12) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 20.178731ms)
Jan 10 13:37:43.607: INFO: (12) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 20.6526ms)
Jan 10 13:37:43.607: INFO: (12) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 20.466379ms)
Jan 10 13:37:43.607: INFO: (12) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 20.430061ms)
Jan 10 13:37:43.607: INFO: (12) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 20.5306ms)
Jan 10 13:37:43.607: INFO: (12) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 20.513722ms)
Jan 10 13:37:43.607: INFO: (12) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: test<... (200; 13.215701ms)
Jan 10 13:37:43.623: INFO: (13) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 14.011213ms)
Jan 10 13:37:43.624: INFO: (13) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 16.028067ms)
Jan 10 13:37:43.624: INFO: (13) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 16.19181ms)
Jan 10 13:37:43.625: INFO: (13) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 15.896207ms)
Jan 10 13:37:43.625: INFO: (13) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 17.178562ms)
Jan 10 13:37:43.625: INFO: (13) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 16.373191ms)
Jan 10 13:37:43.625: INFO: (13) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 16.6846ms)
Jan 10 13:37:43.626: INFO: (13) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 18.056703ms)
Jan 10 13:37:43.627: INFO: (13) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz/proxy/: test (200; 18.44154ms)
Jan 10 13:37:43.627: INFO: (13) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 18.272614ms)
Jan 10 13:37:43.627: INFO: (13) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 17.871579ms)
Jan 10 13:37:43.627: INFO: (13) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 18.814918ms)
Jan 10 13:37:43.627: INFO: (13) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: test<... (200; 9.97161ms)
Jan 10 13:37:43.637: INFO: (14) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 9.591279ms)
Jan 10 13:37:43.641: INFO: (14) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 13.249317ms)
Jan 10 13:37:43.641: INFO: (14) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 13.725077ms)
Jan 10 13:37:43.642: INFO: (14) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 14.409918ms)
Jan 10 13:37:43.642: INFO: (14) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 14.293786ms)
Jan 10 13:37:43.643: INFO: (14) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 15.606697ms)
Jan 10 13:37:43.643: INFO: (14) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 15.724344ms)
Jan 10 13:37:43.643: INFO: (14) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: test (200; 16.747868ms)
Jan 10 13:37:43.652: INFO: (15) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 7.3589ms)
Jan 10 13:37:43.652: INFO: (15) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 7.2724ms)
Jan 10 13:37:43.652: INFO: (15) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 7.913557ms)
Jan 10 13:37:43.652: INFO: (15) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz/proxy/: test (200; 7.915129ms)
Jan 10 13:37:43.652: INFO: (15) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 7.799925ms)
Jan 10 13:37:43.654: INFO: (15) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 10.200768ms)
Jan 10 13:37:43.654: INFO: (15) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 10.014548ms)
Jan 10 13:37:43.655: INFO: (15) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 10.854319ms)
Jan 10 13:37:43.655: INFO: (15) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 10.841696ms)
Jan 10 13:37:43.655: INFO: (15) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: test (200; 13.884573ms)
Jan 10 13:37:43.676: INFO: (16) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 14.241356ms)
Jan 10 13:37:43.676: INFO: (16) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 14.326872ms)
Jan 10 13:37:43.676: INFO: (16) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 14.701459ms)
Jan 10 13:37:43.676: INFO: (16) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 14.785583ms)
Jan 10 13:37:43.676: INFO: (16) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 14.708235ms)
Jan 10 13:37:43.677: INFO: (16) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 15.951669ms)
Jan 10 13:37:43.678: INFO: (16) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 16.09943ms)
Jan 10 13:37:43.678: INFO: (16) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 16.194797ms)
Jan 10 13:37:43.678: INFO: (16) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 16.177687ms)
Jan 10 13:37:43.683: INFO: (17) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 4.775907ms)
Jan 10 13:37:43.692: INFO: (17) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 13.438094ms)
Jan 10 13:37:43.692: INFO: (17) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 13.314081ms)
Jan 10 13:37:43.692: INFO: (17) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 13.915727ms)
Jan 10 13:37:43.692: INFO: (17) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 13.634339ms)
Jan 10 13:37:43.692: INFO: (17) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 14.063855ms)
Jan 10 13:37:43.692: INFO: (17) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: test (200; 14.910555ms)
Jan 10 13:37:43.693: INFO: (17) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 15.192755ms)
Jan 10 13:37:43.693: INFO: (17) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 14.978886ms)
Jan 10 13:37:43.694: INFO: (17) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 15.454651ms)
Jan 10 13:37:43.694: INFO: (17) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 15.644501ms)
Jan 10 13:37:43.694: INFO: (17) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 15.938997ms)
Jan 10 13:37:43.702: INFO: (18) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 8.059887ms)
Jan 10 13:37:43.702: INFO: (18) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:462/proxy/: tls qux (200; 8.130354ms)
Jan 10 13:37:43.703: INFO: (18) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 8.787272ms)
Jan 10 13:37:43.704: INFO: (18) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz/proxy/: test (200; 9.36081ms)
Jan 10 13:37:43.704: INFO: (18) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 9.649297ms)
Jan 10 13:37:43.704: INFO: (18) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:1080/proxy/: ... (200; 9.629393ms)
Jan 10 13:37:43.704: INFO: (18) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 10.05444ms)
Jan 10 13:37:43.706: INFO: (18) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 11.607015ms)
Jan 10 13:37:43.706: INFO: (18) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: test (200; 7.480252ms)
Jan 10 13:37:43.722: INFO: (19) /api/v1/namespaces/proxy-6519/pods/http:proxy-service-hslnl-mr6kz:160/proxy/: foo (200; 8.68637ms)
Jan 10 13:37:43.723: INFO: (19) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname1/proxy/: foo (200; 10.61128ms)
Jan 10 13:37:43.724: INFO: (19) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:1080/proxy/: test<... (200; 10.552303ms)
Jan 10 13:37:43.724: INFO: (19) /api/v1/namespaces/proxy-6519/pods/proxy-service-hslnl-mr6kz:162/proxy/: bar (200; 11.010622ms)
Jan 10 13:37:43.724: INFO: (19) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:460/proxy/: tls baz (200; 11.651561ms)
Jan 10 13:37:43.725: INFO: (19) /api/v1/namespaces/proxy-6519/pods/https:proxy-service-hslnl-mr6kz:443/proxy/: ... (200; 12.336183ms)
Jan 10 13:37:43.725: INFO: (19) /api/v1/namespaces/proxy-6519/services/http:proxy-service-hslnl:portname2/proxy/: bar (200; 12.435693ms)
Jan 10 13:37:43.726: INFO: (19) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname1/proxy/: tls baz (200; 12.901461ms)
Jan 10 13:37:43.727: INFO: (19) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname2/proxy/: bar (200; 13.672759ms)
Jan 10 13:37:43.729: INFO: (19) /api/v1/namespaces/proxy-6519/services/proxy-service-hslnl:portname1/proxy/: foo (200; 16.10204ms)
Jan 10 13:37:43.729: INFO: (19) /api/v1/namespaces/proxy-6519/services/https:proxy-service-hslnl:tlsportname2/proxy/: tls qux (200; 16.200217ms)
STEP: deleting ReplicationController proxy-service-hslnl in namespace proxy-6519, will wait for the garbage collector to delete the pods
Jan 10 13:37:43.802: INFO: Deleting ReplicationController proxy-service-hslnl took: 18.403003ms
Jan 10 13:37:44.103: INFO: Terminating ReplicationController proxy-service-hslnl pods took: 301.037743ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:37:49.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6519" for this suite.
Jan 10 13:37:55.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:37:55.198: INFO: namespace proxy-6519 deletion completed in 6.171529883s

• [SLOW TEST:27.208 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:37:55.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0110 13:38:05.346854       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 10 13:38:05.346: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:38:05.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1811" for this suite.
Jan 10 13:38:11.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:38:11.549: INFO: namespace gc-1811 deletion completed in 6.197014209s

• [SLOW TEST:16.350 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:38:11.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 10 13:38:11.625: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 10 13:38:11.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2039'
Jan 10 13:38:14.377: INFO: stderr: ""
Jan 10 13:38:14.377: INFO: stdout: "service/redis-slave created\n"
Jan 10 13:38:14.378: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 10 13:38:14.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2039'
Jan 10 13:38:14.799: INFO: stderr: ""
Jan 10 13:38:14.799: INFO: stdout: "service/redis-master created\n"
Jan 10 13:38:14.799: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 10 13:38:14.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2039'
Jan 10 13:38:15.130: INFO: stderr: ""
Jan 10 13:38:15.131: INFO: stdout: "service/frontend created\n"
Jan 10 13:38:15.131: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 10 13:38:15.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2039'
Jan 10 13:38:15.462: INFO: stderr: ""
Jan 10 13:38:15.462: INFO: stdout: "deployment.apps/frontend created\n"
Jan 10 13:38:15.463: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 10 13:38:15.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2039'
Jan 10 13:38:15.791: INFO: stderr: ""
Jan 10 13:38:15.791: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 10 13:38:15.792: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 10 13:38:15.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2039'
Jan 10 13:38:16.122: INFO: stderr: ""
Jan 10 13:38:16.122: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 10 13:38:16.122: INFO: Waiting for all frontend pods to be Running.
Jan 10 13:38:36.177: INFO: Waiting for frontend to serve content.
Jan 10 13:38:36.244: INFO: Trying to add a new entry to the guestbook.
Jan 10 13:38:36.295: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 10 13:38:36.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2039'
Jan 10 13:38:36.558: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 13:38:36.558: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 10 13:38:36.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2039'
Jan 10 13:38:36.770: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 13:38:36.770: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 10 13:38:36.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2039'
Jan 10 13:38:36.896: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 13:38:36.896: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 10 13:38:36.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2039'
Jan 10 13:38:37.008: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 13:38:37.008: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 10 13:38:37.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2039'
Jan 10 13:38:37.209: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 13:38:37.210: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 10 13:38:37.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2039'
Jan 10 13:38:37.432: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 13:38:37.432: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:38:37.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2039" for this suite.
Jan 10 13:39:17.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:39:17.745: INFO: namespace kubectl-2039 deletion completed in 40.210958761s

• [SLOW TEST:66.193 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
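Each guestbook object above is created by piping its manifest into `kubectl create -f - --namespace=kubectl-2039`. As a minimal sketch of the first manifest in the log, here is the redis-slave Service built as a plain Python dict (the structure and values are taken directly from the YAML above; the function name is ours, not part of the test suite):

```python
def redis_slave_service():
    """Return the redis-slave Service manifest shown in the log as a dict.

    In the e2e run this YAML is fed to
    `kubectl create -f - --namespace=kubectl-2039`, so no namespace is
    embedded in the manifest itself.
    """
    labels = {"app": "redis", "role": "slave", "tier": "backend"}
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "redis-slave", "labels": labels},
        "spec": {
            # No targetPort is given: it defaults to the same value as port.
            "ports": [{"port": 6379}],
            # The selector must match the pod labels set by the
            # redis-slave Deployment template later in the log.
            "selector": dict(labels),
        },
    }
```

Serializing this dict to YAML (or JSON, which kubectl also accepts) reproduces the manifest the test applies.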
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:39:17.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 10 13:39:25.023: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:39:25.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2348" for this suite.
Jan 10 13:39:31.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:39:31.260: INFO: namespace container-runtime-2348 deletion completed in 6.191824618s

• [SLOW TEST:13.514 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
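The test above exercises `terminationMessagePolicy: FallbackToLogsOnError`: when a container terminates with a nonzero exit code and wrote nothing to its termination-message file, the kubelet falls back to the tail of the container log (here, the literal string `DONE`). A simplified sketch of that selection logic (the 4096-byte cap is our assumption about the fallback truncation limit, not something stated in this log):

```python
FALLBACK_LIMIT = 4096  # assumed cap on the fallback message, in bytes


def termination_message(policy, exit_code, message_file_contents, log_tail):
    """Pick a termination message the way FallbackToLogsOnError does.

    policy: "File" or "FallbackToLogsOnError"
    message_file_contents: what the container wrote to terminationMessagePath
    log_tail: the tail of the container's log output
    """
    if message_file_contents:
        # Anything written to the message file always wins.
        return message_file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        # Empty message file and a failed container: fall back to the
        # (truncated) log tail.
        return log_tail[-FALLBACK_LIMIT:]
    return ""
```

With the default `File` policy, or on a successful exit, the message stays empty even if the container logged output.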
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:39:31.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9548
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9548
STEP: Creating statefulset with conflicting port in namespace statefulset-9548
STEP: Waiting until pod test-pod starts running in namespace statefulset-9548
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-9548
Jan 10 13:39:41.540: INFO: Observed stateful pod in namespace: statefulset-9548, name: ss-0, uid: d85d0464-eb1b-4ac3-a29b-a51c58fff71a, status phase: Pending. Waiting for statefulset controller to delete.
Jan 10 13:39:46.498: INFO: Observed stateful pod in namespace: statefulset-9548, name: ss-0, uid: d85d0464-eb1b-4ac3-a29b-a51c58fff71a, status phase: Failed. Waiting for statefulset controller to delete.
Jan 10 13:39:46.564: INFO: Observed stateful pod in namespace: statefulset-9548, name: ss-0, uid: d85d0464-eb1b-4ac3-a29b-a51c58fff71a, status phase: Failed. Waiting for statefulset controller to delete.
Jan 10 13:39:46.616: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9548
STEP: Removing pod with conflicting port in namespace statefulset-9548
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-9548 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 10 13:39:56.814: INFO: Deleting all statefulset in ns statefulset-9548
Jan 10 13:39:56.820: INFO: Scaling statefulset ss to 0
Jan 10 13:40:06.863: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 13:40:06.869: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:40:06.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9548" for this suite.
Jan 10 13:40:12.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:40:13.137: INFO: namespace statefulset-9548 deletion completed in 6.17084255s

• [SLOW TEST:41.876 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
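The StatefulSet test above checks a recreate loop: while a pod with a conflicting host port exists, each new ss-0 ends up Failed, the controller deletes it, and a fresh one is created; only after the conflicting pod is removed does ss-0 reach Running. A toy model of that loop (illustrative only; names and structure are ours):

```python
def recreate_until_schedulable(port_in_use, max_rounds=10):
    """Toy model of the behaviour the test verifies.

    port_in_use: callable returning True while the conflicting pod
    still holds the host port.  Returns the list of observed
    (pod, phase/event) pairs, ending in Running once the port frees up.
    """
    events = []
    for _ in range(max_rounds):
        if port_in_use():
            events.append(("ss-0", "Failed"))   # host-port conflict
            events.append(("ss-0", "Deleted"))  # controller cleans up
        else:
            events.append(("ss-0", "Running"))
            return events
    return events  # gave up: port never freed within max_rounds
```

In the log this shows up as the repeated "status phase: Failed. Waiting for statefulset controller to delete." lines followed by a delete event and, once the conflicting pod is removed, a Running ss-0.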
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:40:13.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7911/configmap-test-6c9122e9-3edc-4052-85eb-62ceff5f8c60
STEP: Creating a pod to test consume configMaps
Jan 10 13:40:13.521: INFO: Waiting up to 5m0s for pod "pod-configmaps-523e752a-e433-4792-83e3-e4523c4fa69c" in namespace "configmap-7911" to be "success or failure"
Jan 10 13:40:13.605: INFO: Pod "pod-configmaps-523e752a-e433-4792-83e3-e4523c4fa69c": Phase="Pending", Reason="", readiness=false. Elapsed: 83.094997ms
Jan 10 13:40:15.618: INFO: Pod "pod-configmaps-523e752a-e433-4792-83e3-e4523c4fa69c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096289377s
Jan 10 13:40:17.638: INFO: Pod "pod-configmaps-523e752a-e433-4792-83e3-e4523c4fa69c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116289158s
Jan 10 13:40:19.649: INFO: Pod "pod-configmaps-523e752a-e433-4792-83e3-e4523c4fa69c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127535486s
Jan 10 13:40:21.657: INFO: Pod "pod-configmaps-523e752a-e433-4792-83e3-e4523c4fa69c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135729006s
Jan 10 13:40:23.666: INFO: Pod "pod-configmaps-523e752a-e433-4792-83e3-e4523c4fa69c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.144304297s
STEP: Saw pod success
Jan 10 13:40:23.666: INFO: Pod "pod-configmaps-523e752a-e433-4792-83e3-e4523c4fa69c" satisfied condition "success or failure"
Jan 10 13:40:23.670: INFO: Trying to get logs from node iruya-node pod pod-configmaps-523e752a-e433-4792-83e3-e4523c4fa69c container env-test: 
STEP: delete the pod
Jan 10 13:40:23.803: INFO: Waiting for pod pod-configmaps-523e752a-e433-4792-83e3-e4523c4fa69c to disappear
Jan 10 13:40:23.813: INFO: Pod pod-configmaps-523e752a-e433-4792-83e3-e4523c4fa69c no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:40:23.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7911" for this suite.
Jan 10 13:40:29.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:40:30.002: INFO: namespace configmap-7911 deletion completed in 6.167066745s

• [SLOW TEST:16.864 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
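The repeated `Phase="Pending" ... Elapsed: ...` lines above come from a simple poll loop: fetch the pod phase, sleep roughly two seconds, and repeat until the pod reaches a terminal phase or the 5m0s budget expires. A minimal sketch of that pattern (`get_phase` stands in for the real API call; parameter names are ours):

```python
import time


def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0,
                   sleep=time.sleep, clock=time.monotonic):
    """Poll get_phase() every `interval` seconds until it returns one of
    `want`, raising TimeoutError once `timeout` seconds have elapsed.
    """
    start = clock()
    while True:
        phase = get_phase()
        if phase in want:
            return phase
        if clock() - start >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```

Injecting `sleep` and `clock` keeps the helper trivially testable without waiting out real intervals.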
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:40:30.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 10 13:40:30.116: INFO: PodSpec: initContainers in spec.initContainers
Jan 10 13:41:32.616: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-bc0136e0-9f4d-4366-8aa9-237d3c9b94bd", GenerateName:"", Namespace:"init-container-7093", SelfLink:"/api/v1/namespaces/init-container-7093/pods/pod-init-bc0136e0-9f4d-4366-8aa9-237d3c9b94bd", UID:"ae315ccc-bdc1-4310-b560-8503d326cc9e", ResourceVersion:"20028398", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714260430, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"115973584"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xtdw2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00238e800), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xtdw2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xtdw2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xtdw2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0018384a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc0025f00c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001838530)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001838550)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001838558), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00183855c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260430, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260430, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260430, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714260430, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc0011089a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00217c4d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00217c540)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://0f288d6f8d7dd3ea9715f9b62cdd2072cc5310502da759eda8cdfec0ebc0f119"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001108aa0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001108a00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:41:32.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7093" for this suite.
Jan 10 13:41:54.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:41:54.877: INFO: namespace init-container-7093 deletion completed in 22.167653029s

• [SLOW TEST:84.875 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
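The init-container test above relies on the ordering guarantee the pod dump illustrates: init containers run sequentially, app containers start only after every init container succeeds, and with `restartPolicy: Always` a failing init container (init1, `/bin/false`) is retried with backoff while init2 and run1 stay Waiting. A toy model of that gate (illustrative, not the kubelet's actual code):

```python
def pod_startup(init_results, app_containers):
    """init_results: per-init-container success flags, in declared order.
    Returns the app containers allowed to start: all of them if every
    init container succeeded, none otherwise.
    """
    for ok in init_results:
        if not ok:
            # With restartPolicy Always the kubelet retries this init
            # container with backoff; later init containers and all app
            # containers never start.
            return []
    return list(app_containers)
```

In the log this is visible as init1 with `RestartCount:3`, init2 still Waiting, and run1 never leaving the Waiting state.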
SS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:41:54.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 10 13:42:03.041: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-8267ad19-97f4-479b-a478-33970e4fc17b,GenerateName:,Namespace:events-908,SelfLink:/api/v1/namespaces/events-908/pods/send-events-8267ad19-97f4-479b-a478-33970e4fc17b,UID:5dc23e15-55f8-4854-8cbd-7d5f517c05e2,ResourceVersion:20028462,Generation:0,CreationTimestamp:2020-01-10 13:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 993519366,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rs925 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs925,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-rs925 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0005598e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000559900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:41:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:42:01 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:42:01 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:41:55 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-10 13:41:55 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-10 13:42:01 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://3b6871055bf0d9f5f28bacde9d2f384dfa55d5617948f3b23ccf7bfa75b94b79}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 10 13:42:05.052: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 10 13:42:07.066: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:42:07.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-908" for this suite.
Jan 10 13:42:47.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:42:47.306: INFO: namespace events-908 deletion completed in 40.206275598s

• [SLOW TEST:52.429 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
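[Editor's note] The Events test above creates a plain single-container pod and then asserts that both a scheduler event (Scheduled) and kubelet events (Pulled/Created/Started) are recorded for it. A manifest reconstructing that pod from the dump above would look roughly like this — the name is illustrative, and the generated labels are taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: send-events-example     # illustrative; the suite generates a UID-suffixed name
  labels:
    name: foo                   # labels as shown in the ObjectMeta dump above
spec:
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    ports:
    - containerPort: 80
      protocol: TCP
```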
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:42:47.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 10 13:42:47.480: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 10 13:42:47.488: INFO: Waiting for terminating namespaces to be deleted...
Jan 10 13:42:47.491: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 10 13:42:47.503: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 10 13:42:47.503: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 10 13:42:47.503: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 10 13:42:47.503: INFO: 	Container weave ready: true, restart count 0
Jan 10 13:42:47.503: INFO: 	Container weave-npc ready: true, restart count 0
Jan 10 13:42:47.503: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 10 13:42:47.515: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 10 13:42:47.515: INFO: 	Container coredns ready: true, restart count 0
Jan 10 13:42:47.515: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 10 13:42:47.515: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 10 13:42:47.515: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 10 13:42:47.515: INFO: 	Container weave ready: true, restart count 0
Jan 10 13:42:47.515: INFO: 	Container weave-npc ready: true, restart count 0
Jan 10 13:42:47.515: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 10 13:42:47.515: INFO: 	Container coredns ready: true, restart count 0
Jan 10 13:42:47.515: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 10 13:42:47.515: INFO: 	Container etcd ready: true, restart count 0
Jan 10 13:42:47.515: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 10 13:42:47.515: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 10 13:42:47.515: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 10 13:42:47.516: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 10 13:42:47.516: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 10 13:42:47.516: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan 10 13:42:47.661: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 10 13:42:47.662: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 10 13:42:47.662: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 10 13:42:47.662: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan 10 13:42:47.662: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan 10 13:42:47.662: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 10 13:42:47.662: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan 10 13:42:47.662: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 10 13:42:47.662: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan 10 13:42:47.662: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6dacd645-db32-407c-8492-8054b3a535bc.15e88a552a78ed25], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9352/filler-pod-6dacd645-db32-407c-8492-8054b3a535bc to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6dacd645-db32-407c-8492-8054b3a535bc.15e88a565dbc06f5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6dacd645-db32-407c-8492-8054b3a535bc.15e88a5739e976cf], Reason = [Created], Message = [Created container filler-pod-6dacd645-db32-407c-8492-8054b3a535bc]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6dacd645-db32-407c-8492-8054b3a535bc.15e88a575dce01d3], Reason = [Started], Message = [Started container filler-pod-6dacd645-db32-407c-8492-8054b3a535bc]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-790d8367-045b-4871-a29e-7752e261aa42.15e88a552584bd21], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9352/filler-pod-790d8367-045b-4871-a29e-7752e261aa42 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-790d8367-045b-4871-a29e-7752e261aa42.15e88a563e93bfb0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-790d8367-045b-4871-a29e-7752e261aa42.15e88a570bf92e3d], Reason = [Created], Message = [Created container filler-pod-790d8367-045b-4871-a29e-7752e261aa42]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-790d8367-045b-4871-a29e-7752e261aa42.15e88a572cb7295d], Reason = [Started], Message = [Started container filler-pod-790d8367-045b-4871-a29e-7752e261aa42]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e88a57f7c5e948], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:43:00.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9352" for this suite.
Jan 10 13:43:08.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:43:09.069: INFO: namespace sched-pred-9352 deletion completed in 8.105555387s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.762 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
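[Editor's note] The predicate test above sums the existing CPU requests per node (logged line by line), launches one "filler" pod per node sized to consume the remaining allocatable CPU, and then verifies that an additional pod with a CPU request fails to schedule with "Insufficient cpu". A hedged sketch of such a filler pod — the name and request value are illustrative, since the real value is computed from node allocatable:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-example      # illustrative; the suite generates a UID-suffixed name
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "800m"             # illustrative: node allocatable minus requests logged above
      limits:
        cpu: "800m"
```

Any further pod with a non-zero CPU request then triggers the FailedScheduling event seen above ("0/2 nodes are available: 2 Insufficient cpu.").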
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:43:09.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 13:43:10.619: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:43:11.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6988" for this suite.
Jan 10 13:43:17.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:43:17.972: INFO: namespace custom-resource-definition-6988 deletion completed in 6.156585879s

• [SLOW TEST:8.903 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
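[Editor's note] The CRD test above only creates and deletes a definition object. Against the v1.15 apiserver used in this run, a minimal definition uses the `apiextensions.k8s.io/v1beta1` API; group, names, and version below are illustrative:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # pre-GA API, appropriate for v1.15
kind: CustomResourceDefinition
metadata:
  name: foos.example.com       # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
```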
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:43:17.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:43:27.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6593" for this suite.
Jan 10 13:43:49.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:43:49.423: INFO: namespace replication-controller-6593 deletion completed in 22.15152864s

• [SLOW TEST:31.450 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
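[Editor's note] The adoption test above first creates a bare pod labeled `name: pod-adoption`, then creates a replication controller whose selector matches that label; the controller adopts the existing pod instead of starting a new replica. A sketch of the two objects (image is illustrative):

```yaml
# Orphan pod carrying the label the controller will select on
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14    # illustrative image
---
# Controller with a matching selector: it sets itself as the pod's
# controller ownerReference rather than creating a second replica
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14
```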
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:43:49.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-42c83005-80fc-43d8-8471-dd182adf25ab
STEP: Creating a pod to test consume secrets
Jan 10 13:43:49.659: INFO: Waiting up to 5m0s for pod "pod-secrets-bdec7101-7492-4228-925a-5559bac61307" in namespace "secrets-9086" to be "success or failure"
Jan 10 13:43:49.674: INFO: Pod "pod-secrets-bdec7101-7492-4228-925a-5559bac61307": Phase="Pending", Reason="", readiness=false. Elapsed: 15.162529ms
Jan 10 13:43:51.684: INFO: Pod "pod-secrets-bdec7101-7492-4228-925a-5559bac61307": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024818279s
Jan 10 13:43:53.692: INFO: Pod "pod-secrets-bdec7101-7492-4228-925a-5559bac61307": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032853098s
Jan 10 13:43:55.701: INFO: Pod "pod-secrets-bdec7101-7492-4228-925a-5559bac61307": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041488594s
Jan 10 13:43:57.712: INFO: Pod "pod-secrets-bdec7101-7492-4228-925a-5559bac61307": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053058861s
STEP: Saw pod success
Jan 10 13:43:57.713: INFO: Pod "pod-secrets-bdec7101-7492-4228-925a-5559bac61307" satisfied condition "success or failure"
Jan 10 13:43:57.719: INFO: Trying to get logs from node iruya-node pod pod-secrets-bdec7101-7492-4228-925a-5559bac61307 container secret-volume-test: 
STEP: delete the pod
Jan 10 13:43:57.879: INFO: Waiting for pod pod-secrets-bdec7101-7492-4228-925a-5559bac61307 to disappear
Jan 10 13:43:57.887: INFO: Pod pod-secrets-bdec7101-7492-4228-925a-5559bac61307 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:43:57.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9086" for this suite.
Jan 10 13:44:04.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:44:04.154: INFO: namespace secrets-9086 deletion completed in 6.240222929s

• [SLOW TEST:14.731 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
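[Editor's note] "Consumable in multiple volumes" means the same secret is mounted twice, at two paths, in one pod. A sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29  # illustrative; the suite uses a mounttest image
    command: ["cat", "/etc/secret-volume-1/data-1", "/etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  restartPolicy: Never
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-example      # same secret referenced by both volumes
  - name: secret-volume-2
    secret:
      secretName: secret-test-example
```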
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:44:04.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-f3a68759-1e33-462a-adb2-6761cd5e3a49
STEP: Creating a pod to test consume secrets
Jan 10 13:44:04.293: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e828d4b3-aa4a-4790-b08e-158e546f6a01" in namespace "projected-2382" to be "success or failure"
Jan 10 13:44:04.300: INFO: Pod "pod-projected-secrets-e828d4b3-aa4a-4790-b08e-158e546f6a01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095835ms
Jan 10 13:44:06.328: INFO: Pod "pod-projected-secrets-e828d4b3-aa4a-4790-b08e-158e546f6a01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03462382s
Jan 10 13:44:08.340: INFO: Pod "pod-projected-secrets-e828d4b3-aa4a-4790-b08e-158e546f6a01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046153522s
Jan 10 13:44:10.350: INFO: Pod "pod-projected-secrets-e828d4b3-aa4a-4790-b08e-158e546f6a01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056131292s
Jan 10 13:44:12.361: INFO: Pod "pod-projected-secrets-e828d4b3-aa4a-4790-b08e-158e546f6a01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067727103s
STEP: Saw pod success
Jan 10 13:44:12.362: INFO: Pod "pod-projected-secrets-e828d4b3-aa4a-4790-b08e-158e546f6a01" satisfied condition "success or failure"
Jan 10 13:44:12.372: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e828d4b3-aa4a-4790-b08e-158e546f6a01 container projected-secret-volume-test: 
STEP: delete the pod
Jan 10 13:44:12.909: INFO: Waiting for pod pod-projected-secrets-e828d4b3-aa4a-4790-b08e-158e546f6a01 to disappear
Jan 10 13:44:12.950: INFO: Pod pod-projected-secrets-e828d4b3-aa4a-4790-b08e-158e546f6a01 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:44:12.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2382" for this suite.
Jan 10 13:44:19.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:44:19.165: INFO: namespace projected-2382 deletion completed in 6.196771864s

• [SLOW TEST:15.010 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
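[Editor's note] The `defaultMode` being tested sets the file permission bits for every key projected into the volume, which is why the test is tagged [LinuxOnly]. A volume fragment sketch (secret name illustrative):

```yaml
volumes:
- name: projected-secret-volume
  projected:
    defaultMode: 0400          # octal; each projected file gets mode r--------
    sources:
    - secret:
        name: projected-secret-test-example
```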
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:44:19.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-7e02d22f-86f8-4b4c-94c3-662edc352b79
STEP: Creating a pod to test consume configMaps
Jan 10 13:44:19.328: INFO: Waiting up to 5m0s for pod "pod-configmaps-6258e387-7cc8-4746-8860-4842a0d71898" in namespace "configmap-411" to be "success or failure"
Jan 10 13:44:19.338: INFO: Pod "pod-configmaps-6258e387-7cc8-4746-8860-4842a0d71898": Phase="Pending", Reason="", readiness=false. Elapsed: 9.696276ms
Jan 10 13:44:21.371: INFO: Pod "pod-configmaps-6258e387-7cc8-4746-8860-4842a0d71898": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043522145s
Jan 10 13:44:23.385: INFO: Pod "pod-configmaps-6258e387-7cc8-4746-8860-4842a0d71898": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057109779s
Jan 10 13:44:25.417: INFO: Pod "pod-configmaps-6258e387-7cc8-4746-8860-4842a0d71898": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088891294s
Jan 10 13:44:27.423: INFO: Pod "pod-configmaps-6258e387-7cc8-4746-8860-4842a0d71898": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095593804s
STEP: Saw pod success
Jan 10 13:44:27.424: INFO: Pod "pod-configmaps-6258e387-7cc8-4746-8860-4842a0d71898" satisfied condition "success or failure"
Jan 10 13:44:27.430: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6258e387-7cc8-4746-8860-4842a0d71898 container configmap-volume-test: 
STEP: delete the pod
Jan 10 13:44:27.577: INFO: Waiting for pod pod-configmaps-6258e387-7cc8-4746-8860-4842a0d71898 to disappear
Jan 10 13:44:27.588: INFO: Pod pod-configmaps-6258e387-7cc8-4746-8860-4842a0d71898 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:44:27.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-411" for this suite.
Jan 10 13:44:33.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:44:33.910: INFO: namespace configmap-411 deletion completed in 6.310604089s

• [SLOW TEST:14.744 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
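[Editor's note] "With mappings as non-root" combines two things: an `items` list that remaps ConfigMap keys to file paths, and a pod-level `runAsUser` so the files are read by an unprivileged UID. A sketch with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  securityContext:
    runAsUser: 1000            # non-root UID (illustrative value)
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29  # illustrative
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-example
      items:                   # the "mapping": key data-1 appears as path/to/data-2
      - key: data-1
        path: path/to/data-2
```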
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:44:33.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan 10 13:44:34.034: INFO: Waiting up to 5m0s for pod "client-containers-66ca8226-32e0-4783-a58b-1d74e9b437db" in namespace "containers-8757" to be "success or failure"
Jan 10 13:44:34.045: INFO: Pod "client-containers-66ca8226-32e0-4783-a58b-1d74e9b437db": Phase="Pending", Reason="", readiness=false. Elapsed: 10.080381ms
Jan 10 13:44:36.061: INFO: Pod "client-containers-66ca8226-32e0-4783-a58b-1d74e9b437db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026694378s
Jan 10 13:44:38.078: INFO: Pod "client-containers-66ca8226-32e0-4783-a58b-1d74e9b437db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043368644s
Jan 10 13:44:40.100: INFO: Pod "client-containers-66ca8226-32e0-4783-a58b-1d74e9b437db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065180394s
Jan 10 13:44:42.112: INFO: Pod "client-containers-66ca8226-32e0-4783-a58b-1d74e9b437db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077731462s
STEP: Saw pod success
Jan 10 13:44:42.113: INFO: Pod "client-containers-66ca8226-32e0-4783-a58b-1d74e9b437db" satisfied condition "success or failure"
Jan 10 13:44:42.118: INFO: Trying to get logs from node iruya-node pod client-containers-66ca8226-32e0-4783-a58b-1d74e9b437db container test-container: 
STEP: delete the pod
Jan 10 13:44:42.255: INFO: Waiting for pod client-containers-66ca8226-32e0-4783-a58b-1d74e9b437db to disappear
Jan 10 13:44:42.264: INFO: Pod client-containers-66ca8226-32e0-4783-a58b-1d74e9b437db no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:44:42.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8757" for this suite.
Jan 10 13:44:48.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:44:48.546: INFO: namespace containers-8757 deletion completed in 6.272059352s

• [SLOW TEST:14.634 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
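[Editor's note] Overriding "the image's default arguments (docker cmd)" is done with the container `args` field, which replaces the image's CMD while leaving its ENTRYPOINT intact; `command` would replace the ENTRYPOINT as well. A sketch (image and arguments illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29  # illustrative image
    args: ["echo", "override", "arguments"]  # replaces CMD, not ENTRYPOINT
  restartPolicy: Never
```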
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:44:48.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-a2747c2f-a033-42df-a23b-c64537413ee8
STEP: Creating secret with name s-test-opt-upd-66efbc99-644b-47a9-bbe7-fb4b73b00652
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a2747c2f-a033-42df-a23b-c64537413ee8
STEP: Updating secret s-test-opt-upd-66efbc99-644b-47a9-bbe7-fb4b73b00652
STEP: Creating secret with name s-test-opt-create-0a40dff0-8b80-491f-8233-316ce273eb41
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:45:01.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9598" for this suite.
Jan 10 13:45:23.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:45:23.256: INFO: namespace secrets-9598 deletion completed in 22.146460925s

• [SLOW TEST:34.709 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
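[Editor's note] The "optional updates" steps above exercise `optional: true` secret volumes: one referenced secret is deleted, one is updated, and one is created only after the pod starts, and the pod's mounted view must converge without the pod failing. A volume fragment sketch (secret names illustrative):

```yaml
volumes:
- name: deleted-secret
  secret:
    secretName: s-test-opt-del-example
    optional: true     # pod stays healthy even after this secret is deleted
- name: updated-secret
  secret:
    secretName: s-test-opt-upd-example
    optional: true     # kubelet refreshes the projected files after the update
- name: created-secret
  secret:
    secretName: s-test-opt-create-example
    optional: true     # mount starts empty; files appear once the secret exists
```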
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:45:23.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:45:33.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1842" for this suite.
Jan 10 13:46:19.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:46:19.609: INFO: namespace kubelet-test-1842 deletion completed in 46.180532215s

• [SLOW TEST:56.353 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
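[Editor's note] The read-only test above schedules a busybox container with `readOnlyRootFilesystem: true` and expects a write to the root filesystem to fail. A sketch of such a pod (image tag and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29  # illustrative tag
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true  # the write to /file fails: read-only file system
  restartPolicy: Never
```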
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:46:19.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 13:46:19.759: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5bf4804b-ac87-4992-98d0-b7db39f53573" in namespace "downward-api-9662" to be "success or failure"
Jan 10 13:46:19.770: INFO: Pod "downwardapi-volume-5bf4804b-ac87-4992-98d0-b7db39f53573": Phase="Pending", Reason="", readiness=false. Elapsed: 10.47415ms
Jan 10 13:46:21.786: INFO: Pod "downwardapi-volume-5bf4804b-ac87-4992-98d0-b7db39f53573": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026359278s
Jan 10 13:46:23.804: INFO: Pod "downwardapi-volume-5bf4804b-ac87-4992-98d0-b7db39f53573": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043886897s
Jan 10 13:46:25.811: INFO: Pod "downwardapi-volume-5bf4804b-ac87-4992-98d0-b7db39f53573": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050822842s
Jan 10 13:46:27.827: INFO: Pod "downwardapi-volume-5bf4804b-ac87-4992-98d0-b7db39f53573": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066552201s
Jan 10 13:46:29.836: INFO: Pod "downwardapi-volume-5bf4804b-ac87-4992-98d0-b7db39f53573": Phase="Pending", Reason="", readiness=false. Elapsed: 10.076278551s
Jan 10 13:46:31.845: INFO: Pod "downwardapi-volume-5bf4804b-ac87-4992-98d0-b7db39f53573": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.085218099s
STEP: Saw pod success
Jan 10 13:46:31.845: INFO: Pod "downwardapi-volume-5bf4804b-ac87-4992-98d0-b7db39f53573" satisfied condition "success or failure"
Jan 10 13:46:31.853: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5bf4804b-ac87-4992-98d0-b7db39f53573 container client-container: <nil>
STEP: delete the pod
Jan 10 13:46:32.465: INFO: Waiting for pod downwardapi-volume-5bf4804b-ac87-4992-98d0-b7db39f53573 to disappear
Jan 10 13:46:32.487: INFO: Pod downwardapi-volume-5bf4804b-ac87-4992-98d0-b7db39f53573 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:46:32.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9662" for this suite.
Jan 10 13:46:38.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:46:38.669: INFO: namespace downward-api-9662 deletion completed in 6.173327267s

• [SLOW TEST:19.058 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:46:38.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:46:44.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6967" for this suite.
Jan 10 13:46:52.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:46:52.488: INFO: namespace watch-6967 deletion completed in 8.301667469s

• [SLOW TEST:13.812 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:46:52.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 10 13:46:52.576: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 10 13:46:52.614: INFO: Waiting for terminating namespaces to be deleted...
Jan 10 13:46:52.616: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Jan 10 13:46:52.626: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 10 13:46:52.626: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 10 13:46:52.626: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 10 13:46:52.626: INFO: 	Container weave ready: true, restart count 0
Jan 10 13:46:52.626: INFO: 	Container weave-npc ready: true, restart count 0
Jan 10 13:46:52.626: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 10 13:46:52.633: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 10 13:46:52.633: INFO: 	Container weave ready: true, restart count 0
Jan 10 13:46:52.633: INFO: 	Container weave-npc ready: true, restart count 0
Jan 10 13:46:52.633: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 10 13:46:52.633: INFO: 	Container coredns ready: true, restart count 0
Jan 10 13:46:52.633: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 10 13:46:52.633: INFO: 	Container etcd ready: true, restart count 0
Jan 10 13:46:52.633: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 10 13:46:52.633: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 10 13:46:52.633: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 10 13:46:52.633: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 10 13:46:52.633: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 10 13:46:52.633: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 10 13:46:52.633: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 10 13:46:52.633: INFO: 	Container coredns ready: true, restart count 0
Jan 10 13:46:52.633: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 10 13:46:52.633: INFO: 	Container kube-scheduler ready: true, restart count 13
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-cc490547-2e90-41b7-885e-1fedb4e45bd7 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-cc490547-2e90-41b7-885e-1fedb4e45bd7 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-cc490547-2e90-41b7-885e-1fedb4e45bd7
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:47:11.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8789" for this suite.
Jan 10 13:47:25.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:47:25.208: INFO: namespace sched-pred-8789 deletion completed in 14.16250134s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:32.720 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:47:25.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan 10 13:47:25.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 10 13:47:25.638: INFO: stderr: ""
Jan 10 13:47:25.638: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:47:25.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8862" for this suite.
Jan 10 13:47:31.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:47:31.872: INFO: namespace kubectl-8862 deletion completed in 6.218742108s

• [SLOW TEST:6.662 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:47:31.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 13:47:31.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8132'
Jan 10 13:47:32.423: INFO: stderr: ""
Jan 10 13:47:32.424: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan 10 13:47:32.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8132'
Jan 10 13:47:32.997: INFO: stderr: ""
Jan 10 13:47:32.998: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 10 13:47:34.012: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 13:47:34.012: INFO: Found 0 / 1
Jan 10 13:47:35.372: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 13:47:35.372: INFO: Found 0 / 1
Jan 10 13:47:36.013: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 13:47:36.013: INFO: Found 0 / 1
Jan 10 13:47:37.009: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 13:47:37.009: INFO: Found 0 / 1
Jan 10 13:47:38.012: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 13:47:38.012: INFO: Found 0 / 1
Jan 10 13:47:39.007: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 13:47:39.007: INFO: Found 0 / 1
Jan 10 13:47:40.007: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 13:47:40.007: INFO: Found 1 / 1
Jan 10 13:47:40.007: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 10 13:47:40.015: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 13:47:40.015: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 10 13:47:40.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-gn69z --namespace=kubectl-8132'
Jan 10 13:47:40.201: INFO: stderr: ""
Jan 10 13:47:40.202: INFO: stdout: "Name:           redis-master-gn69z\nNamespace:      kubectl-8132\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Fri, 10 Jan 2020 13:47:32 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://7caa33926d4168931dfc2e3a851ef9b363fe34164fbc72e019ce26b40041285a\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 10 Jan 2020 13:47:39 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z6c7v (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-z6c7v:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-z6c7v\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  8s    default-scheduler    Successfully assigned kubectl-8132/redis-master-gn69z to iruya-node\n  Normal  Pulled     4s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Jan 10 13:47:40.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-8132'
Jan 10 13:47:40.321: INFO: stderr: ""
Jan 10 13:47:40.321: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-8132\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: redis-master-gn69z\n"
Jan 10 13:47:40.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-8132'
Jan 10 13:47:40.541: INFO: stderr: ""
Jan 10 13:47:40.541: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-8132\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.101.96.7\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan 10 13:47:40.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Jan 10 13:47:40.657: INFO: stderr: ""
Jan 10 13:47:40.657: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Fri, 10 Jan 2020 13:46:56 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 10 Jan 2020 13:46:56 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 10 Jan 2020 13:46:56 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 10 Jan 2020 13:46:56 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         159d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         90d\n  kubectl-8132               redis-master-gn69z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Jan 10 13:47:40.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8132'
Jan 10 13:47:40.800: INFO: stderr: ""
Jan 10 13:47:40.800: INFO: stdout: "Name:         kubectl-8132\nLabels:       e2e-framework=kubectl\n              e2e-run=fdf26298-6274-49fb-a625-32d68d475e0c\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:47:40.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8132" for this suite.
Jan 10 13:48:02.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:48:02.984: INFO: namespace kubectl-8132 deletion completed in 22.17830407s

• [SLOW TEST:31.110 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:48:02.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 10 13:48:03.101: INFO: Waiting up to 5m0s for pod "pod-e56a874c-78b3-45a3-90a5-e2f70049d41f" in namespace "emptydir-6054" to be "success or failure"
Jan 10 13:48:03.152: INFO: Pod "pod-e56a874c-78b3-45a3-90a5-e2f70049d41f": Phase="Pending", Reason="", readiness=false. Elapsed: 50.406546ms
Jan 10 13:48:05.162: INFO: Pod "pod-e56a874c-78b3-45a3-90a5-e2f70049d41f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060574467s
Jan 10 13:48:07.181: INFO: Pod "pod-e56a874c-78b3-45a3-90a5-e2f70049d41f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079421721s
Jan 10 13:48:09.192: INFO: Pod "pod-e56a874c-78b3-45a3-90a5-e2f70049d41f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090342455s
Jan 10 13:48:11.199: INFO: Pod "pod-e56a874c-78b3-45a3-90a5-e2f70049d41f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.097090302s
STEP: Saw pod success
Jan 10 13:48:11.199: INFO: Pod "pod-e56a874c-78b3-45a3-90a5-e2f70049d41f" satisfied condition "success or failure"
Jan 10 13:48:11.202: INFO: Trying to get logs from node iruya-node pod pod-e56a874c-78b3-45a3-90a5-e2f70049d41f container test-container: <nil>
STEP: delete the pod
Jan 10 13:48:11.430: INFO: Waiting for pod pod-e56a874c-78b3-45a3-90a5-e2f70049d41f to disappear
Jan 10 13:48:11.442: INFO: Pod pod-e56a874c-78b3-45a3-90a5-e2f70049d41f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:48:11.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6054" for this suite.
Jan 10 13:48:17.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:48:17.710: INFO: namespace emptydir-6054 deletion completed in 6.261592719s

• [SLOW TEST:14.726 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:48:17.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 10 13:48:17.884: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:48:36.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1067" for this suite.
Jan 10 13:48:42.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:48:42.818: INFO: namespace pods-1067 deletion completed in 6.2618962s

• [SLOW TEST:25.107 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:48:42.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 10 13:48:42.910: INFO: Waiting up to 5m0s for pod "pod-44955f16-ff3a-4d80-80ea-c81b11d0dd4f" in namespace "emptydir-7814" to be "success or failure"
Jan 10 13:48:42.930: INFO: Pod "pod-44955f16-ff3a-4d80-80ea-c81b11d0dd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.452577ms
Jan 10 13:48:44.952: INFO: Pod "pod-44955f16-ff3a-4d80-80ea-c81b11d0dd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041863643s
Jan 10 13:48:46.962: INFO: Pod "pod-44955f16-ff3a-4d80-80ea-c81b11d0dd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051834142s
Jan 10 13:48:48.976: INFO: Pod "pod-44955f16-ff3a-4d80-80ea-c81b11d0dd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066128676s
Jan 10 13:48:51.016: INFO: Pod "pod-44955f16-ff3a-4d80-80ea-c81b11d0dd4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105261253s
STEP: Saw pod success
Jan 10 13:48:51.019: INFO: Pod "pod-44955f16-ff3a-4d80-80ea-c81b11d0dd4f" satisfied condition "success or failure"
Jan 10 13:48:51.025: INFO: Trying to get logs from node iruya-node pod pod-44955f16-ff3a-4d80-80ea-c81b11d0dd4f container test-container: 
STEP: delete the pod
Jan 10 13:48:51.313: INFO: Waiting for pod pod-44955f16-ff3a-4d80-80ea-c81b11d0dd4f to disappear
Jan 10 13:48:51.368: INFO: Pod pod-44955f16-ff3a-4d80-80ea-c81b11d0dd4f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:48:51.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7814" for this suite.
Jan 10 13:48:57.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:48:57.600: INFO: namespace emptydir-7814 deletion completed in 6.219381603s

• [SLOW TEST:14.782 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
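The (root,0666,tmpfs) EmptyDir test above creates a pod whose volume is memory-backed and verifies the file mode inside it. A minimal sketch of an equivalent pod spec, assuming illustrative names and a busybox image (the e2e framework's actual generated spec differs):

```shell
# Hedged sketch: a pod manifest roughly equivalent to what the
# (root,0666,tmpfs) EmptyDir test exercises. The pod name and the
# busybox command are illustrative, not the framework's generated ones.
cat > /tmp/emptydir-0666-tmpfs.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36
    # Write a file as root with mode 0666 and print its permissions;
    # the test asserts the mount is tmpfs and the mode is preserved.
    command: ["sh", "-c",
      "touch /mnt/test && chmod 0666 /mnt/test && stat -c '%a' /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir
EOF
grep -c 'medium: Memory' /tmp/emptydir-0666-tmpfs.yaml
```

On a live cluster this would be submitted with `kubectl apply -f` and the pod's phase polled until Succeeded, mirroring the "success or failure" wait in the log.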
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:48:57.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 13:48:57.919: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 10 13:48:57.986: INFO: Number of nodes with available pods: 0
Jan 10 13:48:57.986: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:48:59.892: INFO: Number of nodes with available pods: 0
Jan 10 13:48:59.892: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:00.375: INFO: Number of nodes with available pods: 0
Jan 10 13:49:00.376: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:01.306: INFO: Number of nodes with available pods: 0
Jan 10 13:49:01.306: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:02.015: INFO: Number of nodes with available pods: 0
Jan 10 13:49:02.015: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:03.006: INFO: Number of nodes with available pods: 0
Jan 10 13:49:03.006: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:05.238: INFO: Number of nodes with available pods: 0
Jan 10 13:49:05.238: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:06.146: INFO: Number of nodes with available pods: 0
Jan 10 13:49:06.146: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:07.018: INFO: Number of nodes with available pods: 0
Jan 10 13:49:07.018: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:08.041: INFO: Number of nodes with available pods: 0
Jan 10 13:49:08.041: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:09.004: INFO: Number of nodes with available pods: 2
Jan 10 13:49:09.004: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 10 13:49:09.106: INFO: Wrong image for pod: daemon-set-d6gtz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:09.106: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:10.138: INFO: Wrong image for pod: daemon-set-d6gtz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:10.138: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:11.338: INFO: Wrong image for pod: daemon-set-d6gtz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:11.338: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:12.145: INFO: Wrong image for pod: daemon-set-d6gtz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:12.146: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:13.138: INFO: Wrong image for pod: daemon-set-d6gtz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:13.138: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:14.154: INFO: Wrong image for pod: daemon-set-d6gtz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:14.154: INFO: Pod daemon-set-d6gtz is not available
Jan 10 13:49:14.154: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:15.136: INFO: Wrong image for pod: daemon-set-d6gtz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:15.136: INFO: Pod daemon-set-d6gtz is not available
Jan 10 13:49:15.136: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:16.143: INFO: Wrong image for pod: daemon-set-d6gtz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:16.143: INFO: Pod daemon-set-d6gtz is not available
Jan 10 13:49:16.143: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:17.139: INFO: Wrong image for pod: daemon-set-d6gtz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:17.139: INFO: Pod daemon-set-d6gtz is not available
Jan 10 13:49:17.139: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:18.137: INFO: Pod daemon-set-pb8f9 is not available
Jan 10 13:49:18.138: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:19.142: INFO: Pod daemon-set-pb8f9 is not available
Jan 10 13:49:19.142: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:20.136: INFO: Pod daemon-set-pb8f9 is not available
Jan 10 13:49:20.136: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:21.141: INFO: Pod daemon-set-pb8f9 is not available
Jan 10 13:49:21.141: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:22.137: INFO: Pod daemon-set-pb8f9 is not available
Jan 10 13:49:22.137: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:23.135: INFO: Pod daemon-set-pb8f9 is not available
Jan 10 13:49:23.136: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:24.177: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:25.138: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:26.139: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:27.140: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:28.145: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:29.138: INFO: Wrong image for pod: daemon-set-zgqnr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 13:49:29.138: INFO: Pod daemon-set-zgqnr is not available
Jan 10 13:49:30.139: INFO: Pod daemon-set-nlqjn is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 10 13:49:30.158: INFO: Number of nodes with available pods: 1
Jan 10 13:49:30.158: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:31.175: INFO: Number of nodes with available pods: 1
Jan 10 13:49:31.175: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:32.208: INFO: Number of nodes with available pods: 1
Jan 10 13:49:32.208: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:33.178: INFO: Number of nodes with available pods: 1
Jan 10 13:49:33.178: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:34.188: INFO: Number of nodes with available pods: 1
Jan 10 13:49:34.188: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:35.173: INFO: Number of nodes with available pods: 1
Jan 10 13:49:35.173: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:36.178: INFO: Number of nodes with available pods: 1
Jan 10 13:49:36.178: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:49:37.174: INFO: Number of nodes with available pods: 2
Jan 10 13:49:37.174: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4554, will wait for the garbage collector to delete the pods
Jan 10 13:49:37.265: INFO: Deleting DaemonSet.extensions daemon-set took: 17.823169ms
Jan 10 13:49:37.665: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.535731ms
Jan 10 13:49:47.873: INFO: Number of nodes with available pods: 0
Jan 10 13:49:47.873: INFO: Number of running nodes: 0, number of available pods: 0
Jan 10 13:49:47.903: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4554/daemonsets","resourceVersion":"20029732"},"items":null}

Jan 10 13:49:47.908: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4554/pods","resourceVersion":"20029732"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:49:47.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4554" for this suite.
Jan 10 13:49:53.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:49:54.032: INFO: namespace daemonsets-4554 deletion completed in 6.108960349s

• [SLOW TEST:56.431 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
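The DaemonSet test above creates a simple DaemonSet, swaps its image from nginx:1.14-alpine to the redis test image, and waits for the RollingUpdate strategy to replace each pod. A sketch of that flow, assuming illustrative field values; the kubectl steps are commented out since they need a live cluster:

```shell
# Hedged sketch of the flow the test exercises: a DaemonSet with a
# RollingUpdate strategy whose image is then changed, causing old pods
# to be deleted and recreated node by node.
cat > /tmp/daemon-set.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels: {app: daemon-set}
  updateStrategy:
    type: RollingUpdate            # replace pods in place on each node
    rollingUpdate:
      maxUnavailable: 1            # illustrative; at most one node without a pod
  template:
    metadata:
      labels: {app: daemon-set}
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# With a live cluster, the update and wait would be (not run here):
#   kubectl apply -f /tmp/daemon-set.yaml
#   kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
#   kubectl rollout status daemonset/daemon-set
grep 'type: RollingUpdate' /tmp/daemon-set.yaml
```

The repeated "Wrong image for pod" lines in the log correspond to the rollout window, during which some pods still run the old image.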
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:49:54.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5071.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5071.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5071.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5071.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5071.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5071.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5071.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5071.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5071.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5071.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5071.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5071.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5071.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 131.173.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.173.131_udp@PTR;check="$$(dig +tcp +noall +answer +search 131.173.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.173.131_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5071.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5071.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5071.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5071.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5071.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5071.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5071.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5071.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5071.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5071.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5071.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5071.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5071.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 131.173.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.173.131_udp@PTR;check="$$(dig +tcp +noall +answer +search 131.173.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.173.131_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 10 13:50:06.302: INFO: Unable to read wheezy_udp@dns-test-service.dns-5071.svc.cluster.local from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.312: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5071.svc.cluster.local from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.320: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5071.svc.cluster.local from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.332: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5071.svc.cluster.local from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.343: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-5071.svc.cluster.local from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.373: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-5071.svc.cluster.local from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.382: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.391: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.396: INFO: Unable to read 10.108.173.131_udp@PTR from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.404: INFO: Unable to read 10.108.173.131_tcp@PTR from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.457: INFO: Unable to read jessie_udp@dns-test-service.dns-5071.svc.cluster.local from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.468: INFO: Unable to read jessie_tcp@dns-test-service.dns-5071.svc.cluster.local from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.474: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5071.svc.cluster.local from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.481: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5071.svc.cluster.local from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.488: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-5071.svc.cluster.local from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.494: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-5071.svc.cluster.local from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.505: INFO: Unable to read jessie_udp@PodARecord from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.511: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.515: INFO: Unable to read 10.108.173.131_udp@PTR from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.521: INFO: Unable to read 10.108.173.131_tcp@PTR from pod dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913: the server could not find the requested resource (get pods dns-test-c5448796-758d-481b-a258-76275e70a913)
Jan 10 13:50:06.521: INFO: Lookups using dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913 failed for: [wheezy_udp@dns-test-service.dns-5071.svc.cluster.local wheezy_tcp@dns-test-service.dns-5071.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5071.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5071.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-5071.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-5071.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.108.173.131_udp@PTR 10.108.173.131_tcp@PTR jessie_udp@dns-test-service.dns-5071.svc.cluster.local jessie_tcp@dns-test-service.dns-5071.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5071.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5071.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-5071.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-5071.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.108.173.131_udp@PTR 10.108.173.131_tcp@PTR]

Jan 10 13:50:11.682: INFO: DNS probes using dns-5071/dns-test-c5448796-758d-481b-a258-76275e70a913 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:50:12.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5071" for this suite.
Jan 10 13:50:18.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:50:18.415: INFO: namespace dns-5071 deletion completed in 6.194095611s

• [SLOW TEST:24.383 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
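Each dig loop above writes an OK marker file only when the answer section of a query is non-empty. The core check can be isolated as below; here it runs against a static sample answer instead of live cluster DNS, purely to show the check's shape (the helper name and the sample record are illustrative):

```shell
# Hedged sketch of one probe iteration from the dig loops above:
# a marker file is written only when the (simulated) answer is non-empty.
mkdir -p /tmp/results
check_dns() {
  # $1 = marker name, $2 = simulated `dig +noall +answer` output
  check="$2" && test -n "$check" && echo OK > "/tmp/results/$1"
}
# Simulated A-record answer for the headless service (IP as in the log):
check_dns "wheezy_udp@dns-test-service" \
  "dns-test-service.dns-5071.svc.cluster.local. 30 IN A 10.108.173.131"
# prints "OK"
cat "/tmp/results/wheezy_udp@dns-test-service"
```

In the real test, a prober pod runs this loop for both UDP (`+notcp`) and TCP (`+tcp`) against A, SRV, and PTR records, and the framework reads the marker files back — which is what the "Unable to read wheezy_udp@…" lines show failing until the probes complete.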
SSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:50:18.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9943, will wait for the garbage collector to delete the pods
Jan 10 13:50:28.648: INFO: Deleting Job.batch foo took: 8.547283ms
Jan 10 13:50:28.948: INFO: Terminating Job.batch foo pods took: 300.502426ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:51:16.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9943" for this suite.
Jan 10 13:51:22.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:51:22.730: INFO: namespace job-9943 deletion completed in 6.160378025s

• [SLOW TEST:64.315 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
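The Job deletion test above creates a parallel Job, then deletes it and waits for the garbage collector to remove the pods, which maps onto a foreground cascading delete. A sketch under that assumption, with an illustrative Job spec (the e2e test's actual pod template differs):

```shell
# Hedged sketch: the "delete a job, wait for the garbage collector"
# step corresponds to a foreground cascading delete. Spec values are
# illustrative.
cat > /tmp/job-foo.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2                  # the test ensures active pods == parallelism
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox:1.36
        command: ["sleep", "3600"]
EOF
# On a live cluster (not run here):
#   kubectl apply -f /tmp/job-foo.yaml
#   kubectl delete job foo --cascade=foreground   # GC removes the pods first
grep 'parallelism: 2' /tmp/job-foo.yaml
```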
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:51:22.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-xqtg
STEP: Creating a pod to test atomic-volume-subpath
Jan 10 13:51:22.893: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-xqtg" in namespace "subpath-4569" to be "success or failure"
Jan 10 13:51:22.899: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Pending", Reason="", readiness=false. Elapsed: 5.405185ms
Jan 10 13:51:24.911: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018184937s
Jan 10 13:51:26.961: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067929249s
Jan 10 13:51:28.977: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083464481s
Jan 10 13:51:30.988: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Running", Reason="", readiness=true. Elapsed: 8.094908219s
Jan 10 13:51:32.999: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Running", Reason="", readiness=true. Elapsed: 10.105981117s
Jan 10 13:51:35.010: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Running", Reason="", readiness=true. Elapsed: 12.116930881s
Jan 10 13:51:37.018: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Running", Reason="", readiness=true. Elapsed: 14.124475164s
Jan 10 13:51:39.027: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Running", Reason="", readiness=true. Elapsed: 16.133644637s
Jan 10 13:51:41.034: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Running", Reason="", readiness=true. Elapsed: 18.140363112s
Jan 10 13:51:43.042: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Running", Reason="", readiness=true. Elapsed: 20.148391235s
Jan 10 13:51:45.054: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Running", Reason="", readiness=true. Elapsed: 22.161238849s
Jan 10 13:51:47.068: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Running", Reason="", readiness=true. Elapsed: 24.174816302s
Jan 10 13:51:49.077: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Running", Reason="", readiness=true. Elapsed: 26.183805059s
Jan 10 13:51:51.090: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Running", Reason="", readiness=true. Elapsed: 28.196744132s
Jan 10 13:51:53.098: INFO: Pod "pod-subpath-test-downwardapi-xqtg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.204909934s
STEP: Saw pod success
Jan 10 13:51:53.098: INFO: Pod "pod-subpath-test-downwardapi-xqtg" satisfied condition "success or failure"
Jan 10 13:51:53.102: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-xqtg container test-container-subpath-downwardapi-xqtg: 
STEP: delete the pod
Jan 10 13:51:53.249: INFO: Waiting for pod pod-subpath-test-downwardapi-xqtg to disappear
Jan 10 13:51:53.262: INFO: Pod pod-subpath-test-downwardapi-xqtg no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-xqtg
Jan 10 13:51:53.262: INFO: Deleting pod "pod-subpath-test-downwardapi-xqtg" in namespace "subpath-4569"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:51:53.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4569" for this suite.
Jan 10 13:51:59.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:51:59.441: INFO: namespace subpath-4569 deletion completed in 6.168712386s

• [SLOW TEST:36.710 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
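The subpath test above mounts a single file out of a downward API volume via `subPath`, exercising the atomic-writer update path. A minimal sketch of that volume shape, assuming illustrative pod and path names rather than the framework's generated ones:

```shell
# Hedged sketch of a subPath mount over a downward API volume, the
# shape this test exercises; names are illustrative.
cat > /tmp/subpath-downwardapi.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
  labels:
    podname: pod-subpath-test-downwardapi
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.36
    command: ["sh", "-c", "cat /mnt/podname"]
    volumeMounts:
    - name: downward
      mountPath: /mnt/podname
      subPath: podname             # mounts one projected file, not the dir
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.labels['podname']
EOF
grep 'subPath: podname' /tmp/subpath-downwardapi.yaml
```

The long Phase="Running" stretch in the log is the container repeatedly reading the projected file while the framework checks that updates through the atomic-writer symlink swap stay consistent.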
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:51:59.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:51:59.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2785" for this suite.
Jan 10 13:52:05.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:52:05.839: INFO: namespace kubelet-test-2785 deletion completed in 6.17346934s

• [SLOW TEST:6.397 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:52:05.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 10 13:52:05.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6637'
Jan 10 13:52:07.972: INFO: stderr: ""
Jan 10 13:52:07.972: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan 10 13:52:07.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6637'
Jan 10 13:52:16.538: INFO: stderr: ""
Jan 10 13:52:16.539: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:52:16.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6637" for this suite.
Jan 10 13:52:22.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:52:22.744: INFO: namespace kubectl-6637 deletion completed in 6.191973803s

• [SLOW TEST:16.905 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
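Each kubectl invocation in the log is printed as a single argv string: the binary, `--kubeconfig`, the subcommand with its flags, then `--namespace`. The framework's real helper is Go code; as an illustration only, the same command line can be reassembled like this, reproducing the `run` invocation logged above:

```python
def kubectl_cmd(kubeconfig, namespace, *args):
    """Assemble a kubectl argv list in the shape the e2e log prints:
    binary, --kubeconfig, subcommand and flags, then --namespace.
    Illustrative helper only; the real framework builds this in Go.
    """
    return ["/usr/local/bin/kubectl", "--kubeconfig=" + kubeconfig,
            *args, "--namespace=" + namespace]

# The exact "run pod" invocation from the log:
cmd = kubectl_cmd("/root/.kube/config", "kubectl-6637",
                  "run", "e2e-test-nginx-pod", "--restart=Never",
                  "--generator=run-pod/v1",
                  "--image=docker.io/library/nginx:1.14-alpine")
```

`--restart=Never` together with the `run-pod/v1` generator makes kubectl create a bare Pod rather than a Deployment or Job, which is exactly what this test then verifies and deletes.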
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:52:22.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1663
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-1663
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1663
Jan 10 13:52:22.972: INFO: Found 0 stateful pods, waiting for 1
Jan 10 13:52:32.982: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 10 13:52:32.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 13:52:33.524: INFO: stderr: "I0110 13:52:33.198186     697 log.go:172] (0xc0009f0370) (0xc0005866e0) Create stream\nI0110 13:52:33.198370     697 log.go:172] (0xc0009f0370) (0xc0005866e0) Stream added, broadcasting: 1\nI0110 13:52:33.210450     697 log.go:172] (0xc0009f0370) Reply frame received for 1\nI0110 13:52:33.210496     697 log.go:172] (0xc0009f0370) (0xc0007b8000) Create stream\nI0110 13:52:33.210504     697 log.go:172] (0xc0009f0370) (0xc0007b8000) Stream added, broadcasting: 3\nI0110 13:52:33.212606     697 log.go:172] (0xc0009f0370) Reply frame received for 3\nI0110 13:52:33.212653     697 log.go:172] (0xc0009f0370) (0xc000586780) Create stream\nI0110 13:52:33.212663     697 log.go:172] (0xc0009f0370) (0xc000586780) Stream added, broadcasting: 5\nI0110 13:52:33.218681     697 log.go:172] (0xc0009f0370) Reply frame received for 5\nI0110 13:52:33.348947     697 log.go:172] (0xc0009f0370) Data frame received for 5\nI0110 13:52:33.349001     697 log.go:172] (0xc000586780) (5) Data frame handling\nI0110 13:52:33.349033     697 log.go:172] (0xc000586780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0110 13:52:33.407809     697 log.go:172] (0xc0009f0370) Data frame received for 3\nI0110 13:52:33.407873     697 log.go:172] (0xc0007b8000) (3) Data frame handling\nI0110 13:52:33.407903     697 log.go:172] (0xc0007b8000) (3) Data frame sent\nI0110 13:52:33.515647     697 log.go:172] (0xc0009f0370) Data frame received for 1\nI0110 13:52:33.515702     697 log.go:172] (0xc0009f0370) (0xc0007b8000) Stream removed, broadcasting: 3\nI0110 13:52:33.515743     697 log.go:172] (0xc0005866e0) (1) Data frame handling\nI0110 13:52:33.515765     697 log.go:172] (0xc0009f0370) (0xc000586780) Stream removed, broadcasting: 5\nI0110 13:52:33.515784     697 log.go:172] (0xc0005866e0) (1) Data frame sent\nI0110 13:52:33.515799     697 log.go:172] (0xc0009f0370) (0xc0005866e0) Stream removed, broadcasting: 1\nI0110 13:52:33.515815     697 log.go:172] 
(0xc0009f0370) Go away received\nI0110 13:52:33.516289     697 log.go:172] (0xc0009f0370) (0xc0005866e0) Stream removed, broadcasting: 1\nI0110 13:52:33.516384     697 log.go:172] (0xc0009f0370) (0xc0007b8000) Stream removed, broadcasting: 3\nI0110 13:52:33.516409     697 log.go:172] (0xc0009f0370) (0xc000586780) Stream removed, broadcasting: 5\n"
Jan 10 13:52:33.525: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 13:52:33.525: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
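Every exec in this test ends in `|| true`. The reason shows up later in the log, where ss-1 and ss-2 print `mv: can't rename '/tmp/index.html': No such file or directory` yet the exec still counts as successful: the test only needs the readiness-toggling `mv` to be attempted, so a missing file must not fail the command. A small local reproduction of the idiom, with a scratch directory standing in for the pod filesystem (no cluster needed):

```python
import pathlib, subprocess, tempfile

def exec_like_pod(workdir: pathlib.Path) -> int:
    """Run the same /bin/sh -x -c fragment the test execs in each pod,
    with the pod paths replaced by a local scratch directory."""
    cmd = "mv -v {d}/index.html {d}/tmp/ || true".format(d=workdir)
    return subprocess.run(["/bin/sh", "-x", "-c", cmd]).returncode

scratch = pathlib.Path(tempfile.mkdtemp())
(scratch / "tmp").mkdir()
(scratch / "index.html").write_text("x")

first = exec_like_pod(scratch)   # mv succeeds, exit status 0
second = exec_like_pod(scratch)  # mv fails (already moved), status still 0
```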

Jan 10 13:52:33.533: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 10 13:52:43.545: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 13:52:43.545: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 13:52:43.579: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 10 13:52:43.579: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:22 +0000 UTC  }]
Jan 10 13:52:43.579: INFO: 
Jan 10 13:52:43.579: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 10 13:52:45.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987127544s
Jan 10 13:52:46.163: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.411855136s
Jan 10 13:52:47.180: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.403090608s
Jan 10 13:52:48.189: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.386333958s
Jan 10 13:52:49.838: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.376687149s
Jan 10 13:52:51.387: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.728438003s
Jan 10 13:52:52.399: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.178851244s
Jan 10 13:52:53.412: INFO: Verifying statefulset ss doesn't scale past 3 for another 166.96914ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1663
Jan 10 13:52:54.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:52:54.882: INFO: stderr: "I0110 13:52:54.627734     715 log.go:172] (0xc000756370) (0xc0006de6e0) Create stream\nI0110 13:52:54.627854     715 log.go:172] (0xc000756370) (0xc0006de6e0) Stream added, broadcasting: 1\nI0110 13:52:54.633346     715 log.go:172] (0xc000756370) Reply frame received for 1\nI0110 13:52:54.633386     715 log.go:172] (0xc000756370) (0xc0004d41e0) Create stream\nI0110 13:52:54.633399     715 log.go:172] (0xc000756370) (0xc0004d41e0) Stream added, broadcasting: 3\nI0110 13:52:54.635793     715 log.go:172] (0xc000756370) Reply frame received for 3\nI0110 13:52:54.635820     715 log.go:172] (0xc000756370) (0xc0006de780) Create stream\nI0110 13:52:54.635828     715 log.go:172] (0xc000756370) (0xc0006de780) Stream added, broadcasting: 5\nI0110 13:52:54.637111     715 log.go:172] (0xc000756370) Reply frame received for 5\nI0110 13:52:54.728211     715 log.go:172] (0xc000756370) Data frame received for 3\nI0110 13:52:54.728332     715 log.go:172] (0xc000756370) Data frame received for 5\nI0110 13:52:54.728513     715 log.go:172] (0xc0006de780) (5) Data frame handling\nI0110 13:52:54.728522     715 log.go:172] (0xc0006de780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0110 13:52:54.728534     715 log.go:172] (0xc0004d41e0) (3) Data frame handling\nI0110 13:52:54.728551     715 log.go:172] (0xc0004d41e0) (3) Data frame sent\nI0110 13:52:54.877526     715 log.go:172] (0xc000756370) (0xc0004d41e0) Stream removed, broadcasting: 3\nI0110 13:52:54.877665     715 log.go:172] (0xc000756370) Data frame received for 1\nI0110 13:52:54.877678     715 log.go:172] (0xc0006de6e0) (1) Data frame handling\nI0110 13:52:54.877689     715 log.go:172] (0xc0006de6e0) (1) Data frame sent\nI0110 13:52:54.877699     715 log.go:172] (0xc000756370) (0xc0006de6e0) Stream removed, broadcasting: 1\nI0110 13:52:54.877742     715 log.go:172] (0xc000756370) (0xc0006de780) Stream removed, broadcasting: 5\nI0110 13:52:54.877792     715 log.go:172] 
(0xc000756370) Go away received\nI0110 13:52:54.878108     715 log.go:172] (0xc000756370) (0xc0006de6e0) Stream removed, broadcasting: 1\nI0110 13:52:54.878133     715 log.go:172] (0xc000756370) (0xc0004d41e0) Stream removed, broadcasting: 3\nI0110 13:52:54.878141     715 log.go:172] (0xc000756370) (0xc0006de780) Stream removed, broadcasting: 5\n"
Jan 10 13:52:54.883: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 13:52:54.883: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 13:52:54.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:52:55.281: INFO: stderr: "I0110 13:52:55.036554     731 log.go:172] (0xc00091c580) (0xc00091a960) Create stream\nI0110 13:52:55.036639     731 log.go:172] (0xc00091c580) (0xc00091a960) Stream added, broadcasting: 1\nI0110 13:52:55.040272     731 log.go:172] (0xc00091c580) Reply frame received for 1\nI0110 13:52:55.040292     731 log.go:172] (0xc00091c580) (0xc00091a000) Create stream\nI0110 13:52:55.040297     731 log.go:172] (0xc00091c580) (0xc00091a000) Stream added, broadcasting: 3\nI0110 13:52:55.041069     731 log.go:172] (0xc00091c580) Reply frame received for 3\nI0110 13:52:55.041103     731 log.go:172] (0xc00091c580) (0xc0005ea3c0) Create stream\nI0110 13:52:55.041118     731 log.go:172] (0xc00091c580) (0xc0005ea3c0) Stream added, broadcasting: 5\nI0110 13:52:55.042031     731 log.go:172] (0xc00091c580) Reply frame received for 5\nI0110 13:52:55.168131     731 log.go:172] (0xc00091c580) Data frame received for 5\nI0110 13:52:55.168220     731 log.go:172] (0xc0005ea3c0) (5) Data frame handling\nI0110 13:52:55.168270     731 log.go:172] (0xc0005ea3c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0110 13:52:55.210458     731 log.go:172] (0xc00091c580) Data frame received for 3\nI0110 13:52:55.210600     731 log.go:172] (0xc00091a000) (3) Data frame handling\nI0110 13:52:55.210618     731 log.go:172] (0xc00091a000) (3) Data frame sent\nI0110 13:52:55.210658     731 log.go:172] (0xc00091c580) Data frame received for 5\nI0110 13:52:55.210696     731 log.go:172] (0xc0005ea3c0) (5) Data frame handling\nI0110 13:52:55.210708     731 log.go:172] (0xc0005ea3c0) (5) Data frame sent\nI0110 13:52:55.210717     731 log.go:172] (0xc00091c580) Data frame received for 5\nI0110 13:52:55.210724     731 log.go:172] (0xc0005ea3c0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0110 13:52:55.210741     731 log.go:172] (0xc0005ea3c0) (5) Data frame sent\nI0110 13:52:55.276387     731 log.go:172] 
(0xc00091c580) (0xc00091a000) Stream removed, broadcasting: 3\nI0110 13:52:55.276488     731 log.go:172] (0xc00091c580) Data frame received for 1\nI0110 13:52:55.276518     731 log.go:172] (0xc00091c580) (0xc0005ea3c0) Stream removed, broadcasting: 5\nI0110 13:52:55.276563     731 log.go:172] (0xc00091a960) (1) Data frame handling\nI0110 13:52:55.276618     731 log.go:172] (0xc00091a960) (1) Data frame sent\nI0110 13:52:55.276635     731 log.go:172] (0xc00091c580) (0xc00091a960) Stream removed, broadcasting: 1\nI0110 13:52:55.276655     731 log.go:172] (0xc00091c580) Go away received\nI0110 13:52:55.277220     731 log.go:172] (0xc00091c580) (0xc00091a960) Stream removed, broadcasting: 1\nI0110 13:52:55.277289     731 log.go:172] (0xc00091c580) (0xc00091a000) Stream removed, broadcasting: 3\nI0110 13:52:55.277308     731 log.go:172] (0xc00091c580) (0xc0005ea3c0) Stream removed, broadcasting: 5\n"
Jan 10 13:52:55.281: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 13:52:55.281: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 13:52:55.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:52:55.789: INFO: stderr: "I0110 13:52:55.467350     749 log.go:172] (0xc00012adc0) (0xc0008dc640) Create stream\nI0110 13:52:55.467469     749 log.go:172] (0xc00012adc0) (0xc0008dc640) Stream added, broadcasting: 1\nI0110 13:52:55.474822     749 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0110 13:52:55.474857     749 log.go:172] (0xc00012adc0) (0xc000924000) Create stream\nI0110 13:52:55.474866     749 log.go:172] (0xc00012adc0) (0xc000924000) Stream added, broadcasting: 3\nI0110 13:52:55.476972     749 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0110 13:52:55.477007     749 log.go:172] (0xc00012adc0) (0xc0008dc6e0) Create stream\nI0110 13:52:55.477036     749 log.go:172] (0xc00012adc0) (0xc0008dc6e0) Stream added, broadcasting: 5\nI0110 13:52:55.479826     749 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0110 13:52:55.621459     749 log.go:172] (0xc00012adc0) Data frame received for 5\nI0110 13:52:55.621565     749 log.go:172] (0xc0008dc6e0) (5) Data frame handling\nI0110 13:52:55.621590     749 log.go:172] (0xc0008dc6e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0110 13:52:55.624899     749 log.go:172] (0xc00012adc0) Data frame received for 3\nI0110 13:52:55.624931     749 log.go:172] (0xc000924000) (3) Data frame handling\nI0110 13:52:55.624964     749 log.go:172] (0xc000924000) (3) Data frame sent\nI0110 13:52:55.625312     749 log.go:172] (0xc00012adc0) Data frame received for 5\nI0110 13:52:55.625334     749 log.go:172] (0xc0008dc6e0) (5) Data frame handling\nI0110 13:52:55.625363     749 log.go:172] (0xc0008dc6e0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0110 13:52:55.782297     749 log.go:172] (0xc00012adc0) (0xc000924000) Stream removed, broadcasting: 3\nI0110 13:52:55.782938     749 log.go:172] (0xc00012adc0) (0xc0008dc6e0) Stream removed, broadcasting: 5\nI0110 13:52:55.782999     749 log.go:172] (0xc00012adc0) Data frame received 
for 1\nI0110 13:52:55.783016     749 log.go:172] (0xc0008dc640) (1) Data frame handling\nI0110 13:52:55.783035     749 log.go:172] (0xc0008dc640) (1) Data frame sent\nI0110 13:52:55.783045     749 log.go:172] (0xc00012adc0) (0xc0008dc640) Stream removed, broadcasting: 1\nI0110 13:52:55.783054     749 log.go:172] (0xc00012adc0) Go away received\nI0110 13:52:55.783562     749 log.go:172] (0xc00012adc0) (0xc0008dc640) Stream removed, broadcasting: 1\nI0110 13:52:55.783668     749 log.go:172] (0xc00012adc0) (0xc000924000) Stream removed, broadcasting: 3\nI0110 13:52:55.783714     749 log.go:172] (0xc00012adc0) (0xc0008dc6e0) Stream removed, broadcasting: 5\n"
Jan 10 13:52:55.789: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 13:52:55.789: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 13:52:55.801: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 13:52:55.801: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 13:52:55.801: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 10 13:52:55.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 13:52:56.191: INFO: stderr: "I0110 13:52:55.952874     769 log.go:172] (0xc0005620b0) (0xc00083c5a0) Create stream\nI0110 13:52:55.953036     769 log.go:172] (0xc0005620b0) (0xc00083c5a0) Stream added, broadcasting: 1\nI0110 13:52:55.958617     769 log.go:172] (0xc0005620b0) Reply frame received for 1\nI0110 13:52:55.958646     769 log.go:172] (0xc0005620b0) (0xc0006801e0) Create stream\nI0110 13:52:55.958654     769 log.go:172] (0xc0005620b0) (0xc0006801e0) Stream added, broadcasting: 3\nI0110 13:52:55.960867     769 log.go:172] (0xc0005620b0) Reply frame received for 3\nI0110 13:52:55.960895     769 log.go:172] (0xc0005620b0) (0xc000288000) Create stream\nI0110 13:52:55.960902     769 log.go:172] (0xc0005620b0) (0xc000288000) Stream added, broadcasting: 5\nI0110 13:52:55.962187     769 log.go:172] (0xc0005620b0) Reply frame received for 5\nI0110 13:52:56.040480     769 log.go:172] (0xc0005620b0) Data frame received for 5\nI0110 13:52:56.040536     769 log.go:172] (0xc000288000) (5) Data frame handling\nI0110 13:52:56.040547     769 log.go:172] (0xc000288000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0110 13:52:56.040584     769 log.go:172] (0xc0005620b0) Data frame received for 3\nI0110 13:52:56.040598     769 log.go:172] (0xc0006801e0) (3) Data frame handling\nI0110 13:52:56.040619     769 log.go:172] (0xc0006801e0) (3) Data frame sent\nI0110 13:52:56.182961     769 log.go:172] (0xc0005620b0) (0xc0006801e0) Stream removed, broadcasting: 3\nI0110 13:52:56.183150     769 log.go:172] (0xc0005620b0) Data frame received for 1\nI0110 13:52:56.183173     769 log.go:172] (0xc00083c5a0) (1) Data frame handling\nI0110 13:52:56.183204     769 log.go:172] (0xc00083c5a0) (1) Data frame sent\nI0110 13:52:56.183218     769 log.go:172] (0xc0005620b0) (0xc00083c5a0) Stream removed, broadcasting: 1\nI0110 13:52:56.184092     769 log.go:172] (0xc0005620b0) (0xc000288000) Stream removed, broadcasting: 5\nI0110 13:52:56.184116     769 log.go:172] 
(0xc0005620b0) Go away received\nI0110 13:52:56.184173     769 log.go:172] (0xc0005620b0) (0xc00083c5a0) Stream removed, broadcasting: 1\nI0110 13:52:56.184202     769 log.go:172] (0xc0005620b0) (0xc0006801e0) Stream removed, broadcasting: 3\nI0110 13:52:56.184224     769 log.go:172] (0xc0005620b0) (0xc000288000) Stream removed, broadcasting: 5\n"
Jan 10 13:52:56.191: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 13:52:56.191: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 13:52:56.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 13:52:56.491: INFO: stderr: "I0110 13:52:56.318176     785 log.go:172] (0xc0001160b0) (0xc000842640) Create stream\nI0110 13:52:56.318303     785 log.go:172] (0xc0001160b0) (0xc000842640) Stream added, broadcasting: 1\nI0110 13:52:56.321523     785 log.go:172] (0xc0001160b0) Reply frame received for 1\nI0110 13:52:56.321551     785 log.go:172] (0xc0001160b0) (0xc0005e41e0) Create stream\nI0110 13:52:56.321559     785 log.go:172] (0xc0001160b0) (0xc0005e41e0) Stream added, broadcasting: 3\nI0110 13:52:56.322371     785 log.go:172] (0xc0001160b0) Reply frame received for 3\nI0110 13:52:56.322394     785 log.go:172] (0xc0001160b0) (0xc0008426e0) Create stream\nI0110 13:52:56.322399     785 log.go:172] (0xc0001160b0) (0xc0008426e0) Stream added, broadcasting: 5\nI0110 13:52:56.323294     785 log.go:172] (0xc0001160b0) Reply frame received for 5\nI0110 13:52:56.386790     785 log.go:172] (0xc0001160b0) Data frame received for 5\nI0110 13:52:56.386880     785 log.go:172] (0xc0008426e0) (5) Data frame handling\nI0110 13:52:56.386896     785 log.go:172] (0xc0008426e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0110 13:52:56.412236     785 log.go:172] (0xc0001160b0) Data frame received for 3\nI0110 13:52:56.412262     785 log.go:172] (0xc0005e41e0) (3) Data frame handling\nI0110 13:52:56.412282     785 log.go:172] (0xc0005e41e0) (3) Data frame sent\nI0110 13:52:56.484304     785 log.go:172] (0xc0001160b0) (0xc0005e41e0) Stream removed, broadcasting: 3\nI0110 13:52:56.484418     785 log.go:172] (0xc0001160b0) Data frame received for 1\nI0110 13:52:56.484439     785 log.go:172] (0xc000842640) (1) Data frame handling\nI0110 13:52:56.484448     785 log.go:172] (0xc000842640) (1) Data frame sent\nI0110 13:52:56.484458     785 log.go:172] (0xc0001160b0) (0xc000842640) Stream removed, broadcasting: 1\nI0110 13:52:56.484539     785 log.go:172] (0xc0001160b0) (0xc0008426e0) Stream removed, broadcasting: 5\nI0110 13:52:56.484569     785 log.go:172] 
(0xc0001160b0) Go away received\nI0110 13:52:56.484733     785 log.go:172] (0xc0001160b0) (0xc000842640) Stream removed, broadcasting: 1\nI0110 13:52:56.484825     785 log.go:172] (0xc0001160b0) (0xc0005e41e0) Stream removed, broadcasting: 3\nI0110 13:52:56.484863     785 log.go:172] (0xc0001160b0) (0xc0008426e0) Stream removed, broadcasting: 5\n"
Jan 10 13:52:56.492: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 13:52:56.492: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 13:52:56.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 13:52:57.180: INFO: stderr: "I0110 13:52:56.801117     800 log.go:172] (0xc0009c0420) (0xc0009ba780) Create stream\nI0110 13:52:56.801507     800 log.go:172] (0xc0009c0420) (0xc0009ba780) Stream added, broadcasting: 1\nI0110 13:52:56.873151     800 log.go:172] (0xc0009c0420) Reply frame received for 1\nI0110 13:52:56.873326     800 log.go:172] (0xc0009c0420) (0xc000590280) Create stream\nI0110 13:52:56.873346     800 log.go:172] (0xc0009c0420) (0xc000590280) Stream added, broadcasting: 3\nI0110 13:52:56.876721     800 log.go:172] (0xc0009c0420) Reply frame received for 3\nI0110 13:52:56.876778     800 log.go:172] (0xc0009c0420) (0xc0009ba000) Create stream\nI0110 13:52:56.876787     800 log.go:172] (0xc0009c0420) (0xc0009ba000) Stream added, broadcasting: 5\nI0110 13:52:56.878873     800 log.go:172] (0xc0009c0420) Reply frame received for 5\nI0110 13:52:57.010500     800 log.go:172] (0xc0009c0420) Data frame received for 5\nI0110 13:52:57.011050     800 log.go:172] (0xc0009ba000) (5) Data frame handling\nI0110 13:52:57.011107     800 log.go:172] (0xc0009ba000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0110 13:52:57.046035     800 log.go:172] (0xc0009c0420) Data frame received for 3\nI0110 13:52:57.046068     800 log.go:172] (0xc000590280) (3) Data frame handling\nI0110 13:52:57.046087     800 log.go:172] (0xc000590280) (3) Data frame sent\nI0110 13:52:57.175120     800 log.go:172] (0xc0009c0420) Data frame received for 1\nI0110 13:52:57.175207     800 log.go:172] (0xc0009ba780) (1) Data frame handling\nI0110 13:52:57.175236     800 log.go:172] (0xc0009ba780) (1) Data frame sent\nI0110 13:52:57.175612     800 log.go:172] (0xc0009c0420) (0xc0009ba780) Stream removed, broadcasting: 1\nI0110 13:52:57.176058     800 log.go:172] (0xc0009c0420) (0xc000590280) Stream removed, broadcasting: 3\nI0110 13:52:57.176337     800 log.go:172] (0xc0009c0420) (0xc0009ba000) Stream removed, broadcasting: 5\nI0110 13:52:57.176361     800 log.go:172] 
(0xc0009c0420) (0xc0009ba780) Stream removed, broadcasting: 1\nI0110 13:52:57.176368     800 log.go:172] (0xc0009c0420) (0xc000590280) Stream removed, broadcasting: 3\nI0110 13:52:57.176374     800 log.go:172] (0xc0009c0420) (0xc0009ba000) Stream removed, broadcasting: 5\n"
Jan 10 13:52:57.180: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 13:52:57.180: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 13:52:57.181: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 13:52:57.188: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 10 13:53:07.211: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 13:53:07.211: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 13:53:07.211: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 13:53:07.246: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 10 13:53:07.246: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:22 +0000 UTC  }]
Jan 10 13:53:07.246: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:07.246: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:07.246: INFO: 
Jan 10 13:53:07.246: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 13:53:09.414: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 10 13:53:09.414: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:22 +0000 UTC  }]
Jan 10 13:53:09.414: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:09.414: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:09.414: INFO: 
Jan 10 13:53:09.414: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 13:53:10.443: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 10 13:53:10.443: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:22 +0000 UTC  }]
Jan 10 13:53:10.444: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:10.444: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:10.444: INFO: 
Jan 10 13:53:10.444: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 13:53:11.910: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 10 13:53:11.910: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:22 +0000 UTC  }]
Jan 10 13:53:11.910: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:11.910: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:11.910: INFO: 
Jan 10 13:53:11.910: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 13:53:12.920: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 10 13:53:12.920: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:22 +0000 UTC  }]
Jan 10 13:53:12.920: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:12.920: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:12.920: INFO: 
Jan 10 13:53:12.920: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 13:53:13.934: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 10 13:53:13.935: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:22 +0000 UTC  }]
Jan 10 13:53:13.935: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:13.935: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:13.935: INFO: 
Jan 10 13:53:13.935: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 13:53:14.955: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 10 13:53:14.955: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:22 +0000 UTC  }]
Jan 10 13:53:14.955: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:14.955: INFO: 
Jan 10 13:53:14.955: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 10 13:53:15.987: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 10 13:53:15.987: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:22 +0000 UTC  }]
Jan 10 13:53:15.987: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:15.987: INFO: 
Jan 10 13:53:15.987: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 10 13:53:17.001: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 10 13:53:17.001: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:22 +0000 UTC  }]
Jan 10 13:53:17.002: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 13:52:43 +0000 UTC  }]
Jan 10 13:53:17.002: INFO: 
Jan 10 13:53:17.002: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace statefulset-1663
Jan 10 13:53:18.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:53:18.186: INFO: rc: 1
Jan 10 13:53:18.187: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0030506c0 exit status 1   true [0xc00035cfd8 0xc00035d090 0xc00035d108] [0xc00035cfd8 0xc00035d090 0xc00035d108] [0xc00035d080 0xc00035d0d0] [0xba6c50 0xba6c50] 0xc001714e40 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
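The log above shows the e2e framework's retry behavior: it runs `kubectl exec` against pod ss-0, and on a non-zero exit code waits 10 seconds and tries again (first failing with "container not found" while the pod terminates, then with "NotFound" once it is gone). A minimal sketch of that pattern as a standalone shell helper — `run_with_retry` is a hypothetical name, not part of the framework:

```shell
# Hypothetical helper mirroring the retry loop in the log: run a command,
# and on non-zero exit wait `delay` seconds and retry up to `retries` times.
run_with_retry() {
  local retries=$1 delay=$2
  shift 2
  local attempt rc=0
  for attempt in $(seq 1 "$retries"); do
    "$@" && return 0          # success: stop retrying
    rc=$?
    echo "attempt $attempt failed (rc=$rc); retrying in ${delay}s" >&2
    sleep "$delay"
  done
  return "$rc"                # all attempts failed: surface last exit code
}

# Usage shaped like the command in the log (the pod has already been
# deleted here, so every attempt would fail with NotFound):
# run_with_retry 30 10 kubectl --kubeconfig=/root/.kube/config \
#   exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c \
#   'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
```

Note the test's inner `|| true` only masks failures of `mv` inside the container; when the container or pod itself is missing, `kubectl exec` exits non-zero before the shell ever runs, which is why the retries keep reporting rc: 1.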
Jan 10 13:53:28.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:53:28.316: INFO: rc: 1
Jan 10 13:53:28.317: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc003050780 exit status 1   true [0xc00035d148 0xc00035d228 0xc00035d2c8] [0xc00035d148 0xc00035d228 0xc00035d2c8] [0xc00035d208 0xc00035d2b0] [0xba6c50 0xba6c50] 0xc001715140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:53:38.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:53:38.516: INFO: rc: 1
Jan 10 13:53:38.517: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc003050870 exit status 1   true [0xc00035d2d0 0xc00035d368 0xc00035d3d0] [0xc00035d2d0 0xc00035d368 0xc00035d3d0] [0xc00035d338 0xc00035d3c0] [0xba6c50 0xba6c50] 0xc001715620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:53:48.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:53:48.649: INFO: rc: 1
Jan 10 13:53:48.650: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc003050930 exit status 1   true [0xc00035d3e0 0xc00035d428 0xc00035d4b0] [0xc00035d3e0 0xc00035d428 0xc00035d4b0] [0xc00035d410 0xc00035d498] [0xba6c50 0xba6c50] 0xc001715bc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:53:58.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:53:58.833: INFO: rc: 1
Jan 10 13:53:58.833: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001fc0210 exit status 1   true [0xc000d8a1a0 0xc000d8a340 0xc000d8a4f0] [0xc000d8a1a0 0xc000d8a340 0xc000d8a4f0] [0xc000d8a300 0xc000d8a470] [0xba6c50 0xba6c50] 0xc0029222a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:54:08.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:54:09.022: INFO: rc: 1
Jan 10 13:54:09.022: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002a6b200 exit status 1   true [0xc0006d9868 0xc0006d98d0 0xc0006d99c8] [0xc0006d9868 0xc0006d98d0 0xc0006d99c8] [0xc0006d98a0 0xc0006d99c0] [0xba6c50 0xba6c50] 0xc0020ec900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:54:19.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:54:19.183: INFO: rc: 1
Jan 10 13:54:19.183: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002a6b2f0 exit status 1   true [0xc0006d99d8 0xc0006d9a40 0xc0006d9a88] [0xc0006d99d8 0xc0006d9a40 0xc0006d9a88] [0xc0006d9a28 0xc0006d9a78] [0xba6c50 0xba6c50] 0xc0020ecde0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:54:29.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:54:29.327: INFO: rc: 1
Jan 10 13:54:29.327: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024fb680 exit status 1   true [0xc0006c9350 0xc0006c93c0 0xc0006c9470] [0xc0006c9350 0xc0006c93c0 0xc0006c9470] [0xc0006c93a8 0xc0006c9428] [0xba6c50 0xba6c50] 0xc002e007e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:54:39.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:54:39.491: INFO: rc: 1
Jan 10 13:54:39.491: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002a6b3e0 exit status 1   true [0xc0006d9aa8 0xc0006d9ad8 0xc0006d9b08] [0xc0006d9aa8 0xc0006d9ad8 0xc0006d9b08] [0xc0006d9ac8 0xc0006d9af8] [0xba6c50 0xba6c50] 0xc0020ed2c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:54:49.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:54:49.660: INFO: rc: 1
Jan 10 13:54:49.660: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002a6b4a0 exit status 1   true [0xc0006d9b28 0xc0006d9b68 0xc0006d9b90] [0xc0006d9b28 0xc0006d9b68 0xc0006d9b90] [0xc0006d9b58 0xc0006d9b88] [0xba6c50 0xba6c50] 0xc0020ed800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:54:59.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:54:59.856: INFO: rc: 1
Jan 10 13:54:59.857: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032580f0 exit status 1   true [0xc0001862d8 0xc0006c9010 0xc0006c9040] [0xc0001862d8 0xc0006c9010 0xc0006c9040] [0xc0006c8fc0 0xc0006c9028] [0xba6c50 0xba6c50] 0xc002532720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:55:09.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:55:09.980: INFO: rc: 1
Jan 10 13:55:09.981: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032b0090 exit status 1   true [0xc0006d8000 0xc0006d80b8 0xc0006d82a8] [0xc0006d8000 0xc0006d80b8 0xc0006d82a8] [0xc0006d8098 0xc0006d81b0] [0xba6c50 0xba6c50] 0xc002896900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:55:19.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:55:20.155: INFO: rc: 1
Jan 10 13:55:20.155: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc003258210 exit status 1   true [0xc0006c9058 0xc0006c90c0 0xc0006c9108] [0xc0006c9058 0xc0006c90c0 0xc0006c9108] [0xc0006c90b8 0xc0006c90f8] [0xba6c50 0xba6c50] 0xc002532a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:55:30.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:55:30.297: INFO: rc: 1
Jan 10 13:55:30.298: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032b0150 exit status 1   true [0xc0006d8370 0xc0006d8608 0xc0006d88f8] [0xc0006d8370 0xc0006d8608 0xc0006d88f8] [0xc0006d84f0 0xc0006d87b8] [0xba6c50 0xba6c50] 0xc002ef6f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:55:40.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:55:40.469: INFO: rc: 1
Jan 10 13:55:40.470: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc003258330 exit status 1   true [0xc0006c9118 0xc0006c9148 0xc0006c9168] [0xc0006c9118 0xc0006c9148 0xc0006c9168] [0xc0006c9138 0xc0006c9160] [0xba6c50 0xba6c50] 0xc002532ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:55:50.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:55:50.627: INFO: rc: 1
Jan 10 13:55:50.627: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032583f0 exit status 1   true [0xc0006c9178 0xc0006c91d0 0xc0006c9200] [0xc0006c9178 0xc0006c91d0 0xc0006c9200] [0xc0006c91c0 0xc0006c91f0] [0xba6c50 0xba6c50] 0xc002533320 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:56:00.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:56:00.755: INFO: rc: 1
Jan 10 13:56:00.756: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002e440f0 exit status 1   true [0xc00035c1b8 0xc00035c280 0xc00035c3e0] [0xc00035c1b8 0xc00035c280 0xc00035c3e0] [0xc00035c230 0xc00035c380] [0xba6c50 0xba6c50] 0xc002e00720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:56:10.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:56:10.942: INFO: rc: 1
Jan 10 13:56:10.942: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002e441e0 exit status 1   true [0xc00035c418 0xc00035c510 0xc00035c610] [0xc00035c418 0xc00035c510 0xc00035c610] [0xc00035c4d8 0xc00035c5b0] [0xba6c50 0xba6c50] 0xc002e00a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:56:20.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:56:21.053: INFO: rc: 1
Jan 10 13:56:21.054: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024fa060 exit status 1   true [0xc000d8a080 0xc000d8a168 0xc000d8a238] [0xc000d8a080 0xc000d8a168 0xc000d8a238] [0xc000d8a150 0xc000d8a1a0] [0xba6c50 0xba6c50] 0xc0020ec5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:56:31.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:56:31.208: INFO: rc: 1
Jan 10 13:56:31.209: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002e44300 exit status 1   true [0xc00035c740 0xc00035c840 0xc00035c950] [0xc00035c740 0xc00035c840 0xc00035c950] [0xc00035c7e0 0xc00035c8b8] [0xba6c50 0xba6c50] 0xc002e00fc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:56:41.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:56:41.339: INFO: rc: 1
Jan 10 13:56:41.340: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002e443c0 exit status 1   true [0xc00035c958 0xc00035ca08 0xc00035cf48] [0xc00035c958 0xc00035ca08 0xc00035cf48] [0xc00035c990 0xc00035cf28] [0xba6c50 0xba6c50] 0xc002e016e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:56:51.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:56:51.511: INFO: rc: 1
Jan 10 13:56:51.512: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032584e0 exit status 1   true [0xc0006c9220 0xc0006c9248 0xc0006c9270] [0xc0006c9220 0xc0006c9248 0xc0006c9270] [0xc0006c9240 0xc0006c9258] [0xba6c50 0xba6c50] 0xc002533620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:57:01.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:57:01.679: INFO: rc: 1
Jan 10 13:57:01.679: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002e44060 exit status 1   true [0xc0001862d8 0xc00035c230 0xc00035c380] [0xc0001862d8 0xc00035c230 0xc00035c380] [0xc00035c1f8 0xc00035c2c8] [0xba6c50 0xba6c50] 0xc002896900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 10 13:57:11 – 13:58:12: INFO: [seven further RunHostCmd retries at 10s intervals, each 'rc: 1' with the same stderr 'Error from server (NotFound): pods "ss-0" not found'; repeated log blocks elided]
Jan 10 13:58:22.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 13:58:22.836: INFO: rc: 1
Jan 10 13:58:22.836: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan 10 13:58:22.836: INFO: Scaling statefulset ss to 0
Jan 10 13:58:22.871: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 10 13:58:22.876: INFO: Deleting all statefulset in ns statefulset-1663
Jan 10 13:58:22.880: INFO: Scaling statefulset ss to 0
Jan 10 13:58:22.896: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 13:58:22.902: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:58:22.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1663" for this suite.
Jan 10 13:58:29.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:58:29.115: INFO: namespace statefulset-1663 deletion completed in 6.143628341s

• [SLOW TEST:366.371 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
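The retry spam condensed above comes from the framework's RunHostCmd helper, which re-runs the `kubectl exec` every 10s until a deadline passes or the command succeeds. The framework itself is Go; the following is only an illustrative shell sketch of that retry-until-deadline pattern (the function name and timings are assumptions, not the framework's actual code):

```shell
# Illustrative sketch (not the framework's actual Go code) of the
# RunHostCmd retry pattern: re-run a command every $interval seconds
# until it succeeds or $deadline seconds have elapsed.
retry_host_cmd() {
  deadline=$1; interval=$2; shift 2
  start=$(date +%s)
  while true; do
    if "$@"; then
      return 0                     # command succeeded
    fi
    elapsed=$(( $(date +%s) - start ))
    if [ "$elapsed" -ge "$deadline" ]; then
      echo "retry_host_cmd: timed out after ${elapsed}s" >&2
      return 1                     # deadline exceeded
    fi
    sleep "$interval"              # wait before the next attempt
  done
}

# e.g. retry_host_cmd 600 10 kubectl exec --namespace=statefulset-1663 ss-0 -- \
#        /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
```

In the log, the pod `ss-0` never reappeared, so every attempt returned `rc: 1` until the suite moved on to scaling the StatefulSet to 0.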
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:58:29.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 13:58:29.254: INFO: Create a RollingUpdate DaemonSet
Jan 10 13:58:29.289: INFO: Check that daemon pods launch on every node of the cluster
Jan 10 13:58:29.304: INFO: Number of nodes with available pods: 0
Jan 10 13:58:29.304: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:58:30.942: INFO: Number of nodes with available pods: 0
Jan 10 13:58:30.942: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:58:31.382: INFO: Number of nodes with available pods: 0
Jan 10 13:58:31.382: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:58:32.325: INFO: Number of nodes with available pods: 0
Jan 10 13:58:32.325: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:58:33.318: INFO: Number of nodes with available pods: 0
Jan 10 13:58:33.318: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:58:35.826: INFO: Number of nodes with available pods: 0
Jan 10 13:58:35.826: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:58:36.330: INFO: Number of nodes with available pods: 0
Jan 10 13:58:36.330: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:58:37.381: INFO: Number of nodes with available pods: 0
Jan 10 13:58:37.381: INFO: Node iruya-node is running more than one daemon pod
Jan 10 13:58:38.329: INFO: Number of nodes with available pods: 1
Jan 10 13:58:38.329: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 13:58:39.324: INFO: Number of nodes with available pods: 2
Jan 10 13:58:39.325: INFO: Number of running nodes: 2, number of available pods: 2
Jan 10 13:58:39.325: INFO: Update the DaemonSet to trigger a rollout
Jan 10 13:58:39.345: INFO: Updating DaemonSet daemon-set
Jan 10 13:58:45.408: INFO: Roll back the DaemonSet before rollout is complete
Jan 10 13:58:45.432: INFO: Updating DaemonSet daemon-set
Jan 10 13:58:45.432: INFO: Make sure DaemonSet rollback is complete
Jan 10 13:58:45.443: INFO: Wrong image for pod: daemon-set-qpml5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 10 13:58:45.444: INFO: Pod daemon-set-qpml5 is not available
Jan 10 13:58:46.472: INFO: Wrong image for pod: daemon-set-qpml5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 10 13:58:46.472: INFO: Pod daemon-set-qpml5 is not available
Jan 10 13:58:47.474: INFO: Wrong image for pod: daemon-set-qpml5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 10 13:58:47.474: INFO: Pod daemon-set-qpml5 is not available
Jan 10 13:58:48.477: INFO: Wrong image for pod: daemon-set-qpml5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 10 13:58:48.477: INFO: Pod daemon-set-qpml5 is not available
Jan 10 13:58:49.469: INFO: Pod daemon-set-cc52m is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-933, will wait for the garbage collector to delete the pods
Jan 10 13:58:49.564: INFO: Deleting DaemonSet.extensions daemon-set took: 18.791148ms
Jan 10 13:58:49.965: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.927955ms
Jan 10 13:58:56.372: INFO: Number of nodes with available pods: 0
Jan 10 13:58:56.372: INFO: Number of running nodes: 0, number of available pods: 0
Jan 10 13:58:56.428: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-933/daemonsets","resourceVersion":"20030906"},"items":null}

Jan 10 13:58:56.432: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-933/pods","resourceVersion":"20030906"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:58:56.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-933" for this suite.
Jan 10 13:59:02.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:59:02.704: INFO: namespace daemonsets-933 deletion completed in 6.251495034s

• [SLOW TEST:33.589 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
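The rollback test above creates a RollingUpdate DaemonSet, pushes a broken image (`foo:non-existent`), and reverts before the rollout completes. A hypothetical manifest of the kind of DaemonSet involved (the labels are illustrative; the name and image match the log):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set            # illustrative label
  updateStrategy:
    type: RollingUpdate          # replace pods node by node on image change
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

A rollback like the one logged at 13:58:45 can be triggered manually with `kubectl rollout undo daemonset/daemon-set`.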
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:59:02.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan 10 13:59:10.934: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 10 13:59:31.105: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:59:31.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7269" for this suite.
Jan 10 13:59:37.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:59:37.312: INFO: namespace pods-7269 deletion completed in 6.194458713s

• [SLOW TEST:34.606 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
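The grace-period test deletes a pod and waits for the kubelet to observe the termination notice. The window the kubelet grants between SIGTERM and SIGKILL is set on the pod spec; a minimal illustrative fragment (the pod name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-pod                  # hypothetical name
spec:
  terminationGracePeriodSeconds: 30   # seconds between SIGTERM and SIGKILL
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
```

The spec value can be overridden per deletion, e.g. `kubectl delete pod graceful-pod --grace-period=5`.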
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:59:37.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-9b0e14be-3a8b-4d58-b962-ca9fe2f06c11
STEP: Creating a pod to test consume configMaps
Jan 10 13:59:37.476: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bc529c1a-4d16-499c-9d4a-245c8108807e" in namespace "projected-592" to be "success or failure"
Jan 10 13:59:37.497: INFO: Pod "pod-projected-configmaps-bc529c1a-4d16-499c-9d4a-245c8108807e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.337751ms
Jan 10 13:59:39.507: INFO: Pod "pod-projected-configmaps-bc529c1a-4d16-499c-9d4a-245c8108807e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031157082s
Jan 10 13:59:41.513: INFO: Pod "pod-projected-configmaps-bc529c1a-4d16-499c-9d4a-245c8108807e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037135199s
Jan 10 13:59:43.524: INFO: Pod "pod-projected-configmaps-bc529c1a-4d16-499c-9d4a-245c8108807e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048150016s
Jan 10 13:59:45.545: INFO: Pod "pod-projected-configmaps-bc529c1a-4d16-499c-9d4a-245c8108807e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069083942s
STEP: Saw pod success
Jan 10 13:59:45.545: INFO: Pod "pod-projected-configmaps-bc529c1a-4d16-499c-9d4a-245c8108807e" satisfied condition "success or failure"
Jan 10 13:59:45.551: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-bc529c1a-4d16-499c-9d4a-245c8108807e container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 13:59:45.590: INFO: Waiting for pod pod-projected-configmaps-bc529c1a-4d16-499c-9d4a-245c8108807e to disappear
Jan 10 13:59:45.600: INFO: Pod pod-projected-configmaps-bc529c1a-4d16-499c-9d4a-245c8108807e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 13:59:45.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-592" for this suite.
Jan 10 13:59:51.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:59:51.813: INFO: namespace projected-592 deletion completed in 6.208672478s

• [SLOW TEST:14.501 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
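"Consumable with mappings" means the projected volume remaps ConfigMap keys to file paths via `items` rather than exposing each key under its own name. A hedged sketch of such a volume (the pod, ConfigMap, key, and path names here are illustrative, not the test's generated names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo     # hypothetical
spec:
  containers:
  - name: test
    image: docker.io/library/busybox
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: my-configmap         # hypothetical ConfigMap name
          items:
          - key: data-1              # ConfigMap key ...
            path: path/to/data-2     # ... exposed at this relative file path
```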
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 13:59:51.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-7172e50d-8839-4d20-9a43-0e0f18c625a5
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:00:04.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2108" for this suite.
Jan 10 14:00:26.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:00:26.266: INFO: namespace configmap-2108 deletion completed in 22.19759674s

• [SLOW TEST:34.452 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
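ConfigMaps carry non-UTF-8 payloads in the `binaryData` field, base64-encoded in the API object; mounted as a volume, the pod reads the decoded raw bytes, which is what the test's "Waiting for pod with binary data" step checks. An illustrative fragment (the names and payload are made up):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo     # hypothetical
data:
  text-data: "hello"              # plain UTF-8 values go here
binaryData:
  dump.bin: 3q2+7w==              # base64 of the raw bytes 0xDE 0xAD 0xBE 0xEF
```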
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:00:26.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-19b201ef-1d33-4fa9-8aaf-d9e1a10578d0
STEP: Creating a pod to test consume secrets
Jan 10 14:00:26.396: INFO: Waiting up to 5m0s for pod "pod-secrets-e69c9117-08ac-47ce-9b34-d02cb92142bc" in namespace "secrets-9527" to be "success or failure"
Jan 10 14:00:26.404: INFO: Pod "pod-secrets-e69c9117-08ac-47ce-9b34-d02cb92142bc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.902096ms
Jan 10 14:00:28.418: INFO: Pod "pod-secrets-e69c9117-08ac-47ce-9b34-d02cb92142bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021381056s
Jan 10 14:00:30.427: INFO: Pod "pod-secrets-e69c9117-08ac-47ce-9b34-d02cb92142bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030458585s
Jan 10 14:00:32.615: INFO: Pod "pod-secrets-e69c9117-08ac-47ce-9b34-d02cb92142bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219304493s
Jan 10 14:00:34.631: INFO: Pod "pod-secrets-e69c9117-08ac-47ce-9b34-d02cb92142bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.23507906s
STEP: Saw pod success
Jan 10 14:00:34.632: INFO: Pod "pod-secrets-e69c9117-08ac-47ce-9b34-d02cb92142bc" satisfied condition "success or failure"
Jan 10 14:00:34.637: INFO: Trying to get logs from node iruya-node pod pod-secrets-e69c9117-08ac-47ce-9b34-d02cb92142bc container secret-volume-test: 
STEP: delete the pod
Jan 10 14:00:34.721: INFO: Waiting for pod pod-secrets-e69c9117-08ac-47ce-9b34-d02cb92142bc to disappear
Jan 10 14:00:34.838: INFO: Pod pod-secrets-e69c9117-08ac-47ce-9b34-d02cb92142bc no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:00:34.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9527" for this suite.
Jan 10 14:00:41.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:00:41.587: INFO: namespace secrets-9527 deletion completed in 6.741498253s

• [SLOW TEST:15.321 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
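"Item Mode set" refers to the per-item `mode` on a secret volume's `items` list, which overrides the volume-wide `defaultMode` for that file (Linux-only, hence the [LinuxOnly] tag). A sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo            # hypothetical
spec:
  containers:
  - name: test
    image: docker.io/library/busybox
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret         # hypothetical Secret name
      defaultMode: 0644
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400                  # per-item mode overrides defaultMode
```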
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:00:41.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 10 14:00:41.672: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2670,SelfLink:/api/v1/namespaces/watch-2670/configmaps/e2e-watch-test-configmap-a,UID:57d9ee69-e4bc-4735-a0c3-d75dfee7936a,ResourceVersion:20031183,Generation:0,CreationTimestamp:2020-01-10 14:00:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 10 14:00:41.672: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2670,SelfLink:/api/v1/namespaces/watch-2670/configmaps/e2e-watch-test-configmap-a,UID:57d9ee69-e4bc-4735-a0c3-d75dfee7936a,ResourceVersion:20031183,Generation:0,CreationTimestamp:2020-01-10 14:00:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 10 14:00:51.690: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2670,SelfLink:/api/v1/namespaces/watch-2670/configmaps/e2e-watch-test-configmap-a,UID:57d9ee69-e4bc-4735-a0c3-d75dfee7936a,ResourceVersion:20031197,Generation:0,CreationTimestamp:2020-01-10 14:00:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 10 14:00:51.691: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2670,SelfLink:/api/v1/namespaces/watch-2670/configmaps/e2e-watch-test-configmap-a,UID:57d9ee69-e4bc-4735-a0c3-d75dfee7936a,ResourceVersion:20031197,Generation:0,CreationTimestamp:2020-01-10 14:00:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 10 14:01:01.716: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2670,SelfLink:/api/v1/namespaces/watch-2670/configmaps/e2e-watch-test-configmap-a,UID:57d9ee69-e4bc-4735-a0c3-d75dfee7936a,ResourceVersion:20031212,Generation:0,CreationTimestamp:2020-01-10 14:00:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 10 14:01:01.717: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2670,SelfLink:/api/v1/namespaces/watch-2670/configmaps/e2e-watch-test-configmap-a,UID:57d9ee69-e4bc-4735-a0c3-d75dfee7936a,ResourceVersion:20031212,Generation:0,CreationTimestamp:2020-01-10 14:00:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 10 14:01:11.733: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2670,SelfLink:/api/v1/namespaces/watch-2670/configmaps/e2e-watch-test-configmap-a,UID:57d9ee69-e4bc-4735-a0c3-d75dfee7936a,ResourceVersion:20031226,Generation:0,CreationTimestamp:2020-01-10 14:00:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 10 14:01:11.734: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2670,SelfLink:/api/v1/namespaces/watch-2670/configmaps/e2e-watch-test-configmap-a,UID:57d9ee69-e4bc-4735-a0c3-d75dfee7936a,ResourceVersion:20031226,Generation:0,CreationTimestamp:2020-01-10 14:00:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 10 14:01:21.756: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2670,SelfLink:/api/v1/namespaces/watch-2670/configmaps/e2e-watch-test-configmap-b,UID:682ad6ee-949b-4cea-9bca-fd742fd87b4c,ResourceVersion:20031241,Generation:0,CreationTimestamp:2020-01-10 14:01:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 10 14:01:21.756: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2670,SelfLink:/api/v1/namespaces/watch-2670/configmaps/e2e-watch-test-configmap-b,UID:682ad6ee-949b-4cea-9bca-fd742fd87b4c,ResourceVersion:20031241,Generation:0,CreationTimestamp:2020-01-10 14:01:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 10 14:01:31.773: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2670,SelfLink:/api/v1/namespaces/watch-2670/configmaps/e2e-watch-test-configmap-b,UID:682ad6ee-949b-4cea-9bca-fd742fd87b4c,ResourceVersion:20031255,Generation:0,CreationTimestamp:2020-01-10 14:01:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 10 14:01:31.774: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2670,SelfLink:/api/v1/namespaces/watch-2670/configmaps/e2e-watch-test-configmap-b,UID:682ad6ee-949b-4cea-9bca-fd742fd87b4c,ResourceVersion:20031255,Generation:0,CreationTimestamp:2020-01-10 14:01:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:01:41.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2670" for this suite.
Jan 10 14:01:47.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:01:48.004: INFO: namespace watch-2670 deletion completed in 6.205128363s

• [SLOW TEST:66.416 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:01:48.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 10 14:01:56.319: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:01:56.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1554" for this suite.
Jan 10 14:02:02.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:02:02.589: INFO: namespace container-runtime-1554 deletion completed in 6.161777886s

• [SLOW TEST:14.581 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:02:02.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-cb3528d5-15cb-4e29-a0b6-0aa2f872293f
STEP: Creating a pod to test consume configMaps
Jan 10 14:02:02.711: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c9111d3-7e4e-48fd-8618-1455cda8a81f" in namespace "configmap-2041" to be "success or failure"
Jan 10 14:02:02.721: INFO: Pod "pod-configmaps-6c9111d3-7e4e-48fd-8618-1455cda8a81f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109156ms
Jan 10 14:02:04.730: INFO: Pod "pod-configmaps-6c9111d3-7e4e-48fd-8618-1455cda8a81f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019311684s
Jan 10 14:02:06.739: INFO: Pod "pod-configmaps-6c9111d3-7e4e-48fd-8618-1455cda8a81f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028216659s
Jan 10 14:02:08.752: INFO: Pod "pod-configmaps-6c9111d3-7e4e-48fd-8618-1455cda8a81f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041088071s
Jan 10 14:02:10.761: INFO: Pod "pod-configmaps-6c9111d3-7e4e-48fd-8618-1455cda8a81f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05045942s
STEP: Saw pod success
Jan 10 14:02:10.761: INFO: Pod "pod-configmaps-6c9111d3-7e4e-48fd-8618-1455cda8a81f" satisfied condition "success or failure"
Jan 10 14:02:10.768: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6c9111d3-7e4e-48fd-8618-1455cda8a81f container configmap-volume-test: 
STEP: delete the pod
Jan 10 14:02:10.824: INFO: Waiting for pod pod-configmaps-6c9111d3-7e4e-48fd-8618-1455cda8a81f to disappear
Jan 10 14:02:10.836: INFO: Pod pod-configmaps-6c9111d3-7e4e-48fd-8618-1455cda8a81f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:02:10.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2041" for this suite.
Jan 10 14:02:16.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:02:17.573: INFO: namespace configmap-2041 deletion completed in 6.728386228s

• [SLOW TEST:14.983 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:02:17.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6614.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6614.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 10 14:02:31.830: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-6614/dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720: the server could not find the requested resource (get pods dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720)
Jan 10 14:02:31.867: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6614/dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720: the server could not find the requested resource (get pods dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720)
Jan 10 14:02:31.890: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6614/dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720: the server could not find the requested resource (get pods dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720)
Jan 10 14:02:31.900: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-6614/dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720: the server could not find the requested resource (get pods dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720)
Jan 10 14:02:31.910: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-6614/dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720: the server could not find the requested resource (get pods dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720)
Jan 10 14:02:31.915: INFO: Unable to read jessie_udp@PodARecord from pod dns-6614/dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720: the server could not find the requested resource (get pods dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720)
Jan 10 14:02:31.919: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6614/dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720: the server could not find the requested resource (get pods dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720)
Jan 10 14:02:31.919: INFO: Lookups using dns-6614/dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 10 14:02:37.043: INFO: DNS probes using dns-6614/dns-test-b5a9ae17-2d99-46df-8c29-fc14d8e58720 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:02:37.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6614" for this suite.
Jan 10 14:02:43.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:02:43.323: INFO: namespace dns-6614 deletion completed in 6.17097519s

• [SLOW TEST:25.748 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:02:43.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 10 14:02:43.430: INFO: Waiting up to 5m0s for pod "pod-104997b2-34e3-4ea0-94e1-a7b021d8f4b2" in namespace "emptydir-3211" to be "success or failure"
Jan 10 14:02:43.469: INFO: Pod "pod-104997b2-34e3-4ea0-94e1-a7b021d8f4b2": Phase="Pending", Reason="", readiness=false. Elapsed: 38.312908ms
Jan 10 14:02:45.483: INFO: Pod "pod-104997b2-34e3-4ea0-94e1-a7b021d8f4b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051953796s
Jan 10 14:02:47.491: INFO: Pod "pod-104997b2-34e3-4ea0-94e1-a7b021d8f4b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060565673s
Jan 10 14:02:49.508: INFO: Pod "pod-104997b2-34e3-4ea0-94e1-a7b021d8f4b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077482701s
Jan 10 14:02:51.524: INFO: Pod "pod-104997b2-34e3-4ea0-94e1-a7b021d8f4b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093323841s
STEP: Saw pod success
Jan 10 14:02:51.524: INFO: Pod "pod-104997b2-34e3-4ea0-94e1-a7b021d8f4b2" satisfied condition "success or failure"
Jan 10 14:02:51.531: INFO: Trying to get logs from node iruya-node pod pod-104997b2-34e3-4ea0-94e1-a7b021d8f4b2 container test-container: 
STEP: delete the pod
Jan 10 14:02:51.661: INFO: Waiting for pod pod-104997b2-34e3-4ea0-94e1-a7b021d8f4b2 to disappear
Jan 10 14:02:51.672: INFO: Pod pod-104997b2-34e3-4ea0-94e1-a7b021d8f4b2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:02:51.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3211" for this suite.
Jan 10 14:02:57.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:02:57.941: INFO: namespace emptydir-3211 deletion completed in 6.262023452s

• [SLOW TEST:14.618 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:02:57.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 14:02:58.028: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.262462ms)
Jan 10 14:02:58.036: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.774951ms)
Jan 10 14:02:58.073: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 36.665294ms)
Jan 10 14:02:58.080: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.836285ms)
Jan 10 14:02:58.087: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.004082ms)
Jan 10 14:02:58.093: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.574516ms)
Jan 10 14:02:58.099: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.726171ms)
Jan 10 14:02:58.106: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.102683ms)
Jan 10 14:02:58.112: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.457449ms)
Jan 10 14:02:58.117: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.191972ms)
Jan 10 14:02:58.123: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.505218ms)
Jan 10 14:02:58.128: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.822243ms)
Jan 10 14:02:58.135: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.769774ms)
Jan 10 14:02:58.140: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.350446ms)
Jan 10 14:02:58.146: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.173576ms)
Jan 10 14:02:58.152: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.929204ms)
Jan 10 14:02:58.159: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.693837ms)
Jan 10 14:02:58.214: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 54.848243ms)
Jan 10 14:02:58.222: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.725532ms)
Jan 10 14:02:58.228: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.04109ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:02:58.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4081" for this suite.
Jan 10 14:03:04.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:03:04.434: INFO: namespace proxy-4081 deletion completed in 6.201684682s

• [SLOW TEST:6.493 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:03:04.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 10 14:03:04.531: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:03:18.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8219" for this suite.
Jan 10 14:03:24.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:03:24.908: INFO: namespace init-container-8219 deletion completed in 6.848507479s

• [SLOW TEST:20.474 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:03:24.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:04:13.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1779" for this suite.
Jan 10 14:04:19.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:04:20.139: INFO: namespace container-runtime-1779 deletion completed in 6.278978734s

• [SLOW TEST:55.230 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:04:20.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 14:04:20.222: INFO: Waiting up to 5m0s for pod "downwardapi-volume-616b8ceb-e1fe-4d31-b9bf-78112565f90e" in namespace "projected-8392" to be "success or failure"
Jan 10 14:04:20.232: INFO: Pod "downwardapi-volume-616b8ceb-e1fe-4d31-b9bf-78112565f90e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.200029ms
Jan 10 14:04:22.247: INFO: Pod "downwardapi-volume-616b8ceb-e1fe-4d31-b9bf-78112565f90e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024791126s
Jan 10 14:04:24.255: INFO: Pod "downwardapi-volume-616b8ceb-e1fe-4d31-b9bf-78112565f90e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032046962s
Jan 10 14:04:26.270: INFO: Pod "downwardapi-volume-616b8ceb-e1fe-4d31-b9bf-78112565f90e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047628498s
Jan 10 14:04:28.281: INFO: Pod "downwardapi-volume-616b8ceb-e1fe-4d31-b9bf-78112565f90e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058423834s
STEP: Saw pod success
Jan 10 14:04:28.281: INFO: Pod "downwardapi-volume-616b8ceb-e1fe-4d31-b9bf-78112565f90e" satisfied condition "success or failure"
Jan 10 14:04:28.286: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-616b8ceb-e1fe-4d31-b9bf-78112565f90e container client-container: 
STEP: delete the pod
Jan 10 14:04:28.374: INFO: Waiting for pod downwardapi-volume-616b8ceb-e1fe-4d31-b9bf-78112565f90e to disappear
Jan 10 14:04:28.414: INFO: Pod downwardapi-volume-616b8ceb-e1fe-4d31-b9bf-78112565f90e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:04:28.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8392" for this suite.
Jan 10 14:04:34.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:04:34.634: INFO: namespace projected-8392 deletion completed in 6.215164999s

• [SLOW TEST:14.493 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:04:34.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-5816
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5816 to expose endpoints map[]
Jan 10 14:04:34.883: INFO: Get endpoints failed (61.197151ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 10 14:04:35.897: INFO: successfully validated that service multi-endpoint-test in namespace services-5816 exposes endpoints map[] (1.075805603s elapsed)
STEP: Creating pod pod1 in namespace services-5816
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5816 to expose endpoints map[pod1:[100]]
Jan 10 14:04:40.010: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.084995763s elapsed, will retry)
Jan 10 14:04:44.120: INFO: successfully validated that service multi-endpoint-test in namespace services-5816 exposes endpoints map[pod1:[100]] (8.195245096s elapsed)
STEP: Creating pod pod2 in namespace services-5816
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5816 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 10 14:04:48.756: INFO: Unexpected endpoints: found map[63ff94da-a2c9-42e8-980a-1792e0b9d3bc:[100]], expected map[pod1:[100] pod2:[101]] (4.627970192s elapsed, will retry)
Jan 10 14:04:50.817: INFO: successfully validated that service multi-endpoint-test in namespace services-5816 exposes endpoints map[pod1:[100] pod2:[101]] (6.689310208s elapsed)
STEP: Deleting pod pod1 in namespace services-5816
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5816 to expose endpoints map[pod2:[101]]
Jan 10 14:04:51.877: INFO: successfully validated that service multi-endpoint-test in namespace services-5816 exposes endpoints map[pod2:[101]] (1.05105878s elapsed)
STEP: Deleting pod pod2 in namespace services-5816
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5816 to expose endpoints map[]
Jan 10 14:04:53.750: INFO: successfully validated that service multi-endpoint-test in namespace services-5816 exposes endpoints map[] (1.860874855s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:04:54.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5816" for this suite.
Jan 10 14:05:16.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:05:16.478: INFO: namespace services-5816 deletion completed in 22.216815154s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:41.842 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:05:16.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-c8d5a580-cced-4d36-9dfa-8f6d6be4f620
STEP: Creating a pod to test consume configMaps
Jan 10 14:05:16.622: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-733f091b-bbaa-45b4-b6e5-9bce231c1c50" in namespace "projected-1292" to be "success or failure"
Jan 10 14:05:16.632: INFO: Pod "pod-projected-configmaps-733f091b-bbaa-45b4-b6e5-9bce231c1c50": Phase="Pending", Reason="", readiness=false. Elapsed: 9.697908ms
Jan 10 14:05:18.709: INFO: Pod "pod-projected-configmaps-733f091b-bbaa-45b4-b6e5-9bce231c1c50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087276906s
Jan 10 14:05:20.723: INFO: Pod "pod-projected-configmaps-733f091b-bbaa-45b4-b6e5-9bce231c1c50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100682582s
Jan 10 14:05:22.735: INFO: Pod "pod-projected-configmaps-733f091b-bbaa-45b4-b6e5-9bce231c1c50": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112391838s
Jan 10 14:05:24.745: INFO: Pod "pod-projected-configmaps-733f091b-bbaa-45b4-b6e5-9bce231c1c50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12307053s
STEP: Saw pod success
Jan 10 14:05:24.746: INFO: Pod "pod-projected-configmaps-733f091b-bbaa-45b4-b6e5-9bce231c1c50" satisfied condition "success or failure"
Jan 10 14:05:24.756: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-733f091b-bbaa-45b4-b6e5-9bce231c1c50 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 14:05:24.840: INFO: Waiting for pod pod-projected-configmaps-733f091b-bbaa-45b4-b6e5-9bce231c1c50 to disappear
Jan 10 14:05:24.860: INFO: Pod pod-projected-configmaps-733f091b-bbaa-45b4-b6e5-9bce231c1c50 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:05:24.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1292" for this suite.
Jan 10 14:05:30.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:05:31.099: INFO: namespace projected-1292 deletion completed in 6.22536485s

• [SLOW TEST:14.620 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:05:31.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan 10 14:05:31.198: INFO: Waiting up to 5m0s for pod "client-containers-19b4228e-cc7b-4ce3-b869-5a9d79aaa8af" in namespace "containers-5319" to be "success or failure"
Jan 10 14:05:31.388: INFO: Pod "client-containers-19b4228e-cc7b-4ce3-b869-5a9d79aaa8af": Phase="Pending", Reason="", readiness=false. Elapsed: 189.476689ms
Jan 10 14:05:33.397: INFO: Pod "client-containers-19b4228e-cc7b-4ce3-b869-5a9d79aaa8af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198630067s
Jan 10 14:05:35.412: INFO: Pod "client-containers-19b4228e-cc7b-4ce3-b869-5a9d79aaa8af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213112265s
Jan 10 14:05:37.424: INFO: Pod "client-containers-19b4228e-cc7b-4ce3-b869-5a9d79aaa8af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22565578s
Jan 10 14:05:39.438: INFO: Pod "client-containers-19b4228e-cc7b-4ce3-b869-5a9d79aaa8af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.239918063s
STEP: Saw pod success
Jan 10 14:05:39.439: INFO: Pod "client-containers-19b4228e-cc7b-4ce3-b869-5a9d79aaa8af" satisfied condition "success or failure"
Jan 10 14:05:39.445: INFO: Trying to get logs from node iruya-node pod client-containers-19b4228e-cc7b-4ce3-b869-5a9d79aaa8af container test-container: 
STEP: delete the pod
Jan 10 14:05:39.515: INFO: Waiting for pod client-containers-19b4228e-cc7b-4ce3-b869-5a9d79aaa8af to disappear
Jan 10 14:05:39.521: INFO: Pod client-containers-19b4228e-cc7b-4ce3-b869-5a9d79aaa8af no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:05:39.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5319" for this suite.
Jan 10 14:05:45.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:05:45.715: INFO: namespace containers-5319 deletion completed in 6.185768124s

• [SLOW TEST:14.614 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:05:45.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan 10 14:05:46.409: INFO: created pod pod-service-account-defaultsa
Jan 10 14:05:46.409: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 10 14:05:46.467: INFO: created pod pod-service-account-mountsa
Jan 10 14:05:46.468: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 10 14:05:46.478: INFO: created pod pod-service-account-nomountsa
Jan 10 14:05:46.478: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 10 14:05:46.497: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 10 14:05:46.497: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 10 14:05:46.538: INFO: created pod pod-service-account-mountsa-mountspec
Jan 10 14:05:46.538: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 10 14:05:46.632: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 10 14:05:46.632: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 10 14:05:46.642: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 10 14:05:46.642: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 10 14:05:46.677: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 10 14:05:46.677: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 10 14:05:46.711: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 10 14:05:46.712: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:05:46.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2937" for this suite.
Jan 10 14:06:10.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:06:11.121: INFO: namespace svcaccounts-2937 deletion completed in 24.334438834s

• [SLOW TEST:25.405 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:06:11.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0110 14:06:13.906755       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 10 14:06:13.907: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:06:13.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6374" for this suite.
Jan 10 14:06:21.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:06:21.257: INFO: namespace gc-6374 deletion completed in 6.546068861s

• [SLOW TEST:10.135 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:06:21.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan 10 14:06:21.372: INFO: Waiting up to 5m0s for pod "var-expansion-8e037c12-15d1-40f0-9d89-c7c6e1579166" in namespace "var-expansion-7200" to be "success or failure"
Jan 10 14:06:21.383: INFO: Pod "var-expansion-8e037c12-15d1-40f0-9d89-c7c6e1579166": Phase="Pending", Reason="", readiness=false. Elapsed: 11.39435ms
Jan 10 14:06:23.391: INFO: Pod "var-expansion-8e037c12-15d1-40f0-9d89-c7c6e1579166": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019209382s
Jan 10 14:06:25.401: INFO: Pod "var-expansion-8e037c12-15d1-40f0-9d89-c7c6e1579166": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029154617s
Jan 10 14:06:27.450: INFO: Pod "var-expansion-8e037c12-15d1-40f0-9d89-c7c6e1579166": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078454606s
Jan 10 14:06:29.460: INFO: Pod "var-expansion-8e037c12-15d1-40f0-9d89-c7c6e1579166": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088088848s
STEP: Saw pod success
Jan 10 14:06:29.460: INFO: Pod "var-expansion-8e037c12-15d1-40f0-9d89-c7c6e1579166" satisfied condition "success or failure"
Jan 10 14:06:29.492: INFO: Trying to get logs from node iruya-node pod var-expansion-8e037c12-15d1-40f0-9d89-c7c6e1579166 container dapi-container: 
STEP: delete the pod
Jan 10 14:06:29.575: INFO: Waiting for pod var-expansion-8e037c12-15d1-40f0-9d89-c7c6e1579166 to disappear
Jan 10 14:06:29.582: INFO: Pod var-expansion-8e037c12-15d1-40f0-9d89-c7c6e1579166 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:06:29.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7200" for this suite.
Jan 10 14:06:35.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:06:35.838: INFO: namespace var-expansion-7200 deletion completed in 6.250380792s

• [SLOW TEST:14.581 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:06:35.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 14:06:36.021: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1125988-7778-4d20-a162-b6882c4ce8f1" in namespace "downward-api-137" to be "success or failure"
Jan 10 14:06:36.028: INFO: Pod "downwardapi-volume-a1125988-7778-4d20-a162-b6882c4ce8f1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.059147ms
Jan 10 14:06:38.038: INFO: Pod "downwardapi-volume-a1125988-7778-4d20-a162-b6882c4ce8f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016355596s
Jan 10 14:06:40.050: INFO: Pod "downwardapi-volume-a1125988-7778-4d20-a162-b6882c4ce8f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028812685s
Jan 10 14:06:42.063: INFO: Pod "downwardapi-volume-a1125988-7778-4d20-a162-b6882c4ce8f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041955423s
Jan 10 14:06:44.070: INFO: Pod "downwardapi-volume-a1125988-7778-4d20-a162-b6882c4ce8f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04867986s
STEP: Saw pod success
Jan 10 14:06:44.070: INFO: Pod "downwardapi-volume-a1125988-7778-4d20-a162-b6882c4ce8f1" satisfied condition "success or failure"
Jan 10 14:06:44.074: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a1125988-7778-4d20-a162-b6882c4ce8f1 container client-container: 
STEP: delete the pod
Jan 10 14:06:44.296: INFO: Waiting for pod downwardapi-volume-a1125988-7778-4d20-a162-b6882c4ce8f1 to disappear
Jan 10 14:06:44.309: INFO: Pod downwardapi-volume-a1125988-7778-4d20-a162-b6882c4ce8f1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:06:44.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-137" for this suite.
Jan 10 14:06:50.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:06:50.538: INFO: namespace downward-api-137 deletion completed in 6.165498956s

• [SLOW TEST:14.698 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:06:50.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 14:06:50.743: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"14162977-17b6-4eeb-a28f-afac02b9df2d", Controller:(*bool)(0xc0028f64da), BlockOwnerDeletion:(*bool)(0xc0028f64db)}}
Jan 10 14:06:50.783: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"44223ae1-154e-42bb-9da7-16bdae472119", Controller:(*bool)(0xc0028f669a), BlockOwnerDeletion:(*bool)(0xc0028f669b)}}
Jan 10 14:06:50.835: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"bea81604-ad5d-4469-9035-fa05b2f2c30e", Controller:(*bool)(0xc0028f683a), BlockOwnerDeletion:(*bool)(0xc0028f683b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:06:55.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3226" for this suite.
Jan 10 14:07:01.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:07:02.027: INFO: namespace gc-3226 deletion completed in 6.13577959s

• [SLOW TEST:11.487 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:07:02.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-13401e91-fedc-4e66-824e-efed1e5547b6
STEP: Creating a pod to test consume configMaps
Jan 10 14:07:02.183: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f74cce7f-82b4-47c2-b16d-0cad18216c75" in namespace "projected-4389" to be "success or failure"
Jan 10 14:07:02.210: INFO: Pod "pod-projected-configmaps-f74cce7f-82b4-47c2-b16d-0cad18216c75": Phase="Pending", Reason="", readiness=false. Elapsed: 26.805068ms
Jan 10 14:07:04.222: INFO: Pod "pod-projected-configmaps-f74cce7f-82b4-47c2-b16d-0cad18216c75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039299308s
Jan 10 14:07:06.231: INFO: Pod "pod-projected-configmaps-f74cce7f-82b4-47c2-b16d-0cad18216c75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047467075s
Jan 10 14:07:08.242: INFO: Pod "pod-projected-configmaps-f74cce7f-82b4-47c2-b16d-0cad18216c75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05926992s
Jan 10 14:07:10.254: INFO: Pod "pod-projected-configmaps-f74cce7f-82b4-47c2-b16d-0cad18216c75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071269271s
STEP: Saw pod success
Jan 10 14:07:10.255: INFO: Pod "pod-projected-configmaps-f74cce7f-82b4-47c2-b16d-0cad18216c75" satisfied condition "success or failure"
Jan 10 14:07:10.258: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f74cce7f-82b4-47c2-b16d-0cad18216c75 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 14:07:10.457: INFO: Waiting for pod pod-projected-configmaps-f74cce7f-82b4-47c2-b16d-0cad18216c75 to disappear
Jan 10 14:07:10.467: INFO: Pod pod-projected-configmaps-f74cce7f-82b4-47c2-b16d-0cad18216c75 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:07:10.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4389" for this suite.
Jan 10 14:07:16.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:07:16.697: INFO: namespace projected-4389 deletion completed in 6.223075355s

• [SLOW TEST:14.670 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:07:16.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:07:24.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7432" for this suite.
Jan 10 14:07:31.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:07:31.177: INFO: namespace emptydir-wrapper-7432 deletion completed in 6.161542521s

• [SLOW TEST:14.478 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:07:31.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-896262b4-52c1-4909-80d1-54d60ad5128d
STEP: Creating a pod to test consume secrets
Jan 10 14:07:31.298: INFO: Waiting up to 5m0s for pod "pod-secrets-563d02af-2d2a-444a-b343-8c212921a863" in namespace "secrets-9290" to be "success or failure"
Jan 10 14:07:31.325: INFO: Pod "pod-secrets-563d02af-2d2a-444a-b343-8c212921a863": Phase="Pending", Reason="", readiness=false. Elapsed: 27.155824ms
Jan 10 14:07:33.712: INFO: Pod "pod-secrets-563d02af-2d2a-444a-b343-8c212921a863": Phase="Pending", Reason="", readiness=false. Elapsed: 2.414434015s
Jan 10 14:07:35.731: INFO: Pod "pod-secrets-563d02af-2d2a-444a-b343-8c212921a863": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433265179s
Jan 10 14:07:37.743: INFO: Pod "pod-secrets-563d02af-2d2a-444a-b343-8c212921a863": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44549302s
Jan 10 14:07:39.754: INFO: Pod "pod-secrets-563d02af-2d2a-444a-b343-8c212921a863": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.455986396s
STEP: Saw pod success
Jan 10 14:07:39.754: INFO: Pod "pod-secrets-563d02af-2d2a-444a-b343-8c212921a863" satisfied condition "success or failure"
Jan 10 14:07:39.758: INFO: Trying to get logs from node iruya-node pod pod-secrets-563d02af-2d2a-444a-b343-8c212921a863 container secret-env-test: 
STEP: delete the pod
Jan 10 14:07:39.944: INFO: Waiting for pod pod-secrets-563d02af-2d2a-444a-b343-8c212921a863 to disappear
Jan 10 14:07:39.952: INFO: Pod pod-secrets-563d02af-2d2a-444a-b343-8c212921a863 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:07:39.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9290" for this suite.
Jan 10 14:07:45.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:07:46.138: INFO: namespace secrets-9290 deletion completed in 6.17932011s

• [SLOW TEST:14.961 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:07:46.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 10 14:07:46.281: INFO: Waiting up to 5m0s for pod "downward-api-cfb1ed6c-c457-4694-a6e3-3af0ab17d1b7" in namespace "downward-api-1590" to be "success or failure"
Jan 10 14:07:46.290: INFO: Pod "downward-api-cfb1ed6c-c457-4694-a6e3-3af0ab17d1b7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.353243ms
Jan 10 14:07:48.302: INFO: Pod "downward-api-cfb1ed6c-c457-4694-a6e3-3af0ab17d1b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020399728s
Jan 10 14:07:50.315: INFO: Pod "downward-api-cfb1ed6c-c457-4694-a6e3-3af0ab17d1b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032865452s
Jan 10 14:07:52.333: INFO: Pod "downward-api-cfb1ed6c-c457-4694-a6e3-3af0ab17d1b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051054503s
Jan 10 14:07:54.349: INFO: Pod "downward-api-cfb1ed6c-c457-4694-a6e3-3af0ab17d1b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067524096s
STEP: Saw pod success
Jan 10 14:07:54.350: INFO: Pod "downward-api-cfb1ed6c-c457-4694-a6e3-3af0ab17d1b7" satisfied condition "success or failure"
Jan 10 14:07:54.357: INFO: Trying to get logs from node iruya-node pod downward-api-cfb1ed6c-c457-4694-a6e3-3af0ab17d1b7 container dapi-container: 
STEP: delete the pod
Jan 10 14:07:54.719: INFO: Waiting for pod downward-api-cfb1ed6c-c457-4694-a6e3-3af0ab17d1b7 to disappear
Jan 10 14:07:54.722: INFO: Pod downward-api-cfb1ed6c-c457-4694-a6e3-3af0ab17d1b7 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:07:54.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1590" for this suite.
Jan 10 14:08:00.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:08:00.877: INFO: namespace downward-api-1590 deletion completed in 6.149740108s

• [SLOW TEST:14.738 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:08:00.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan 10 14:08:01.011: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8086" to be "success or failure"
Jan 10 14:08:01.017: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.606355ms
Jan 10 14:08:03.028: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017610674s
Jan 10 14:08:05.035: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023915792s
Jan 10 14:08:07.048: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03740067s
Jan 10 14:08:09.064: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053429761s
Jan 10 14:08:11.070: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059597807s
STEP: Saw pod success
Jan 10 14:08:11.070: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 10 14:08:11.072: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 10 14:08:11.130: INFO: Waiting for pod pod-host-path-test to disappear
Jan 10 14:08:11.134: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:08:11.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8086" for this suite.
Jan 10 14:08:17.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:08:17.390: INFO: namespace hostpath-8086 deletion completed in 6.250414986s

• [SLOW TEST:16.512 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:08:17.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 10 14:08:17.572: INFO: Waiting up to 5m0s for pod "downward-api-6155add9-0413-4d49-866a-fd1ab7055e9e" in namespace "downward-api-2867" to be "success or failure"
Jan 10 14:08:17.583: INFO: Pod "downward-api-6155add9-0413-4d49-866a-fd1ab7055e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.086284ms
Jan 10 14:08:19.592: INFO: Pod "downward-api-6155add9-0413-4d49-866a-fd1ab7055e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020391728s
Jan 10 14:08:21.600: INFO: Pod "downward-api-6155add9-0413-4d49-866a-fd1ab7055e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028384253s
Jan 10 14:08:23.611: INFO: Pod "downward-api-6155add9-0413-4d49-866a-fd1ab7055e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039098274s
Jan 10 14:08:25.618: INFO: Pod "downward-api-6155add9-0413-4d49-866a-fd1ab7055e9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046209727s
STEP: Saw pod success
Jan 10 14:08:25.618: INFO: Pod "downward-api-6155add9-0413-4d49-866a-fd1ab7055e9e" satisfied condition "success or failure"
Jan 10 14:08:25.621: INFO: Trying to get logs from node iruya-node pod downward-api-6155add9-0413-4d49-866a-fd1ab7055e9e container dapi-container: 
STEP: delete the pod
Jan 10 14:08:25.715: INFO: Waiting for pod downward-api-6155add9-0413-4d49-866a-fd1ab7055e9e to disappear
Jan 10 14:08:25.726: INFO: Pod downward-api-6155add9-0413-4d49-866a-fd1ab7055e9e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:08:25.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2867" for this suite.
Jan 10 14:08:33.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:08:33.974: INFO: namespace downward-api-2867 deletion completed in 8.22909826s

• [SLOW TEST:16.583 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:08:33.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-e74ab8a9-8cf3-480d-8bc5-3f3b1e0175e0
STEP: Creating configMap with name cm-test-opt-upd-5affdfad-44c1-4512-8aed-a2eccfc6173d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-e74ab8a9-8cf3-480d-8bc5-3f3b1e0175e0
STEP: Updating configmap cm-test-opt-upd-5affdfad-44c1-4512-8aed-a2eccfc6173d
STEP: Creating configMap with name cm-test-opt-create-e70de570-91f7-4c76-941a-322c4ce9c0dd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:08:48.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8767" for this suite.
Jan 10 14:09:12.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:09:12.645: INFO: namespace configmap-8767 deletion completed in 24.202590368s

• [SLOW TEST:38.670 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:09:12.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8524
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 10 14:09:12.814: INFO: Found 0 stateful pods, waiting for 3
Jan 10 14:09:22.892: INFO: Found 2 stateful pods, waiting for 3
Jan 10 14:09:32.829: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 14:09:32.829: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 14:09:32.829: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 10 14:09:42.834: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 14:09:42.834: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 14:09:42.834: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 14:09:42.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8524 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 14:09:45.376: INFO: stderr: "I0110 14:09:45.081184    1424 log.go:172] (0xc000b20420) (0xc0005d8a00) Create stream\nI0110 14:09:45.081352    1424 log.go:172] (0xc000b20420) (0xc0005d8a00) Stream added, broadcasting: 1\nI0110 14:09:45.085895    1424 log.go:172] (0xc000b20420) Reply frame received for 1\nI0110 14:09:45.085950    1424 log.go:172] (0xc000b20420) (0xc0005d8aa0) Create stream\nI0110 14:09:45.085966    1424 log.go:172] (0xc000b20420) (0xc0005d8aa0) Stream added, broadcasting: 3\nI0110 14:09:45.092288    1424 log.go:172] (0xc000b20420) Reply frame received for 3\nI0110 14:09:45.092328    1424 log.go:172] (0xc000b20420) (0xc0005d8b40) Create stream\nI0110 14:09:45.092344    1424 log.go:172] (0xc000b20420) (0xc0005d8b40) Stream added, broadcasting: 5\nI0110 14:09:45.096431    1424 log.go:172] (0xc000b20420) Reply frame received for 5\nI0110 14:09:45.215813    1424 log.go:172] (0xc000b20420) Data frame received for 5\nI0110 14:09:45.216468    1424 log.go:172] (0xc0005d8b40) (5) Data frame handling\nI0110 14:09:45.216506    1424 log.go:172] (0xc0005d8b40) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0110 14:09:45.244986    1424 log.go:172] (0xc000b20420) Data frame received for 3\nI0110 14:09:45.245053    1424 log.go:172] (0xc0005d8aa0) (3) Data frame handling\nI0110 14:09:45.245081    1424 log.go:172] (0xc0005d8aa0) (3) Data frame sent\nI0110 14:09:45.356082    1424 log.go:172] (0xc000b20420) Data frame received for 1\nI0110 14:09:45.356231    1424 log.go:172] (0xc000b20420) (0xc0005d8b40) Stream removed, broadcasting: 5\nI0110 14:09:45.356302    1424 log.go:172] (0xc0005d8a00) (1) Data frame handling\nI0110 14:09:45.356338    1424 log.go:172] (0xc0005d8a00) (1) Data frame sent\nI0110 14:09:45.356387    1424 log.go:172] (0xc000b20420) (0xc0005d8aa0) Stream removed, broadcasting: 3\nI0110 14:09:45.356437    1424 log.go:172] (0xc000b20420) (0xc0005d8a00) Stream removed, broadcasting: 1\nI0110 14:09:45.356470    1424 log.go:172] (0xc000b20420) Go away received\nI0110 14:09:45.357633    1424 log.go:172] (0xc000b20420) (0xc0005d8a00) Stream removed, broadcasting: 1\nI0110 14:09:45.357665    1424 log.go:172] (0xc000b20420) (0xc0005d8aa0) Stream removed, broadcasting: 3\nI0110 14:09:45.357693    1424 log.go:172] (0xc000b20420) (0xc0005d8b40) Stream removed, broadcasting: 5\n"
Jan 10 14:09:45.377: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 14:09:45.377: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 10 14:09:55.439: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 10 14:10:05.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8524 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:10:05.990: INFO: stderr: "I0110 14:10:05.683437    1456 log.go:172] (0xc0008dc370) (0xc000916640) Create stream\nI0110 14:10:05.683636    1456 log.go:172] (0xc0008dc370) (0xc000916640) Stream added, broadcasting: 1\nI0110 14:10:05.687327    1456 log.go:172] (0xc0008dc370) Reply frame received for 1\nI0110 14:10:05.687380    1456 log.go:172] (0xc0008dc370) (0xc0005ec280) Create stream\nI0110 14:10:05.687403    1456 log.go:172] (0xc0008dc370) (0xc0005ec280) Stream added, broadcasting: 3\nI0110 14:10:05.688951    1456 log.go:172] (0xc0008dc370) Reply frame received for 3\nI0110 14:10:05.689002    1456 log.go:172] (0xc0008dc370) (0xc0005ec320) Create stream\nI0110 14:10:05.689013    1456 log.go:172] (0xc0008dc370) (0xc0005ec320) Stream added, broadcasting: 5\nI0110 14:10:05.690182    1456 log.go:172] (0xc0008dc370) Reply frame received for 5\nI0110 14:10:05.850706    1456 log.go:172] (0xc0008dc370) Data frame received for 5\nI0110 14:10:05.850938    1456 log.go:172] (0xc0005ec320) (5) Data frame handling\nI0110 14:10:05.850992    1456 log.go:172] (0xc0005ec320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0110 14:10:05.851056    1456 log.go:172] (0xc0008dc370) Data frame received for 3\nI0110 14:10:05.851085    1456 log.go:172] (0xc0005ec280) (3) Data frame handling\nI0110 14:10:05.851113    1456 log.go:172] (0xc0005ec280) (3) Data frame sent\nI0110 14:10:05.981616    1456 log.go:172] (0xc0008dc370) (0xc0005ec280) Stream removed, broadcasting: 3\nI0110 14:10:05.981920    1456 log.go:172] (0xc0008dc370) Data frame received for 1\nI0110 14:10:05.981980    1456 log.go:172] (0xc0008dc370) (0xc0005ec320) Stream removed, broadcasting: 5\nI0110 14:10:05.982167    1456 log.go:172] (0xc000916640) (1) Data frame handling\nI0110 14:10:05.982241    1456 log.go:172] (0xc000916640) (1) Data frame sent\nI0110 14:10:05.982269    1456 log.go:172] (0xc0008dc370) (0xc000916640) Stream removed, broadcasting: 1\nI0110 14:10:05.982307    1456 log.go:172] (0xc0008dc370) Go away received\nI0110 14:10:05.982845    1456 log.go:172] (0xc0008dc370) (0xc000916640) Stream removed, broadcasting: 1\nI0110 14:10:05.982882    1456 log.go:172] (0xc0008dc370) (0xc0005ec280) Stream removed, broadcasting: 3\nI0110 14:10:05.982902    1456 log.go:172] (0xc0008dc370) (0xc0005ec320) Stream removed, broadcasting: 5\n"
Jan 10 14:10:05.991: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 14:10:05.991: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 14:10:16.034: INFO: Waiting for StatefulSet statefulset-8524/ss2 to complete update
Jan 10 14:10:16.035: INFO: Waiting for Pod statefulset-8524/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 14:10:16.035: INFO: Waiting for Pod statefulset-8524/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 14:10:16.035: INFO: Waiting for Pod statefulset-8524/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 14:10:26.050: INFO: Waiting for StatefulSet statefulset-8524/ss2 to complete update
Jan 10 14:10:26.050: INFO: Waiting for Pod statefulset-8524/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 14:10:26.050: INFO: Waiting for Pod statefulset-8524/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 14:10:36.066: INFO: Waiting for StatefulSet statefulset-8524/ss2 to complete update
Jan 10 14:10:36.066: INFO: Waiting for Pod statefulset-8524/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 14:10:36.066: INFO: Waiting for Pod statefulset-8524/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 14:10:46.061: INFO: Waiting for StatefulSet statefulset-8524/ss2 to complete update
Jan 10 14:10:46.061: INFO: Waiting for Pod statefulset-8524/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 14:10:56.053: INFO: Waiting for StatefulSet statefulset-8524/ss2 to complete update
Jan 10 14:10:56.053: INFO: Waiting for Pod statefulset-8524/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Jan 10 14:11:06.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8524 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 14:11:06.620: INFO: stderr: "I0110 14:11:06.280487    1477 log.go:172] (0xc000b24420) (0xc000600820) Create stream\nI0110 14:11:06.280617    1477 log.go:172] (0xc000b24420) (0xc000600820) Stream added, broadcasting: 1\nI0110 14:11:06.299537    1477 log.go:172] (0xc000b24420) Reply frame received for 1\nI0110 14:11:06.299668    1477 log.go:172] (0xc000b24420) (0xc000748000) Create stream\nI0110 14:11:06.299706    1477 log.go:172] (0xc000b24420) (0xc000748000) Stream added, broadcasting: 3\nI0110 14:11:06.303951    1477 log.go:172] (0xc000b24420) Reply frame received for 3\nI0110 14:11:06.303996    1477 log.go:172] (0xc000b24420) (0xc0007dc000) Create stream\nI0110 14:11:06.304008    1477 log.go:172] (0xc000b24420) (0xc0007dc000) Stream added, broadcasting: 5\nI0110 14:11:06.312145    1477 log.go:172] (0xc000b24420) Reply frame received for 5\nI0110 14:11:06.417726    1477 log.go:172] (0xc000b24420) Data frame received for 5\nI0110 14:11:06.417804    1477 log.go:172] (0xc0007dc000) (5) Data frame handling\nI0110 14:11:06.417821    1477 log.go:172] (0xc0007dc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0110 14:11:06.459663    1477 log.go:172] (0xc000b24420) Data frame received for 3\nI0110 14:11:06.459821    1477 log.go:172] (0xc000748000) (3) Data frame handling\nI0110 14:11:06.459839    1477 log.go:172] (0xc000748000) (3) Data frame sent\nI0110 14:11:06.607675    1477 log.go:172] (0xc000b24420) (0xc000748000) Stream removed, broadcasting: 3\nI0110 14:11:06.607855    1477 log.go:172] (0xc000b24420) Data frame received for 1\nI0110 14:11:06.607873    1477 log.go:172] (0xc000600820) (1) Data frame handling\nI0110 14:11:06.607889    1477 log.go:172] (0xc000600820) (1) Data frame sent\nI0110 14:11:06.607941    1477 log.go:172] (0xc000b24420) (0xc0007dc000) Stream removed, broadcasting: 5\nI0110 14:11:06.608020    1477 log.go:172] (0xc000b24420) (0xc000600820) Stream removed, broadcasting: 1\nI0110 14:11:06.608049    1477 log.go:172] (0xc000b24420) Go away received\nI0110 14:11:06.609465    1477 log.go:172] (0xc000b24420) (0xc000600820) Stream removed, broadcasting: 1\nI0110 14:11:06.610362    1477 log.go:172] (0xc000b24420) (0xc000748000) Stream removed, broadcasting: 3\nI0110 14:11:06.610450    1477 log.go:172] (0xc000b24420) (0xc0007dc000) Stream removed, broadcasting: 5\n"
Jan 10 14:11:06.620: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 14:11:06.620: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 14:11:16.690: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 10 14:11:26.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8524 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:11:27.212: INFO: stderr: "I0110 14:11:26.984711    1500 log.go:172] (0xc0001160b0) (0xc0008146e0) Create stream\nI0110 14:11:26.984838    1500 log.go:172] (0xc0001160b0) (0xc0008146e0) Stream added, broadcasting: 1\nI0110 14:11:26.987280    1500 log.go:172] (0xc0001160b0) Reply frame received for 1\nI0110 14:11:26.987318    1500 log.go:172] (0xc0001160b0) (0xc00063a320) Create stream\nI0110 14:11:26.987324    1500 log.go:172] (0xc0001160b0) (0xc00063a320) Stream added, broadcasting: 3\nI0110 14:11:26.988566    1500 log.go:172] (0xc0001160b0) Reply frame received for 3\nI0110 14:11:26.988603    1500 log.go:172] (0xc0001160b0) (0xc000364000) Create stream\nI0110 14:11:26.988618    1500 log.go:172] (0xc0001160b0) (0xc000364000) Stream added, broadcasting: 5\nI0110 14:11:26.989750    1500 log.go:172] (0xc0001160b0) Reply frame received for 5\nI0110 14:11:27.131190    1500 log.go:172] (0xc0001160b0) Data frame received for 3\nI0110 14:11:27.131332    1500 log.go:172] (0xc00063a320) (3) Data frame handling\nI0110 14:11:27.131363    1500 log.go:172] (0xc00063a320) (3) Data frame sent\nI0110 14:11:27.131399    1500 log.go:172] (0xc0001160b0) Data frame received for 5\nI0110 14:11:27.131422    1500 log.go:172] (0xc000364000) (5) Data frame handling\nI0110 14:11:27.131435    1500 log.go:172] (0xc000364000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0110 14:11:27.206688    1500 log.go:172] (0xc0001160b0) Data frame received for 1\nI0110 14:11:27.206721    1500 log.go:172] (0xc0008146e0) (1) Data frame handling\nI0110 14:11:27.206746    1500 log.go:172] (0xc0001160b0) (0xc000364000) Stream removed, broadcasting: 5\nI0110 14:11:27.206801    1500 log.go:172] (0xc0008146e0) (1) Data frame sent\nI0110 14:11:27.206871    1500 log.go:172] (0xc0001160b0) (0xc00063a320) Stream removed, broadcasting: 3\nI0110 14:11:27.206909    1500 log.go:172] (0xc0001160b0) (0xc0008146e0) Stream removed, broadcasting: 1\nI0110 14:11:27.207262    1500 log.go:172] (0xc0001160b0) (0xc0008146e0) Stream removed, broadcasting: 1\nI0110 14:11:27.207285    1500 log.go:172] (0xc0001160b0) (0xc00063a320) Stream removed, broadcasting: 3\nI0110 14:11:27.207301    1500 log.go:172] (0xc0001160b0) (0xc000364000) Stream removed, broadcasting: 5\n"
Jan 10 14:11:27.213: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 14:11:27.213: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 14:11:37.255: INFO: Waiting for StatefulSet statefulset-8524/ss2 to complete update
Jan 10 14:11:37.256: INFO: Waiting for Pod statefulset-8524/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 14:11:37.256: INFO: Waiting for Pod statefulset-8524/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 14:11:47.452: INFO: Waiting for StatefulSet statefulset-8524/ss2 to complete update
Jan 10 14:11:47.452: INFO: Waiting for Pod statefulset-8524/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 14:11:47.452: INFO: Waiting for Pod statefulset-8524/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 14:11:57.277: INFO: Waiting for StatefulSet statefulset-8524/ss2 to complete update
Jan 10 14:11:57.278: INFO: Waiting for Pod statefulset-8524/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 14:11:57.278: INFO: Waiting for Pod statefulset-8524/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 14:12:07.284: INFO: Waiting for StatefulSet statefulset-8524/ss2 to complete update
Jan 10 14:12:07.284: INFO: Waiting for Pod statefulset-8524/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 14:12:17.267: INFO: Waiting for StatefulSet statefulset-8524/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 10 14:12:27.291: INFO: Deleting all statefulset in ns statefulset-8524
Jan 10 14:12:27.296: INFO: Scaling statefulset ss2 to 0
Jan 10 14:12:57.332: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 14:12:57.341: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:12:57.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8524" for this suite.
Jan 10 14:13:05.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:13:05.662: INFO: namespace statefulset-8524 deletion completed in 8.265500706s

• [SLOW TEST:233.017 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:13:05.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0110 14:13:17.250512       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 10 14:13:17.250: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:13:17.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8069" for this suite.
Jan 10 14:13:35.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:13:35.692: INFO: namespace gc-8069 deletion completed in 18.435495857s

• [SLOW TEST:30.030 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:13:35.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-bf7d7669-3ccf-47f4-a8eb-c1c9172e9c1a
STEP: Creating a pod to test consume configMaps
Jan 10 14:13:35.847: INFO: Waiting up to 5m0s for pod "pod-configmaps-964ffcce-1d83-4e41-8c91-7bceaf8bd566" in namespace "configmap-5087" to be "success or failure"
Jan 10 14:13:35.884: INFO: Pod "pod-configmaps-964ffcce-1d83-4e41-8c91-7bceaf8bd566": Phase="Pending", Reason="", readiness=false. Elapsed: 37.17944ms
Jan 10 14:13:37.895: INFO: Pod "pod-configmaps-964ffcce-1d83-4e41-8c91-7bceaf8bd566": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048288983s
Jan 10 14:13:39.910: INFO: Pod "pod-configmaps-964ffcce-1d83-4e41-8c91-7bceaf8bd566": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062537528s
Jan 10 14:13:41.927: INFO: Pod "pod-configmaps-964ffcce-1d83-4e41-8c91-7bceaf8bd566": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080147922s
Jan 10 14:13:43.944: INFO: Pod "pod-configmaps-964ffcce-1d83-4e41-8c91-7bceaf8bd566": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.09645027s
STEP: Saw pod success
Jan 10 14:13:43.944: INFO: Pod "pod-configmaps-964ffcce-1d83-4e41-8c91-7bceaf8bd566" satisfied condition "success or failure"
Jan 10 14:13:43.958: INFO: Trying to get logs from node iruya-node pod pod-configmaps-964ffcce-1d83-4e41-8c91-7bceaf8bd566 container configmap-volume-test: 
STEP: delete the pod
Jan 10 14:13:44.190: INFO: Waiting for pod pod-configmaps-964ffcce-1d83-4e41-8c91-7bceaf8bd566 to disappear
Jan 10 14:13:44.197: INFO: Pod pod-configmaps-964ffcce-1d83-4e41-8c91-7bceaf8bd566 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:13:44.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5087" for this suite.
Jan 10 14:13:50.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:13:50.379: INFO: namespace configmap-5087 deletion completed in 6.173289261s

• [SLOW TEST:14.686 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
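The ConfigMap test above mounts a single ConfigMap into two separate volumes of the same pod. A minimal sketch of what such a pod spec looks like, built as a plain dict (the e2e suite constructs the equivalent Go structs; every name here is a hypothetical placeholder, not the test's generated one):

```python
# Sketch of a pod that consumes the same ConfigMap through two volumes.
# Names ("cm-vol-1", "configmap-demo", ...) are illustrative placeholders.
def pod_with_configmap_twice(cm_name):
    volumes = [
        {"name": "cm-vol-1", "configMap": {"name": cm_name}},
        {"name": "cm-vol-2", "configMap": {"name": cm_name}},
    ]
    mounts = [
        {"name": v["name"], "mountPath": f"/etc/{v['name']}"} for v in volumes
    ]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "configmap-demo"},
        "spec": {
            "volumes": volumes,
            "containers": [{
                "name": "configmap-volume-test",
                "image": "busybox",
                "command": ["sh", "-c", "cat /etc/cm-vol-1/* /etc/cm-vol-2/*"],
                "volumeMounts": mounts,
            }],
        },
    }

spec = pod_with_configmap_twice("my-config")
# Both volumes reference the same ConfigMap:
assert {v["configMap"]["name"] for v in spec["spec"]["volumes"]} == {"my-config"}
```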
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:13:50.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 14:13:50.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:13:58.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9169" for this suite.
Jan 10 14:14:42.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:14:42.872: INFO: namespace pods-9169 deletion completed in 44.169768882s

• [SLOW TEST:52.491 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
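Retrieving container logs over websockets, as the pods test above does, goes through the pod's `log` subresource on the API server (`/api/v1/namespaces/{ns}/pods/{name}/log`). A sketch of the URL construction only — the server address and pod name below are placeholders, and the real test drives this through the framework's websocket client rather than building URLs by hand:

```python
from urllib.parse import urlencode

def pod_log_url(apiserver, namespace, pod, container=None, follow=False):
    """Build the API-server log-subresource URL used for log retrieval."""
    params = {}
    if container:
        params["container"] = container
    if follow:
        params["follow"] = "true"
    query = f"?{urlencode(params)}" if params else ""
    return f"{apiserver}/api/v1/namespaces/{namespace}/pods/{pod}/log{query}"

# Hypothetical cluster address and pod name:
url = pod_log_url("https://10.96.0.1:443", "pods-9169", "pod-logs-websocket",
                  follow=True)
assert url.endswith("/pods/pod-logs-websocket/log?follow=true")
```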
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:14:42.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3028
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3028
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3028
Jan 10 14:14:43.031: INFO: Found 0 stateful pods, waiting for 1
Jan 10 14:14:53.044: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 10 14:14:53.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 14:14:53.580: INFO: stderr: "I0110 14:14:53.263908    1518 log.go:172] (0xc000a2c2c0) (0xc0008a65a0) Create stream\nI0110 14:14:53.264016    1518 log.go:172] (0xc000a2c2c0) (0xc0008a65a0) Stream added, broadcasting: 1\nI0110 14:14:53.272137    1518 log.go:172] (0xc000a2c2c0) Reply frame received for 1\nI0110 14:14:53.272187    1518 log.go:172] (0xc000a2c2c0) (0xc00061a320) Create stream\nI0110 14:14:53.272198    1518 log.go:172] (0xc000a2c2c0) (0xc00061a320) Stream added, broadcasting: 3\nI0110 14:14:53.275125    1518 log.go:172] (0xc000a2c2c0) Reply frame received for 3\nI0110 14:14:53.275153    1518 log.go:172] (0xc000a2c2c0) (0xc0008a66e0) Create stream\nI0110 14:14:53.275162    1518 log.go:172] (0xc000a2c2c0) (0xc0008a66e0) Stream added, broadcasting: 5\nI0110 14:14:53.277129    1518 log.go:172] (0xc000a2c2c0) Reply frame received for 5\nI0110 14:14:53.397794    1518 log.go:172] (0xc000a2c2c0) Data frame received for 5\nI0110 14:14:53.397842    1518 log.go:172] (0xc0008a66e0) (5) Data frame handling\nI0110 14:14:53.397864    1518 log.go:172] (0xc0008a66e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0110 14:14:53.427990    1518 log.go:172] (0xc000a2c2c0) Data frame received for 3\nI0110 14:14:53.428013    1518 log.go:172] (0xc00061a320) (3) Data frame handling\nI0110 14:14:53.428035    1518 log.go:172] (0xc00061a320) (3) Data frame sent\nI0110 14:14:53.570162    1518 log.go:172] (0xc000a2c2c0) Data frame received for 1\nI0110 14:14:53.570428    1518 log.go:172] (0xc0008a65a0) (1) Data frame handling\nI0110 14:14:53.570535    1518 log.go:172] (0xc0008a65a0) (1) Data frame sent\nI0110 14:14:53.570757    1518 log.go:172] (0xc000a2c2c0) (0xc0008a65a0) Stream removed, broadcasting: 1\nI0110 14:14:53.572045    1518 log.go:172] (0xc000a2c2c0) (0xc0008a66e0) Stream removed, broadcasting: 5\nI0110 14:14:53.572089    1518 log.go:172] (0xc000a2c2c0) (0xc00061a320) Stream removed, broadcasting: 3\nI0110 14:14:53.572130    1518 log.go:172] 
(0xc000a2c2c0) (0xc0008a65a0) Stream removed, broadcasting: 1\nI0110 14:14:53.572148    1518 log.go:172] (0xc000a2c2c0) (0xc00061a320) Stream removed, broadcasting: 3\nI0110 14:14:53.572180    1518 log.go:172] (0xc000a2c2c0) (0xc0008a66e0) Stream removed, broadcasting: 5\n"
Jan 10 14:14:53.580: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 14:14:53.581: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
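The `mv` above is how the test flips pod readiness: the stateful pod's readiness probe serves `/usr/share/nginx/html/index.html`, so moving the file away makes the probe fail (Ready=false) and moving it back restores readiness. A local simulation of that toggle, using temp directories as stand-ins for the nginx docroot — it assumes nothing about a cluster:

```python
# Simulates the readiness toggle: the "probe" succeeds only while the
# served file exists. Temp dirs stand in for the docroot and /tmp.
import shutil
import tempfile
from pathlib import Path

docroot = Path(tempfile.mkdtemp())
stash = Path(tempfile.mkdtemp())
index = docroot / "index.html"
index.write_text("ok")

def probe_ready():
    """Stand-in for an HTTP GET readiness probe against index.html."""
    return index.exists()

assert probe_ready()                                      # Running - Ready=true
shutil.move(str(index), str(stash / "index.html"))        # mv index.html /tmp/
assert not probe_ready()                                  # Running - Ready=false
shutil.move(str(stash / "index.html"), str(index))        # move it back
assert probe_ready()                                      # Ready=true again
```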

Jan 10 14:14:53.604: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 10 14:15:03.670: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 14:15:03.671: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 14:15:03.787: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999396s
Jan 10 14:15:04.806: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.907602255s
Jan 10 14:15:05.824: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.88821501s
Jan 10 14:15:06.835: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.870570684s
Jan 10 14:15:07.853: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.859646759s
Jan 10 14:15:08.869: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.840897639s
Jan 10 14:15:09.881: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.8253066s
Jan 10 14:15:10.894: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.813524789s
Jan 10 14:15:11.909: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.800952983s
Jan 10 14:15:12.920: INFO: Verifying statefulset ss doesn't scale past 1 for another 786.045486ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3028
Jan 10 14:15:13.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:15:14.472: INFO: stderr: "I0110 14:15:14.147774    1540 log.go:172] (0xc000334420) (0xc00022e780) Create stream\nI0110 14:15:14.147953    1540 log.go:172] (0xc000334420) (0xc00022e780) Stream added, broadcasting: 1\nI0110 14:15:14.154385    1540 log.go:172] (0xc000334420) Reply frame received for 1\nI0110 14:15:14.154414    1540 log.go:172] (0xc000334420) (0xc0004581e0) Create stream\nI0110 14:15:14.154421    1540 log.go:172] (0xc000334420) (0xc0004581e0) Stream added, broadcasting: 3\nI0110 14:15:14.156789    1540 log.go:172] (0xc000334420) Reply frame received for 3\nI0110 14:15:14.156833    1540 log.go:172] (0xc000334420) (0xc00022e820) Create stream\nI0110 14:15:14.156845    1540 log.go:172] (0xc000334420) (0xc00022e820) Stream added, broadcasting: 5\nI0110 14:15:14.158403    1540 log.go:172] (0xc000334420) Reply frame received for 5\nI0110 14:15:14.260208    1540 log.go:172] (0xc000334420) Data frame received for 5\nI0110 14:15:14.260267    1540 log.go:172] (0xc00022e820) (5) Data frame handling\nI0110 14:15:14.260283    1540 log.go:172] (0xc00022e820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0110 14:15:14.260609    1540 log.go:172] (0xc000334420) Data frame received for 3\nI0110 14:15:14.260627    1540 log.go:172] (0xc0004581e0) (3) Data frame handling\nI0110 14:15:14.260642    1540 log.go:172] (0xc0004581e0) (3) Data frame sent\nI0110 14:15:14.467022    1540 log.go:172] (0xc000334420) Data frame received for 1\nI0110 14:15:14.467150    1540 log.go:172] (0xc00022e780) (1) Data frame handling\nI0110 14:15:14.467180    1540 log.go:172] (0xc00022e780) (1) Data frame sent\nI0110 14:15:14.467192    1540 log.go:172] (0xc000334420) (0xc00022e780) Stream removed, broadcasting: 1\nI0110 14:15:14.467582    1540 log.go:172] (0xc000334420) (0xc00022e820) Stream removed, broadcasting: 5\nI0110 14:15:14.467702    1540 log.go:172] (0xc000334420) (0xc0004581e0) Stream removed, broadcasting: 3\nI0110 14:15:14.467752    1540 log.go:172] 
(0xc000334420) Go away received\nI0110 14:15:14.467776    1540 log.go:172] (0xc000334420) (0xc00022e780) Stream removed, broadcasting: 1\nI0110 14:15:14.467790    1540 log.go:172] (0xc000334420) (0xc0004581e0) Stream removed, broadcasting: 3\nI0110 14:15:14.467794    1540 log.go:172] (0xc000334420) (0xc00022e820) Stream removed, broadcasting: 5\n"
Jan 10 14:15:14.473: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 14:15:14.473: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 14:15:14.507: INFO: Found 2 stateful pods, waiting for 3
Jan 10 14:15:24.541: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 14:15:24.541: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 14:15:24.541: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 10 14:15:34.527: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 14:15:34.527: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 14:15:34.527: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
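"Scaled up in order" refers to the StatefulSet ordinal guarantee under the default OrderedReady pod management policy: pods are created as ss-0, ss-1, ss-2, each waiting for its predecessor to be Running and Ready, and are deleted in reverse ordinal order. A small sketch of that ordering rule (illustrative only, not the controller's code):

```python
def scale_order(name, current, target):
    """Ordinals a StatefulSet touches: ascending on scale-up,
    descending on scale-down (OrderedReady policy)."""
    if target > current:
        return [f"{name}-{i}" for i in range(current, target)]
    return [f"{name}-{i}" for i in range(current - 1, target - 1, -1)]

# Scale up 1 -> 3: ss-1 then ss-2, each after its predecessor is Ready.
assert scale_order("ss", 1, 3) == ["ss-1", "ss-2"]
# Scale down 3 -> 0: highest ordinal first.
assert scale_order("ss", 3, 0) == ["ss-2", "ss-1", "ss-0"]
```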
STEP: Scale down will halt with unhealthy stateful pod
Jan 10 14:15:34.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 14:15:35.048: INFO: stderr: "I0110 14:15:34.715877    1561 log.go:172] (0xc0007d60b0) (0xc00039e640) Create stream\nI0110 14:15:34.715974    1561 log.go:172] (0xc0007d60b0) (0xc00039e640) Stream added, broadcasting: 1\nI0110 14:15:34.721311    1561 log.go:172] (0xc0007d60b0) Reply frame received for 1\nI0110 14:15:34.721336    1561 log.go:172] (0xc0007d60b0) (0xc0007ae000) Create stream\nI0110 14:15:34.721343    1561 log.go:172] (0xc0007d60b0) (0xc0007ae000) Stream added, broadcasting: 3\nI0110 14:15:34.723401    1561 log.go:172] (0xc0007d60b0) Reply frame received for 3\nI0110 14:15:34.723426    1561 log.go:172] (0xc0007d60b0) (0xc00039e6e0) Create stream\nI0110 14:15:34.723437    1561 log.go:172] (0xc0007d60b0) (0xc00039e6e0) Stream added, broadcasting: 5\nI0110 14:15:34.725718    1561 log.go:172] (0xc0007d60b0) Reply frame received for 5\nI0110 14:15:34.902022    1561 log.go:172] (0xc0007d60b0) Data frame received for 5\nI0110 14:15:34.902425    1561 log.go:172] (0xc00039e6e0) (5) Data frame handling\nI0110 14:15:34.902462    1561 log.go:172] (0xc00039e6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0110 14:15:34.902488    1561 log.go:172] (0xc0007d60b0) Data frame received for 3\nI0110 14:15:34.902510    1561 log.go:172] (0xc0007ae000) (3) Data frame handling\nI0110 14:15:34.902521    1561 log.go:172] (0xc0007ae000) (3) Data frame sent\nI0110 14:15:35.043537    1561 log.go:172] (0xc0007d60b0) Data frame received for 1\nI0110 14:15:35.043834    1561 log.go:172] (0xc00039e640) (1) Data frame handling\nI0110 14:15:35.043853    1561 log.go:172] (0xc00039e640) (1) Data frame sent\nI0110 14:15:35.043866    1561 log.go:172] (0xc0007d60b0) (0xc00039e6e0) Stream removed, broadcasting: 5\nI0110 14:15:35.043924    1561 log.go:172] (0xc0007d60b0) (0xc0007ae000) Stream removed, broadcasting: 3\nI0110 14:15:35.043948    1561 log.go:172] (0xc0007d60b0) (0xc00039e640) Stream removed, broadcasting: 1\nI0110 14:15:35.044199    1561 log.go:172] 
(0xc0007d60b0) (0xc00039e640) Stream removed, broadcasting: 1\nI0110 14:15:35.044208    1561 log.go:172] (0xc0007d60b0) (0xc0007ae000) Stream removed, broadcasting: 3\nI0110 14:15:35.044214    1561 log.go:172] (0xc0007d60b0) (0xc00039e6e0) Stream removed, broadcasting: 5\nI0110 14:15:35.044283    1561 log.go:172] (0xc0007d60b0) Go away received\n"
Jan 10 14:15:35.049: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 14:15:35.049: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 14:15:35.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 14:15:35.462: INFO: stderr: "I0110 14:15:35.199576    1574 log.go:172] (0xc0008ba0b0) (0xc0008b0640) Create stream\nI0110 14:15:35.199658    1574 log.go:172] (0xc0008ba0b0) (0xc0008b0640) Stream added, broadcasting: 1\nI0110 14:15:35.203180    1574 log.go:172] (0xc0008ba0b0) Reply frame received for 1\nI0110 14:15:35.203211    1574 log.go:172] (0xc0008ba0b0) (0xc000806000) Create stream\nI0110 14:15:35.203224    1574 log.go:172] (0xc0008ba0b0) (0xc000806000) Stream added, broadcasting: 3\nI0110 14:15:35.204174    1574 log.go:172] (0xc0008ba0b0) Reply frame received for 3\nI0110 14:15:35.204201    1574 log.go:172] (0xc0008ba0b0) (0xc0006601e0) Create stream\nI0110 14:15:35.204210    1574 log.go:172] (0xc0008ba0b0) (0xc0006601e0) Stream added, broadcasting: 5\nI0110 14:15:35.205268    1574 log.go:172] (0xc0008ba0b0) Reply frame received for 5\nI0110 14:15:35.292339    1574 log.go:172] (0xc0008ba0b0) Data frame received for 5\nI0110 14:15:35.292390    1574 log.go:172] (0xc0006601e0) (5) Data frame handling\nI0110 14:15:35.292415    1574 log.go:172] (0xc0006601e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0110 14:15:35.332826    1574 log.go:172] (0xc0008ba0b0) Data frame received for 3\nI0110 14:15:35.332907    1574 log.go:172] (0xc000806000) (3) Data frame handling\nI0110 14:15:35.332933    1574 log.go:172] (0xc000806000) (3) Data frame sent\nI0110 14:15:35.455414    1574 log.go:172] (0xc0008ba0b0) (0xc000806000) Stream removed, broadcasting: 3\nI0110 14:15:35.455587    1574 log.go:172] (0xc0008ba0b0) Data frame received for 1\nI0110 14:15:35.455624    1574 log.go:172] (0xc0008ba0b0) (0xc0006601e0) Stream removed, broadcasting: 5\nI0110 14:15:35.455716    1574 log.go:172] (0xc0008b0640) (1) Data frame handling\nI0110 14:15:35.455790    1574 log.go:172] (0xc0008b0640) (1) Data frame sent\nI0110 14:15:35.455832    1574 log.go:172] (0xc0008ba0b0) (0xc0008b0640) Stream removed, broadcasting: 1\nI0110 14:15:35.455864    1574 log.go:172] 
(0xc0008ba0b0) Go away received\nI0110 14:15:35.456230    1574 log.go:172] (0xc0008ba0b0) (0xc0008b0640) Stream removed, broadcasting: 1\nI0110 14:15:35.456244    1574 log.go:172] (0xc0008ba0b0) (0xc000806000) Stream removed, broadcasting: 3\nI0110 14:15:35.456252    1574 log.go:172] (0xc0008ba0b0) (0xc0006601e0) Stream removed, broadcasting: 5\n"
Jan 10 14:15:35.462: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 14:15:35.462: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 14:15:35.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 14:15:36.134: INFO: stderr: "I0110 14:15:35.775578    1593 log.go:172] (0xc000130dc0) (0xc00035c820) Create stream\nI0110 14:15:35.776379    1593 log.go:172] (0xc000130dc0) (0xc00035c820) Stream added, broadcasting: 1\nI0110 14:15:35.796762    1593 log.go:172] (0xc000130dc0) Reply frame received for 1\nI0110 14:15:35.796842    1593 log.go:172] (0xc000130dc0) (0xc0007a8000) Create stream\nI0110 14:15:35.797928    1593 log.go:172] (0xc000130dc0) (0xc0007a8000) Stream added, broadcasting: 3\nI0110 14:15:35.806611    1593 log.go:172] (0xc000130dc0) Reply frame received for 3\nI0110 14:15:35.807127    1593 log.go:172] (0xc000130dc0) (0xc0007621e0) Create stream\nI0110 14:15:35.807183    1593 log.go:172] (0xc000130dc0) (0xc0007621e0) Stream added, broadcasting: 5\nI0110 14:15:35.809374    1593 log.go:172] (0xc000130dc0) Reply frame received for 5\nI0110 14:15:35.980220    1593 log.go:172] (0xc000130dc0) Data frame received for 5\nI0110 14:15:35.980715    1593 log.go:172] (0xc0007621e0) (5) Data frame handling\nI0110 14:15:35.980788    1593 log.go:172] (0xc0007621e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0110 14:15:36.012647    1593 log.go:172] (0xc000130dc0) Data frame received for 3\nI0110 14:15:36.012713    1593 log.go:172] (0xc0007a8000) (3) Data frame handling\nI0110 14:15:36.012731    1593 log.go:172] (0xc0007a8000) (3) Data frame sent\nI0110 14:15:36.127292    1593 log.go:172] (0xc000130dc0) (0xc0007a8000) Stream removed, broadcasting: 3\nI0110 14:15:36.127502    1593 log.go:172] (0xc000130dc0) (0xc0007621e0) Stream removed, broadcasting: 5\nI0110 14:15:36.127555    1593 log.go:172] (0xc000130dc0) Data frame received for 1\nI0110 14:15:36.127576    1593 log.go:172] (0xc00035c820) (1) Data frame handling\nI0110 14:15:36.127598    1593 log.go:172] (0xc00035c820) (1) Data frame sent\nI0110 14:15:36.127621    1593 log.go:172] (0xc000130dc0) (0xc00035c820) Stream removed, broadcasting: 1\nI0110 14:15:36.127717    1593 log.go:172] 
(0xc000130dc0) Go away received\nI0110 14:15:36.128152    1593 log.go:172] (0xc000130dc0) (0xc00035c820) Stream removed, broadcasting: 1\nI0110 14:15:36.128215    1593 log.go:172] (0xc000130dc0) (0xc0007a8000) Stream removed, broadcasting: 3\nI0110 14:15:36.128229    1593 log.go:172] (0xc000130dc0) (0xc0007621e0) Stream removed, broadcasting: 5\n"
Jan 10 14:15:36.134: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 14:15:36.134: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 14:15:36.134: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 14:15:36.142: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 10 14:15:46.162: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 14:15:46.162: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 14:15:46.163: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 14:15:46.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999613s
Jan 10 14:15:47.254: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.935158311s
Jan 10 14:15:48.281: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.924530805s
Jan 10 14:15:49.297: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.897240363s
Jan 10 14:15:50.309: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.88228159s
Jan 10 14:15:51.317: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.870356515s
Jan 10 14:15:52.337: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.861553453s
Jan 10 14:15:53.349: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.841857939s
Jan 10 14:15:54.361: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.829474329s
Jan 10 14:15:55.374: INFO: Verifying statefulset ss doesn't scale past 3 for another 817.548741ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-3028
Jan 10 14:15:56.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:15:57.088: INFO: stderr: "I0110 14:15:56.705083    1613 log.go:172] (0xc000116d10) (0xc00065aa00) Create stream\nI0110 14:15:56.705325    1613 log.go:172] (0xc000116d10) (0xc00065aa00) Stream added, broadcasting: 1\nI0110 14:15:56.741315    1613 log.go:172] (0xc000116d10) Reply frame received for 1\nI0110 14:15:56.741561    1613 log.go:172] (0xc000116d10) (0xc000898000) Create stream\nI0110 14:15:56.741574    1613 log.go:172] (0xc000116d10) (0xc000898000) Stream added, broadcasting: 3\nI0110 14:15:56.752706    1613 log.go:172] (0xc000116d10) Reply frame received for 3\nI0110 14:15:56.752789    1613 log.go:172] (0xc000116d10) (0xc00065aaa0) Create stream\nI0110 14:15:56.752807    1613 log.go:172] (0xc000116d10) (0xc00065aaa0) Stream added, broadcasting: 5\nI0110 14:15:56.758412    1613 log.go:172] (0xc000116d10) Reply frame received for 5\nI0110 14:15:56.918883    1613 log.go:172] (0xc000116d10) Data frame received for 5\nI0110 14:15:56.918960    1613 log.go:172] (0xc00065aaa0) (5) Data frame handling\nI0110 14:15:56.918980    1613 log.go:172] (0xc00065aaa0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0110 14:15:56.923424    1613 log.go:172] (0xc000116d10) Data frame received for 3\nI0110 14:15:56.923444    1613 log.go:172] (0xc000898000) (3) Data frame handling\nI0110 14:15:56.923456    1613 log.go:172] (0xc000898000) (3) Data frame sent\nI0110 14:15:57.079505    1613 log.go:172] (0xc000116d10) Data frame received for 1\nI0110 14:15:57.079611    1613 log.go:172] (0xc000116d10) (0xc000898000) Stream removed, broadcasting: 3\nI0110 14:15:57.079667    1613 log.go:172] (0xc00065aa00) (1) Data frame handling\nI0110 14:15:57.079681    1613 log.go:172] (0xc00065aa00) (1) Data frame sent\nI0110 14:15:57.079692    1613 log.go:172] (0xc000116d10) (0xc00065aa00) Stream removed, broadcasting: 1\nI0110 14:15:57.080042    1613 log.go:172] (0xc000116d10) (0xc00065aaa0) Stream removed, broadcasting: 5\nI0110 14:15:57.080065    1613 log.go:172] 
(0xc000116d10) (0xc00065aa00) Stream removed, broadcasting: 1\nI0110 14:15:57.080074    1613 log.go:172] (0xc000116d10) (0xc000898000) Stream removed, broadcasting: 3\nI0110 14:15:57.080080    1613 log.go:172] (0xc000116d10) (0xc00065aaa0) Stream removed, broadcasting: 5\nI0110 14:15:57.080355    1613 log.go:172] (0xc000116d10) Go away received\n"
Jan 10 14:15:57.089: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 14:15:57.089: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 14:15:57.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:15:57.437: INFO: stderr: "I0110 14:15:57.222867    1628 log.go:172] (0xc0008e6370) (0xc000598780) Create stream\nI0110 14:15:57.222974    1628 log.go:172] (0xc0008e6370) (0xc000598780) Stream added, broadcasting: 1\nI0110 14:15:57.225357    1628 log.go:172] (0xc0008e6370) Reply frame received for 1\nI0110 14:15:57.225389    1628 log.go:172] (0xc0008e6370) (0xc0008ae000) Create stream\nI0110 14:15:57.225401    1628 log.go:172] (0xc0008e6370) (0xc0008ae000) Stream added, broadcasting: 3\nI0110 14:15:57.226359    1628 log.go:172] (0xc0008e6370) Reply frame received for 3\nI0110 14:15:57.226385    1628 log.go:172] (0xc0008e6370) (0xc000686000) Create stream\nI0110 14:15:57.226397    1628 log.go:172] (0xc0008e6370) (0xc000686000) Stream added, broadcasting: 5\nI0110 14:15:57.227654    1628 log.go:172] (0xc0008e6370) Reply frame received for 5\nI0110 14:15:57.341828    1628 log.go:172] (0xc0008e6370) Data frame received for 3\nI0110 14:15:57.341934    1628 log.go:172] (0xc0008ae000) (3) Data frame handling\nI0110 14:15:57.341945    1628 log.go:172] (0xc0008ae000) (3) Data frame sent\nI0110 14:15:57.341971    1628 log.go:172] (0xc0008e6370) Data frame received for 5\nI0110 14:15:57.341975    1628 log.go:172] (0xc000686000) (5) Data frame handling\nI0110 14:15:57.341984    1628 log.go:172] (0xc000686000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0110 14:15:57.430614    1628 log.go:172] (0xc0008e6370) Data frame received for 1\nI0110 14:15:57.430680    1628 log.go:172] (0xc000598780) (1) Data frame handling\nI0110 14:15:57.430691    1628 log.go:172] (0xc000598780) (1) Data frame sent\nI0110 14:15:57.430701    1628 log.go:172] (0xc0008e6370) (0xc000598780) Stream removed, broadcasting: 1\nI0110 14:15:57.430975    1628 log.go:172] (0xc0008e6370) (0xc0008ae000) Stream removed, broadcasting: 3\nI0110 14:15:57.431035    1628 log.go:172] (0xc0008e6370) (0xc000686000) Stream removed, broadcasting: 5\nI0110 14:15:57.431098    1628 log.go:172] 
(0xc0008e6370) Go away received\nI0110 14:15:57.431291    1628 log.go:172] (0xc0008e6370) (0xc000598780) Stream removed, broadcasting: 1\nI0110 14:15:57.431311    1628 log.go:172] (0xc0008e6370) (0xc0008ae000) Stream removed, broadcasting: 3\nI0110 14:15:57.431322    1628 log.go:172] (0xc0008e6370) (0xc000686000) Stream removed, broadcasting: 5\n"
Jan 10 14:15:57.438: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 14:15:57.438: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 14:15:57.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:15:57.899: INFO: rc: 126
Jan 10 14:15:57.900: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown
 I0110 14:15:57.643577    1646 log.go:172] (0xc00090e4d0) (0xc0004dcaa0) Create stream
I0110 14:15:57.643744    1646 log.go:172] (0xc00090e4d0) (0xc0004dcaa0) Stream added, broadcasting: 1
I0110 14:15:57.655884    1646 log.go:172] (0xc00090e4d0) Reply frame received for 1
I0110 14:15:57.655937    1646 log.go:172] (0xc00090e4d0) (0xc000734000) Create stream
I0110 14:15:57.655945    1646 log.go:172] (0xc00090e4d0) (0xc000734000) Stream added, broadcasting: 3
I0110 14:15:57.657385    1646 log.go:172] (0xc00090e4d0) Reply frame received for 3
I0110 14:15:57.657412    1646 log.go:172] (0xc00090e4d0) (0xc0007fa000) Create stream
I0110 14:15:57.657424    1646 log.go:172] (0xc00090e4d0) (0xc0007fa000) Stream added, broadcasting: 5
I0110 14:15:57.658467    1646 log.go:172] (0xc00090e4d0) Reply frame received for 5
I0110 14:15:57.891563    1646 log.go:172] (0xc00090e4d0) Data frame received for 3
I0110 14:15:57.891633    1646 log.go:172] (0xc000734000) (3) Data frame handling
I0110 14:15:57.891650    1646 log.go:172] (0xc000734000) (3) Data frame sent
I0110 14:15:57.895109    1646 log.go:172] (0xc00090e4d0) (0xc000734000) Stream removed, broadcasting: 3
I0110 14:15:57.895327    1646 log.go:172] (0xc00090e4d0) Data frame received for 1
I0110 14:15:57.895361    1646 log.go:172] (0xc00090e4d0) (0xc0007fa000) Stream removed, broadcasting: 5
I0110 14:15:57.895383    1646 log.go:172] (0xc0004dcaa0) (1) Data frame handling
I0110 14:15:57.895392    1646 log.go:172] (0xc0004dcaa0) (1) Data frame sent
I0110 14:15:57.895403    1646 log.go:172] (0xc00090e4d0) (0xc0004dcaa0) Stream removed, broadcasting: 1
I0110 14:15:57.895413    1646 log.go:172] (0xc00090e4d0) Go away received
I0110 14:15:57.895875    1646 log.go:172] (0xc00090e4d0) (0xc0004dcaa0) Stream removed, broadcasting: 1
I0110 14:15:57.895888    1646 log.go:172] (0xc00090e4d0) (0xc000734000) Stream removed, broadcasting: 3
I0110 14:15:57.895895    1646 log.go:172] (0xc00090e4d0) (0xc0007fa000) Stream removed, broadcasting: 5
command terminated with exit code 126
 []  0xc0024fa840 exit status 126   true [0xc001f4a260 0xc001f4a278 0xc001f4a290] [0xc001f4a260 0xc001f4a278 0xc001f4a290] [0xc001f4a270 0xc001f4a288] [0xba6c50 0xba6c50] 0xc00160cc00 }:
Command stdout:
OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown

stderr:
I0110 14:15:57.643577    1646 log.go:172] (0xc00090e4d0) (0xc0004dcaa0) Create stream
I0110 14:15:57.643744    1646 log.go:172] (0xc00090e4d0) (0xc0004dcaa0) Stream added, broadcasting: 1
I0110 14:15:57.655884    1646 log.go:172] (0xc00090e4d0) Reply frame received for 1
I0110 14:15:57.655937    1646 log.go:172] (0xc00090e4d0) (0xc000734000) Create stream
I0110 14:15:57.655945    1646 log.go:172] (0xc00090e4d0) (0xc000734000) Stream added, broadcasting: 3
I0110 14:15:57.657385    1646 log.go:172] (0xc00090e4d0) Reply frame received for 3
I0110 14:15:57.657412    1646 log.go:172] (0xc00090e4d0) (0xc0007fa000) Create stream
I0110 14:15:57.657424    1646 log.go:172] (0xc00090e4d0) (0xc0007fa000) Stream added, broadcasting: 5
I0110 14:15:57.658467    1646 log.go:172] (0xc00090e4d0) Reply frame received for 5
I0110 14:15:57.891563    1646 log.go:172] (0xc00090e4d0) Data frame received for 3
I0110 14:15:57.891633    1646 log.go:172] (0xc000734000) (3) Data frame handling
I0110 14:15:57.891650    1646 log.go:172] (0xc000734000) (3) Data frame sent
I0110 14:15:57.895109    1646 log.go:172] (0xc00090e4d0) (0xc000734000) Stream removed, broadcasting: 3
I0110 14:15:57.895327    1646 log.go:172] (0xc00090e4d0) Data frame received for 1
I0110 14:15:57.895361    1646 log.go:172] (0xc00090e4d0) (0xc0007fa000) Stream removed, broadcasting: 5
I0110 14:15:57.895383    1646 log.go:172] (0xc0004dcaa0) (1) Data frame handling
I0110 14:15:57.895392    1646 log.go:172] (0xc0004dcaa0) (1) Data frame sent
I0110 14:15:57.895403    1646 log.go:172] (0xc00090e4d0) (0xc0004dcaa0) Stream removed, broadcasting: 1
I0110 14:15:57.895413    1646 log.go:172] (0xc00090e4d0) Go away received
I0110 14:15:57.895875    1646 log.go:172] (0xc00090e4d0) (0xc0004dcaa0) Stream removed, broadcasting: 1
I0110 14:15:57.895888    1646 log.go:172] (0xc00090e4d0) (0xc000734000) Stream removed, broadcasting: 3
I0110 14:15:57.895895    1646 log.go:172] (0xc00090e4d0) (0xc0007fa000) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126
Jan 10 14:16:07.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:16:08.067: INFO: rc: 1
Jan 10 14:16:08.068: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0016ed8f0 exit status 1   true [0xc0006d91f8 0xc0006d92c0 0xc0006d94c8] [0xc0006d91f8 0xc0006d92c0 0xc0006d94c8] [0xc0006d9298 0xc0006d93e8] [0xba6c50 0xba6c50] 0xc001cd6d80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:16:18.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:16:18.160: INFO: rc: 1
Jan 10 14:16:18.160: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0029447e0 exit status 1   true [0xc0028dc438 0xc0028dc478 0xc0028dc4a8] [0xc0028dc438 0xc0028dc478 0xc0028dc4a8] [0xc0028dc458 0xc0028dc498] [0xba6c50 0xba6c50] 0xc0022d4120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:16:28.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:16:28.322: INFO: rc: 1
Jan 10 14:16:28.322: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0024fa960 exit status 1   true [0xc001f4a298 0xc001f4a2b0 0xc001f4a2c8] [0xc001f4a298 0xc001f4a2b0 0xc001f4a2c8] [0xc001f4a2a8 0xc001f4a2c0] [0xba6c50 0xba6c50] 0xc00160d020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:16:38.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:16:38.468: INFO: rc: 1
Jan 10 14:16:38.468: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0029448d0 exit status 1   true [0xc0028dc4b8 0xc0028dc4e8 0xc0028dc518] [0xc0028dc4b8 0xc0028dc4e8 0xc0028dc518] [0xc0028dc4d8 0xc0028dc508] [0xba6c50 0xba6c50] 0xc0022d46c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:16:48.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:16:48.598: INFO: rc: 1
Jan 10 14:16:48.599: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0024faa50 exit status 1   true [0xc001f4a2d0 0xc001f4a2e8 0xc001f4a300] [0xc001f4a2d0 0xc001f4a2e8 0xc001f4a300] [0xc001f4a2e0 0xc001f4a2f8] [0xba6c50 0xba6c50] 0xc00160d7a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:16:58.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:16:58.731: INFO: rc: 1
Jan 10 14:16:58.731: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0029449c0 exit status 1   true [0xc0028dc528 0xc0028dc580 0xc0028dc5b0] [0xc0028dc528 0xc0028dc580 0xc0028dc5b0] [0xc0028dc560 0xc0028dc5a0] [0xba6c50 0xba6c50] 0xc0022d4c00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:17:08.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:17:08.893: INFO: rc: 1
Jan 10 14:17:08.894: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001fc00c0 exit status 1   true [0xc0001862d8 0xc0028dc020 0xc0028dc058] [0xc0001862d8 0xc0028dc020 0xc0028dc058] [0xc0028dc010 0xc0028dc048] [0xba6c50 0xba6c50] 0xc00250c5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:17:18.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:17:19.022: INFO: rc: 1
Jan 10 14:17:19.023: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001c5e090 exit status 1   true [0xc0027c0000 0xc0027c0018 0xc0027c0030] [0xc0027c0000 0xc0027c0018 0xc0027c0030] [0xc0027c0010 0xc0027c0028] [0xba6c50 0xba6c50] 0xc001c192c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:17:29.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:17:29.178: INFO: rc: 1
Jan 10 14:17:29.179: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001c5e150 exit status 1   true [0xc0027c0038 0xc0027c0050 0xc0027c0068] [0xc0027c0038 0xc0027c0050 0xc0027c0068] [0xc0027c0048 0xc0027c0060] [0xba6c50 0xba6c50] 0xc002254060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:17:39.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:17:39.322: INFO: rc: 1
Jan 10 14:17:39.323: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001fc01b0 exit status 1   true [0xc0028dc078 0xc0028dc0a8 0xc0028dc0e0] [0xc0028dc078 0xc0028dc0a8 0xc0028dc0e0] [0xc0028dc098 0xc0028dc0c8] [0xba6c50 0xba6c50] 0xc00250cde0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:17:49.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:17:49.469: INFO: rc: 1
Jan 10 14:17:49.469: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0032cc0f0 exit status 1   true [0xc001f4a000 0xc001f4a018 0xc001f4a030] [0xc001f4a000 0xc001f4a018 0xc001f4a030] [0xc001f4a010 0xc001f4a028] [0xba6c50 0xba6c50] 0xc0025f4300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:17:59.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:17:59.608: INFO: rc: 1
Jan 10 14:17:59.608: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001c5e240 exit status 1   true [0xc0027c0070 0xc0027c0088 0xc0027c00a0] [0xc0027c0070 0xc0027c0088 0xc0027c00a0] [0xc0027c0080 0xc0027c0098] [0xba6c50 0xba6c50] 0xc0022545a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:18:09.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:18:09.775: INFO: rc: 1
Jan 10 14:18:09.775: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001c5e300 exit status 1   true [0xc0027c00a8 0xc0027c00c0 0xc0027c00d8] [0xc0027c00a8 0xc0027c00c0 0xc0027c00d8] [0xc0027c00b8 0xc0027c00d0] [0xba6c50 0xba6c50] 0xc002254a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:18:19.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:18:19.887: INFO: rc: 1
Jan 10 14:18:19.887: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001fc02a0 exit status 1   true [0xc0028dc0f0 0xc0028dc140 0xc0028dc170] [0xc0028dc0f0 0xc0028dc140 0xc0028dc170] [0xc0028dc128 0xc0028dc160] [0xba6c50 0xba6c50] 0xc00250d620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:18:29.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:18:29.984: INFO: rc: 1
Jan 10 14:18:29.985: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0032cc210 exit status 1   true [0xc001f4a038 0xc001f4a050 0xc001f4a068] [0xc001f4a038 0xc001f4a050 0xc001f4a068] [0xc001f4a048 0xc001f4a060] [0xba6c50 0xba6c50] 0xc0025f48a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:18:39.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:18:40.137: INFO: rc: 1
Jan 10 14:18:40.138: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0032cc2d0 exit status 1   true [0xc001f4a070 0xc001f4a088 0xc001f4a0a0] [0xc001f4a070 0xc001f4a088 0xc001f4a0a0] [0xc001f4a080 0xc001f4a098] [0xba6c50 0xba6c50] 0xc0025f4d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:18:50.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:18:50.264: INFO: rc: 1
Jan 10 14:18:50.265: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001c5e420 exit status 1   true [0xc0027c00e0 0xc0027c00f8 0xc0027c0110] [0xc0027c00e0 0xc0027c00f8 0xc0027c0110] [0xc0027c00f0 0xc0027c0108] [0xba6c50 0xba6c50] 0xc002254d80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:19:00.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:19:00.380: INFO: rc: 1
Jan 10 14:19:00.381: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0032cc390 exit status 1   true [0xc001f4a0a8 0xc001f4a0c0 0xc001f4a0d8] [0xc001f4a0a8 0xc001f4a0c0 0xc001f4a0d8] [0xc001f4a0b8 0xc001f4a0d0] [0xba6c50 0xba6c50] 0xc0025f51a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:19:10.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:19:10.525: INFO: rc: 1
Jan 10 14:19:10.526: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0028441b0 exit status 1   true [0xc0001862d8 0xc001f4a010 0xc001f4a028] [0xc0001862d8 0xc001f4a010 0xc001f4a028] [0xc001f4a008 0xc001f4a020] [0xba6c50 0xba6c50] 0xc001c192c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:19:20.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:19:20.622: INFO: rc: 1
Jan 10 14:19:20.623: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0032cc0c0 exit status 1   true [0xc0027c0000 0xc0027c0018 0xc0027c0030] [0xc0027c0000 0xc0027c0018 0xc0027c0030] [0xc0027c0010 0xc0027c0028] [0xba6c50 0xba6c50] 0xc0025f4300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:19:30.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:19:30.750: INFO: rc: 1
Jan 10 14:19:30.751: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001fc00f0 exit status 1   true [0xc0028dc000 0xc0028dc030 0xc0028dc078] [0xc0028dc000 0xc0028dc030 0xc0028dc078] [0xc0028dc020 0xc0028dc058] [0xba6c50 0xba6c50] 0xc002254420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:19:40.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:19:40.903: INFO: rc: 1
Jan 10 14:19:40.904: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001c5e0f0 exit status 1   true [0xc0006d8000 0xc0006d80b8 0xc0006d82a8] [0xc0006d8000 0xc0006d80b8 0xc0006d82a8] [0xc0006d8098 0xc0006d81b0] [0xba6c50 0xba6c50] 0xc00250c5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:19:50.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:19:53.367: INFO: rc: 1
Jan 10 14:19:53.368: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001fc0240 exit status 1   true [0xc0028dc088 0xc0028dc0b8 0xc0028dc0f0] [0xc0028dc088 0xc0028dc0b8 0xc0028dc0f0] [0xc0028dc0a8 0xc0028dc0e0] [0xba6c50 0xba6c50] 0xc002254900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:20:03.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:20:03.532: INFO: rc: 1
Jan 10 14:20:03.532: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002844270 exit status 1   true [0xc001f4a030 0xc001f4a048 0xc001f4a060] [0xc001f4a030 0xc001f4a048 0xc001f4a060] [0xc001f4a040 0xc001f4a058] [0xba6c50 0xba6c50] 0xc0022d4120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:20:13.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:20:13.698: INFO: rc: 1
Jan 10 14:20:13.699: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001fc0360 exit status 1   true [0xc0028dc118 0xc0028dc150 0xc0028dc188] [0xc0028dc118 0xc0028dc150 0xc0028dc188] [0xc0028dc140 0xc0028dc170] [0xba6c50 0xba6c50] 0xc002254c00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:20:23.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:20:23.839: INFO: rc: 1
Jan 10 14:20:23.840: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0032cc240 exit status 1   true [0xc0027c0038 0xc0027c0050 0xc0027c0068] [0xc0027c0038 0xc0027c0050 0xc0027c0068] [0xc0027c0048 0xc0027c0060] [0xba6c50 0xba6c50] 0xc0025f48a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:20:33.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:20:33.981: INFO: rc: 1
Jan 10 14:20:33.981: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0032cc330 exit status 1   true [0xc0027c0070 0xc0027c0088 0xc0027c00a0] [0xc0027c0070 0xc0027c0088 0xc0027c00a0] [0xc0027c0080 0xc0027c0098] [0xba6c50 0xba6c50] 0xc0025f4d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:20:43.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:20:44.094: INFO: rc: 1
Jan 10 14:20:44.094: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0032cc450 exit status 1   true [0xc0027c00a8 0xc0027c00c0 0xc0027c00d8] [0xc0027c00a8 0xc0027c00c0 0xc0027c00d8] [0xc0027c00b8 0xc0027c00d0] [0xba6c50 0xba6c50] 0xc0025f51a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:20:54.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:20:54.215: INFO: rc: 1
Jan 10 14:20:54.216: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002844360 exit status 1   true [0xc001f4a068 0xc001f4a080 0xc001f4a098] [0xc001f4a068 0xc001f4a080 0xc001f4a098] [0xc001f4a078 0xc001f4a090] [0xba6c50 0xba6c50] 0xc0022d46c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 10 14:21:04.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 14:21:04.372: INFO: rc: 1
Jan 10 14:21:04.372: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jan 10 14:21:04.372: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 10 14:21:04.393: INFO: Deleting all statefulset in ns statefulset-3028
Jan 10 14:21:04.398: INFO: Scaling statefulset ss to 0
Jan 10 14:21:04.410: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 14:21:04.414: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:21:04.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3028" for this suite.
Jan 10 14:21:10.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:21:10.649: INFO: namespace statefulset-3028 deletion completed in 6.203206637s

• [SLOW TEST:387.777 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:21:10.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-cf5a4141-cd0b-4688-9fec-52ad501c8796
STEP: Creating a pod to test consume configMaps
Jan 10 14:21:10.784: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b0d55fa-ff5e-449b-b174-28d6a92b8223" in namespace "configmap-1679" to be "success or failure"
Jan 10 14:21:10.789: INFO: Pod "pod-configmaps-9b0d55fa-ff5e-449b-b174-28d6a92b8223": Phase="Pending", Reason="", readiness=false. Elapsed: 3.898151ms
Jan 10 14:21:12.818: INFO: Pod "pod-configmaps-9b0d55fa-ff5e-449b-b174-28d6a92b8223": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033660543s
Jan 10 14:21:14.829: INFO: Pod "pod-configmaps-9b0d55fa-ff5e-449b-b174-28d6a92b8223": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044594503s
Jan 10 14:21:16.859: INFO: Pod "pod-configmaps-9b0d55fa-ff5e-449b-b174-28d6a92b8223": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074435991s
Jan 10 14:21:18.879: INFO: Pod "pod-configmaps-9b0d55fa-ff5e-449b-b174-28d6a92b8223": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094233903s
STEP: Saw pod success
Jan 10 14:21:18.879: INFO: Pod "pod-configmaps-9b0d55fa-ff5e-449b-b174-28d6a92b8223" satisfied condition "success or failure"
Jan 10 14:21:18.898: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9b0d55fa-ff5e-449b-b174-28d6a92b8223 container configmap-volume-test: 
STEP: delete the pod
Jan 10 14:21:19.093: INFO: Waiting for pod pod-configmaps-9b0d55fa-ff5e-449b-b174-28d6a92b8223 to disappear
Jan 10 14:21:19.139: INFO: Pod pod-configmaps-9b0d55fa-ff5e-449b-b174-28d6a92b8223 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:21:19.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1679" for this suite.
Jan 10 14:21:25.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:21:25.329: INFO: namespace configmap-1679 deletion completed in 6.17802569s

• [SLOW TEST:14.680 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:21:25.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7581/configmap-test-fa2b0062-5862-4b40-90d1-cac79e908b56
STEP: Creating a pod to test consume configMaps
Jan 10 14:21:25.488: INFO: Waiting up to 5m0s for pod "pod-configmaps-7433b703-d373-4a51-86e3-9dd45a5e572b" in namespace "configmap-7581" to be "success or failure"
Jan 10 14:21:25.501: INFO: Pod "pod-configmaps-7433b703-d373-4a51-86e3-9dd45a5e572b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.987409ms
Jan 10 14:21:27.513: INFO: Pod "pod-configmaps-7433b703-d373-4a51-86e3-9dd45a5e572b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025032489s
Jan 10 14:21:29.528: INFO: Pod "pod-configmaps-7433b703-d373-4a51-86e3-9dd45a5e572b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039657224s
Jan 10 14:21:31.537: INFO: Pod "pod-configmaps-7433b703-d373-4a51-86e3-9dd45a5e572b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048406571s
Jan 10 14:21:33.544: INFO: Pod "pod-configmaps-7433b703-d373-4a51-86e3-9dd45a5e572b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056176322s
Jan 10 14:21:35.559: INFO: Pod "pod-configmaps-7433b703-d373-4a51-86e3-9dd45a5e572b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070472312s
STEP: Saw pod success
Jan 10 14:21:35.559: INFO: Pod "pod-configmaps-7433b703-d373-4a51-86e3-9dd45a5e572b" satisfied condition "success or failure"
Jan 10 14:21:35.565: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7433b703-d373-4a51-86e3-9dd45a5e572b container env-test: 
STEP: delete the pod
Jan 10 14:21:35.666: INFO: Waiting for pod pod-configmaps-7433b703-d373-4a51-86e3-9dd45a5e572b to disappear
Jan 10 14:21:35.674: INFO: Pod pod-configmaps-7433b703-d373-4a51-86e3-9dd45a5e572b no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:21:35.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7581" for this suite.
Jan 10 14:21:41.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:21:41.962: INFO: namespace configmap-7581 deletion completed in 6.277910749s

• [SLOW TEST:16.631 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
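Editorial note: the env-var variant above injects a single ConfigMap key into the container environment with `configMapKeyRef`. A minimal sketch (hypothetical names; the suite's real fixture names differ):

```yaml
# Sketch of the "consumable via environment variable" flow: the value of
# key data-1 becomes the environment variable CONFIG_DATA_1.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test        # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test            # matches the container name logged above
    image: busybox:1.29       # hypothetical image choice
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```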
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:21:41.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 10 14:21:42.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2481'
Jan 10 14:21:42.274: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 10 14:21:42.275: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 10 14:21:42.354: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 10 14:21:42.363: INFO: scanned /root for discovery docs: 
Jan 10 14:21:42.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2481'
Jan 10 14:22:04.449: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 10 14:22:04.450: INFO: stdout: "Created e2e-test-nginx-rc-906471d18b79ca19c62c7f93b87e6687\nScaling up e2e-test-nginx-rc-906471d18b79ca19c62c7f93b87e6687 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-906471d18b79ca19c62c7f93b87e6687 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-906471d18b79ca19c62c7f93b87e6687 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 10 14:22:04.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2481'
Jan 10 14:22:04.604: INFO: stderr: ""
Jan 10 14:22:04.605: INFO: stdout: "e2e-test-nginx-rc-906471d18b79ca19c62c7f93b87e6687-dx7d5 e2e-test-nginx-rc-zz7xw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 10 14:22:09.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2481'
Jan 10 14:22:09.761: INFO: stderr: ""
Jan 10 14:22:09.761: INFO: stdout: "e2e-test-nginx-rc-906471d18b79ca19c62c7f93b87e6687-dx7d5 "
Jan 10 14:22:09.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-906471d18b79ca19c62c7f93b87e6687-dx7d5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2481'
Jan 10 14:22:09.896: INFO: stderr: ""
Jan 10 14:22:09.896: INFO: stdout: "true"
Jan 10 14:22:09.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-906471d18b79ca19c62c7f93b87e6687-dx7d5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2481'
Jan 10 14:22:09.975: INFO: stderr: ""
Jan 10 14:22:09.975: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 10 14:22:09.975: INFO: e2e-test-nginx-rc-906471d18b79ca19c62c7f93b87e6687-dx7d5 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan 10 14:22:09.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2481'
Jan 10 14:22:10.056: INFO: stderr: ""
Jan 10 14:22:10.057: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:22:10.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2481" for this suite.
Jan 10 14:22:16.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:22:16.380: INFO: namespace kubectl-2481 deletion completed in 6.207636059s

• [SLOW TEST:34.418 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
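Editorial note: the stderr captured above flags both commands used by this test as deprecated (`kubectl run --generator=run/v1` and `kubectl rolling-update`, the latter removed in kubectl 1.18). A sketch of the old flow and its Deployment-based replacement; the strings only mirror the log and nothing here contacts a cluster (the Deployment name is hypothetical):

```shell
# Strings only -- a dry illustration of the commands in the log above.
ns="kubectl-2481"                                # namespace from the log
image="docker.io/library/nginx:1.14-alpine"      # image from the log

# Deprecated flow exercised by the test: RC created via run/v1 generator,
# then updated in place with rolling-update.
create_cmd="kubectl run e2e-test-nginx-rc --image=$image --generator=run/v1 --namespace=$ns"
update_cmd="kubectl rolling-update e2e-test-nginx-rc --update-period=1s --image=$image --namespace=$ns"

# Modern equivalent (hypothetical Deployment name): Deployments perform
# rolling updates natively, driven by `kubectl set image` or `kubectl rollout`.
modern_cmd="kubectl set image deployment/e2e-test-nginx nginx=$image --namespace=$ns"

printf '%s\n%s\n%s\n' "$create_cmd" "$update_cmd" "$modern_cmd"
```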
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:22:16.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-f40a63e7-5265-4c2e-837b-a49722b7e361
STEP: Creating secret with name s-test-opt-upd-2e2e2591-94d8-47a4-987a-b904743e1dcc
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f40a63e7-5265-4c2e-837b-a49722b7e361
STEP: Updating secret s-test-opt-upd-2e2e2591-94d8-47a4-987a-b904743e1dcc
STEP: Creating secret with name s-test-opt-create-7dea3ea8-18ef-4cf9-a3eb-f6449b014535
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:23:46.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8539" for this suite.
Jan 10 14:24:10.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:24:10.613: INFO: namespace projected-8539 deletion completed in 24.196011826s

• [SLOW TEST:114.232 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
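Editorial note: the projected-secret test above creates and deletes secrets while the pod runs, relying on `optional: true` so the volume tolerates missing sources and picks up late-created ones. A minimal sketch of such a projected volume (pod and image are hypothetical; secret names follow the log's prefixes without their random suffixes):

```yaml
# Sketch of the "optional updates" scenario: s-test-opt-del is deleted and
# s-test-opt-create appears only after the pod starts, so both are optional.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets   # hypothetical name
spec:
  containers:
  - name: secret-volume-test
    image: busybox:1.29          # hypothetical image choice
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected-secret-volumes
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del
          optional: true         # tolerated when the secret is deleted
      - secret:
          name: s-test-opt-upd   # required; updates propagate into the mount
      - secret:
          name: s-test-opt-create
          optional: true         # materializes once the secret is created
```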
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:24:10.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-46dn
STEP: Creating a pod to test atomic-volume-subpath
Jan 10 14:24:10.731: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-46dn" in namespace "subpath-9281" to be "success or failure"
Jan 10 14:24:10.742: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Pending", Reason="", readiness=false. Elapsed: 11.239416ms
Jan 10 14:24:12.751: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019885632s
Jan 10 14:24:14.761: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030037334s
Jan 10 14:24:16.773: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042027103s
Jan 10 14:24:18.782: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Running", Reason="", readiness=true. Elapsed: 8.051666753s
Jan 10 14:24:20.792: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Running", Reason="", readiness=true. Elapsed: 10.061633147s
Jan 10 14:24:22.801: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Running", Reason="", readiness=true. Elapsed: 12.070443547s
Jan 10 14:24:24.810: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Running", Reason="", readiness=true. Elapsed: 14.078786497s
Jan 10 14:24:26.820: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Running", Reason="", readiness=true. Elapsed: 16.089681835s
Jan 10 14:24:28.830: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Running", Reason="", readiness=true. Elapsed: 18.09901732s
Jan 10 14:24:30.843: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Running", Reason="", readiness=true. Elapsed: 20.112482131s
Jan 10 14:24:32.853: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Running", Reason="", readiness=true. Elapsed: 22.12253332s
Jan 10 14:24:34.865: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Running", Reason="", readiness=true. Elapsed: 24.134622427s
Jan 10 14:24:37.342: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Running", Reason="", readiness=true. Elapsed: 26.610825379s
Jan 10 14:24:39.352: INFO: Pod "pod-subpath-test-secret-46dn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.621137337s
STEP: Saw pod success
Jan 10 14:24:39.352: INFO: Pod "pod-subpath-test-secret-46dn" satisfied condition "success or failure"
Jan 10 14:24:39.359: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-46dn container test-container-subpath-secret-46dn: 
STEP: delete the pod
Jan 10 14:24:39.573: INFO: Waiting for pod pod-subpath-test-secret-46dn to disappear
Jan 10 14:24:39.656: INFO: Pod pod-subpath-test-secret-46dn no longer exists
STEP: Deleting pod pod-subpath-test-secret-46dn
Jan 10 14:24:39.656: INFO: Deleting pod "pod-subpath-test-secret-46dn" in namespace "subpath-9281"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:24:39.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9281" for this suite.
Jan 10 14:24:45.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:24:45.828: INFO: namespace subpath-9281 deletion completed in 6.160799322s

• [SLOW TEST:35.214 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
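Editorial note: the atomic-writer subPath test above mounts a single key of a secret at a sub-path rather than the whole volume. A minimal sketch (all names are hypothetical):

```yaml
# Sketch of a secret subPath mount: only the key "content" is exposed,
# as a single file at /test-file inside the container.
apiVersion: v1
kind: Secret
metadata:
  name: my-secret               # hypothetical name
stringData:
  content: hello-from-secret
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-secret
    image: busybox:1.29          # hypothetical image choice
    command: ["sh", "-c", "cat /test-file"]
    volumeMounts:
    - name: secret-vol
      mountPath: /test-file
      subPath: content           # mount one key, not the whole secret
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret
```

Note that, unlike a whole-volume mount, a subPath mount does not receive live updates when the secret changes, which is part of what the atomic-writer tests probe.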
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:24:45.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 10 14:24:45.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5681'
Jan 10 14:24:46.078: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 10 14:24:46.078: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 10 14:24:46.132: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-zwlp5]
Jan 10 14:24:46.132: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-zwlp5" in namespace "kubectl-5681" to be "running and ready"
Jan 10 14:24:46.160: INFO: Pod "e2e-test-nginx-rc-zwlp5": Phase="Pending", Reason="", readiness=false. Elapsed: 27.610522ms
Jan 10 14:24:48.169: INFO: Pod "e2e-test-nginx-rc-zwlp5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037254008s
Jan 10 14:24:50.181: INFO: Pod "e2e-test-nginx-rc-zwlp5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049331891s
Jan 10 14:24:52.201: INFO: Pod "e2e-test-nginx-rc-zwlp5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068911947s
Jan 10 14:24:54.212: INFO: Pod "e2e-test-nginx-rc-zwlp5": Phase="Running", Reason="", readiness=true. Elapsed: 8.079942475s
Jan 10 14:24:54.212: INFO: Pod "e2e-test-nginx-rc-zwlp5" satisfied condition "running and ready"
Jan 10 14:24:54.212: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-zwlp5]
Jan 10 14:24:54.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5681'
Jan 10 14:24:54.362: INFO: stderr: ""
Jan 10 14:24:54.363: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan 10 14:24:54.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5681'
Jan 10 14:24:54.458: INFO: stderr: ""
Jan 10 14:24:54.458: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:24:54.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5681" for this suite.
Jan 10 14:25:16.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:25:16.619: INFO: namespace kubectl-5681 deletion completed in 22.152843413s

• [SLOW TEST:30.790 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
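Editorial note: the deprecation warning captured above points away from `--generator=run/v1` (which creates a ReplicationController) toward `run-pod/v1` or `kubectl create`. A dry sketch of the three forms; strings only, nothing contacts a cluster (the non-RC resource names are hypothetical):

```shell
# Strings only -- illustrating the generator migration named in the stderr above.
image="docker.io/library/nginx:1.14-alpine"      # image from the log

# Deprecated: run/v1 creates a ReplicationController named after the run.
old_cmd="kubectl run e2e-test-nginx-rc --image=$image --generator=run/v1"

# Replacements suggested by the warning:
pod_cmd="kubectl run e2e-test-nginx --image=$image --generator=run-pod/v1"  # bare Pod
dep_cmd="kubectl create deployment e2e-test-nginx --image=$image"           # managed workload

printf '%s\n%s\n%s\n' "$old_cmd" "$pod_cmd" "$dep_cmd"
```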
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:25:16.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 10 14:25:16.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7972'
Jan 10 14:25:17.153: INFO: stderr: ""
Jan 10 14:25:17.153: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 10 14:25:17.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7972'
Jan 10 14:25:17.278: INFO: stderr: ""
Jan 10 14:25:17.278: INFO: stdout: "update-demo-nautilus-j56fw update-demo-nautilus-pxhmz "
Jan 10 14:25:17.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j56fw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:25:17.394: INFO: stderr: ""
Jan 10 14:25:17.394: INFO: stdout: ""
Jan 10 14:25:17.394: INFO: update-demo-nautilus-j56fw is created but not running
Jan 10 14:25:22.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7972'
Jan 10 14:25:22.480: INFO: stderr: ""
Jan 10 14:25:22.481: INFO: stdout: "update-demo-nautilus-j56fw update-demo-nautilus-pxhmz "
Jan 10 14:25:22.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j56fw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:25:22.592: INFO: stderr: ""
Jan 10 14:25:22.592: INFO: stdout: ""
Jan 10 14:25:22.592: INFO: update-demo-nautilus-j56fw is created but not running
Jan 10 14:25:27.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7972'
Jan 10 14:25:27.752: INFO: stderr: ""
Jan 10 14:25:27.752: INFO: stdout: "update-demo-nautilus-j56fw update-demo-nautilus-pxhmz "
Jan 10 14:25:27.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j56fw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:25:27.904: INFO: stderr: ""
Jan 10 14:25:27.904: INFO: stdout: ""
Jan 10 14:25:27.904: INFO: update-demo-nautilus-j56fw is created but not running
Jan 10 14:25:32.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7972'
Jan 10 14:25:33.105: INFO: stderr: ""
Jan 10 14:25:33.105: INFO: stdout: "update-demo-nautilus-j56fw update-demo-nautilus-pxhmz "
Jan 10 14:25:33.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j56fw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:25:33.204: INFO: stderr: ""
Jan 10 14:25:33.204: INFO: stdout: "true"
Jan 10 14:25:33.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j56fw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:25:33.284: INFO: stderr: ""
Jan 10 14:25:33.284: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 14:25:33.285: INFO: validating pod update-demo-nautilus-j56fw
Jan 10 14:25:33.294: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 14:25:33.295: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 14:25:33.295: INFO: update-demo-nautilus-j56fw is verified up and running
Jan 10 14:25:33.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pxhmz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:25:33.408: INFO: stderr: ""
Jan 10 14:25:33.408: INFO: stdout: "true"
Jan 10 14:25:33.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pxhmz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:25:33.527: INFO: stderr: ""
Jan 10 14:25:33.527: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 14:25:33.527: INFO: validating pod update-demo-nautilus-pxhmz
Jan 10 14:25:33.558: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 14:25:33.558: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 14:25:33.558: INFO: update-demo-nautilus-pxhmz is verified up and running
STEP: scaling down the replication controller
Jan 10 14:25:33.561: INFO: scanned /root for discovery docs: 
Jan 10 14:25:33.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7972'
Jan 10 14:25:34.777: INFO: stderr: ""
Jan 10 14:25:34.777: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 10 14:25:34.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7972'
Jan 10 14:25:34.886: INFO: stderr: ""
Jan 10 14:25:34.887: INFO: stdout: "update-demo-nautilus-j56fw update-demo-nautilus-pxhmz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 10 14:25:39.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7972'
Jan 10 14:25:40.265: INFO: stderr: ""
Jan 10 14:25:40.265: INFO: stdout: "update-demo-nautilus-j56fw update-demo-nautilus-pxhmz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 10 14:25:45.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7972'
Jan 10 14:25:45.457: INFO: stderr: ""
Jan 10 14:25:45.457: INFO: stdout: "update-demo-nautilus-j56fw update-demo-nautilus-pxhmz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 10 14:25:50.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7972'
Jan 10 14:25:50.586: INFO: stderr: ""
Jan 10 14:25:50.586: INFO: stdout: "update-demo-nautilus-j56fw "
Jan 10 14:25:50.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j56fw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:25:50.698: INFO: stderr: ""
Jan 10 14:25:50.698: INFO: stdout: "true"
Jan 10 14:25:50.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j56fw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:25:50.782: INFO: stderr: ""
Jan 10 14:25:50.782: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 14:25:50.782: INFO: validating pod update-demo-nautilus-j56fw
Jan 10 14:25:50.787: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 14:25:50.787: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 14:25:50.788: INFO: update-demo-nautilus-j56fw is verified up and running
STEP: scaling up the replication controller
Jan 10 14:25:50.790: INFO: scanned /root for discovery docs: 
Jan 10 14:25:50.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7972'
Jan 10 14:25:51.997: INFO: stderr: ""
Jan 10 14:25:51.997: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 10 14:25:51.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7972'
Jan 10 14:25:52.102: INFO: stderr: ""
Jan 10 14:25:52.102: INFO: stdout: "update-demo-nautilus-c4vl5 update-demo-nautilus-j56fw "
Jan 10 14:25:52.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4vl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:25:52.260: INFO: stderr: ""
Jan 10 14:25:52.260: INFO: stdout: ""
Jan 10 14:25:52.260: INFO: update-demo-nautilus-c4vl5 is created but not running
Jan 10 14:25:57.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7972'
Jan 10 14:25:57.433: INFO: stderr: ""
Jan 10 14:25:57.434: INFO: stdout: "update-demo-nautilus-c4vl5 update-demo-nautilus-j56fw "
Jan 10 14:25:57.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4vl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:25:57.617: INFO: stderr: ""
Jan 10 14:25:57.617: INFO: stdout: ""
Jan 10 14:25:57.618: INFO: update-demo-nautilus-c4vl5 is created but not running
Jan 10 14:26:02.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7972'
Jan 10 14:26:02.759: INFO: stderr: ""
Jan 10 14:26:02.760: INFO: stdout: "update-demo-nautilus-c4vl5 update-demo-nautilus-j56fw "
Jan 10 14:26:02.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4vl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:26:02.877: INFO: stderr: ""
Jan 10 14:26:02.878: INFO: stdout: "true"
Jan 10 14:26:02.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4vl5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:26:03.019: INFO: stderr: ""
Jan 10 14:26:03.019: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 14:26:03.019: INFO: validating pod update-demo-nautilus-c4vl5
Jan 10 14:26:03.033: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 14:26:03.033: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 14:26:03.033: INFO: update-demo-nautilus-c4vl5 is verified up and running
Jan 10 14:26:03.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j56fw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:26:03.161: INFO: stderr: ""
Jan 10 14:26:03.161: INFO: stdout: "true"
Jan 10 14:26:03.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j56fw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7972'
Jan 10 14:26:03.236: INFO: stderr: ""
Jan 10 14:26:03.236: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 14:26:03.236: INFO: validating pod update-demo-nautilus-j56fw
Jan 10 14:26:03.241: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 14:26:03.241: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 14:26:03.241: INFO: update-demo-nautilus-j56fw is verified up and running
STEP: using delete to clean up resources
Jan 10 14:26:03.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7972'
Jan 10 14:26:03.318: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 14:26:03.318: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 10 14:26:03.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7972'
Jan 10 14:26:03.474: INFO: stderr: "No resources found.\n"
Jan 10 14:26:03.474: INFO: stdout: ""
Jan 10 14:26:03.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7972 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 10 14:26:03.710: INFO: stderr: ""
Jan 10 14:26:03.710: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:26:03.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7972" for this suite.
Jan 10 14:26:25.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:26:25.995: INFO: namespace kubectl-7972 deletion completed in 22.265946816s

• [SLOW TEST:69.376 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
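The scaling spec above drives a replication controller labeled `name=update-demo` running the nautilus image, scales it with `kubectl scale rc update-demo-nautilus --replicas=2`, and polls each pod with the go-templates shown in the log. A minimal sketch of such a controller, assuming field values beyond the image and label (replica count, port) that the log does not show:

```yaml
# Hedged sketch, not the test's exact manifest: a replication controller
# matching the name=update-demo label and image seen in the log above.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo          # label the test's go-templates filter on
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

Scaling it as the test does (`kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m`) leaves the existing pod in place and adds a second, which is why the log keeps polling `update-demo-nautilus-c4vl5` until its container state reports `running`.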
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:26:25.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:27:26.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-181" for this suite.
Jan 10 14:27:48.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:27:48.290: INFO: namespace container-probe-181 deletion completed in 22.178691576s

• [SLOW TEST:82.294 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
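The probe spec above relies on the distinction between readiness and liveness: a failing readiness probe removes the pod from service endpoints but never restarts the container. A minimal sketch of a pod that reproduces this, assuming an image and probe command the log does not show:

```yaml
# Hedged sketch: a pod whose readiness probe always fails.
# It should never report Ready, and restartCount should stay 0,
# because readiness failures (unlike liveness failures) do not
# trigger a container restart.
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: never-ready
    image: busybox            # assumed image
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always exits non-zero
      initialDelaySeconds: 5
      periodSeconds: 5
```

Watching such a pod with `kubectl get pod never-ready` would show `READY 0/1` with `RESTARTS 0` for its whole lifetime, which is what the spec asserts over its 60-second observation window.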
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:27:48.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 14:27:48.361: INFO: Waiting up to 5m0s for pod "downwardapi-volume-627bc51e-ed96-48f2-b7cd-39ae9b3f037c" in namespace "projected-8458" to be "success or failure"
Jan 10 14:27:48.380: INFO: Pod "downwardapi-volume-627bc51e-ed96-48f2-b7cd-39ae9b3f037c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.13587ms
Jan 10 14:27:50.394: INFO: Pod "downwardapi-volume-627bc51e-ed96-48f2-b7cd-39ae9b3f037c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032441145s
Jan 10 14:27:52.402: INFO: Pod "downwardapi-volume-627bc51e-ed96-48f2-b7cd-39ae9b3f037c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040606798s
Jan 10 14:27:54.411: INFO: Pod "downwardapi-volume-627bc51e-ed96-48f2-b7cd-39ae9b3f037c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049549314s
Jan 10 14:27:56.420: INFO: Pod "downwardapi-volume-627bc51e-ed96-48f2-b7cd-39ae9b3f037c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059215971s
STEP: Saw pod success
Jan 10 14:27:56.421: INFO: Pod "downwardapi-volume-627bc51e-ed96-48f2-b7cd-39ae9b3f037c" satisfied condition "success or failure"
Jan 10 14:27:56.423: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-627bc51e-ed96-48f2-b7cd-39ae9b3f037c container client-container: 
STEP: delete the pod
Jan 10 14:27:56.474: INFO: Waiting for pod downwardapi-volume-627bc51e-ed96-48f2-b7cd-39ae9b3f037c to disappear
Jan 10 14:27:56.481: INFO: Pod downwardapi-volume-627bc51e-ed96-48f2-b7cd-39ae9b3f037c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:27:56.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8458" for this suite.
Jan 10 14:28:02.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:28:02.686: INFO: namespace projected-8458 deletion completed in 6.196026554s

• [SLOW TEST:14.396 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
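The projected downward API spec above creates a pod whose container sets no CPU limit, then reads the `limits.cpu` resource field from a projected volume; the kubelet substitutes the node's allocatable CPU as the default. A minimal sketch of such a pod, assuming image, command, and file path (the log only shows the container name `client-container`):

```yaml
# Hedged sketch: projected downward API volume exposing limits.cpu.
# With no limit declared on the container, the value written to the
# file defaults to the node's allocatable CPU.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox            # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```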
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:28:02.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 14:28:02.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-506a7b35-728c-48e5-be6a-82e889c844d0" in namespace "downward-api-4729" to be "success or failure"
Jan 10 14:28:02.869: INFO: Pod "downwardapi-volume-506a7b35-728c-48e5-be6a-82e889c844d0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.977894ms
Jan 10 14:28:04.879: INFO: Pod "downwardapi-volume-506a7b35-728c-48e5-be6a-82e889c844d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021231198s
Jan 10 14:28:06.894: INFO: Pod "downwardapi-volume-506a7b35-728c-48e5-be6a-82e889c844d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036075346s
Jan 10 14:28:08.912: INFO: Pod "downwardapi-volume-506a7b35-728c-48e5-be6a-82e889c844d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054050695s
Jan 10 14:28:10.926: INFO: Pod "downwardapi-volume-506a7b35-728c-48e5-be6a-82e889c844d0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067877014s
Jan 10 14:28:12.940: INFO: Pod "downwardapi-volume-506a7b35-728c-48e5-be6a-82e889c844d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082088537s
STEP: Saw pod success
Jan 10 14:28:12.940: INFO: Pod "downwardapi-volume-506a7b35-728c-48e5-be6a-82e889c844d0" satisfied condition "success or failure"
Jan 10 14:28:12.946: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-506a7b35-728c-48e5-be6a-82e889c844d0 container client-container: 
STEP: delete the pod
Jan 10 14:28:13.033: INFO: Waiting for pod downwardapi-volume-506a7b35-728c-48e5-be6a-82e889c844d0 to disappear
Jan 10 14:28:13.040: INFO: Pod downwardapi-volume-506a7b35-728c-48e5-be6a-82e889c844d0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:28:13.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4729" for this suite.
Jan 10 14:28:19.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:28:19.372: INFO: namespace downward-api-4729 deletion completed in 6.324905819s

• [SLOW TEST:16.685 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
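The non-projected variant above is the same idea with a plain `downwardAPI` volume and an explicit CPU limit on the container, so the file reflects the declared limit rather than a node-allocatable default. A sketch, with the limit value and paths assumed:

```yaml
# Hedged sketch: plain downwardAPI volume (not projected) with an
# explicit CPU limit, so the exposed value is the container's own limit.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu-limit
spec:
  containers:
  - name: client-container
    image: busybox            # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m             # assumed value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```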
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:28:19.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 14:28:19.471: INFO: Creating ReplicaSet my-hostname-basic-20886df7-ea4d-4146-be74-ae01fd035133
Jan 10 14:28:19.490: INFO: Pod name my-hostname-basic-20886df7-ea4d-4146-be74-ae01fd035133: Found 0 pods out of 1
Jan 10 14:28:24.510: INFO: Pod name my-hostname-basic-20886df7-ea4d-4146-be74-ae01fd035133: Found 1 pods out of 1
Jan 10 14:28:24.510: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-20886df7-ea4d-4146-be74-ae01fd035133" is running
Jan 10 14:28:28.536: INFO: Pod "my-hostname-basic-20886df7-ea4d-4146-be74-ae01fd035133-m8m98" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 14:28:19 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 14:28:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-20886df7-ea4d-4146-be74-ae01fd035133]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 14:28:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-20886df7-ea4d-4146-be74-ae01fd035133]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 14:28:19 +0000 UTC Reason: Message:}])
Jan 10 14:28:28.536: INFO: Trying to dial the pod
Jan 10 14:28:33.588: INFO: Controller my-hostname-basic-20886df7-ea4d-4146-be74-ae01fd035133: Got expected result from replica 1 [my-hostname-basic-20886df7-ea4d-4146-be74-ae01fd035133-m8m98]: "my-hostname-basic-20886df7-ea4d-4146-be74-ae01fd035133-m8m98", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:28:33.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8397" for this suite.
Jan 10 14:28:39.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:28:39.773: INFO: namespace replicaset-8397 deletion completed in 6.17442592s

• [SLOW TEST:20.400 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
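The ReplicaSet spec above creates one replica of an image that serves its own pod hostname, waits for the pod to run, then dials it and checks the response matches the pod name (the `Got expected result from replica 1` line). A sketch of such a ReplicaSet, with the image tag and port assumed:

```yaml
# Hedged sketch: a ReplicaSet serving each replica's hostname, so a
# request to the pod returns the pod's own name for verification.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # assumed tag
        ports:
        - containerPort: 9376    # assumed port
```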
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:28:39.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 10 14:28:39.932: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 10 14:28:44.971: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:28:46.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5158" for this suite.
Jan 10 14:28:52.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:28:52.544: INFO: namespace replication-controller-5158 deletion completed in 6.342599058s

• [SLOW TEST:12.771 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
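The "release" behaviour above hinges on label selection: a replication controller owns exactly the pods whose labels match its selector, so relabeling a pod takes it out of the match set and the controller releases it (clearing its ownerReference). A conceptual sketch of that matching rule, not the e2e framework's or controller-manager's actual code:

```python
# Conceptual sketch of RC label selection: an RC "owns" pods whose labels
# satisfy every key/value in its selector; relabeling a pod releases it.

def matching_pods(selector, pods):
    """Return names of pods whose labels satisfy every selector key/value."""
    return [name for name, labels in pods.items()
            if all(labels.get(k) == v for k, v in selector.items())]

selector = {"name": "pod-release"}
pods = {"pod-release-abc12": {"name": "pod-release"}}   # hypothetical pod name

# Initially the selector matches, so the RC counts the pod as its replica.
print(matching_pods(selector, pods))    # ['pod-release-abc12']

# The test then changes the pod's label; the selector no longer matches,
# the RC releases the pod and spins up a replacement to restore replicas=1.
pods["pod-release-abc12"]["name"] = "released"
print(matching_pods(selector, pods))    # []
```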
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:28:52.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-0ab3ba01-ec16-45bb-8a28-bda6267a1373
STEP: Creating a pod to test consume configMaps
Jan 10 14:28:53.041: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-64e40d45-f4f2-4c9e-b21a-5023870d97df" in namespace "projected-8492" to be "success or failure"
Jan 10 14:28:53.051: INFO: Pod "pod-projected-configmaps-64e40d45-f4f2-4c9e-b21a-5023870d97df": Phase="Pending", Reason="", readiness=false. Elapsed: 9.798686ms
Jan 10 14:28:55.061: INFO: Pod "pod-projected-configmaps-64e40d45-f4f2-4c9e-b21a-5023870d97df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019324187s
Jan 10 14:28:57.072: INFO: Pod "pod-projected-configmaps-64e40d45-f4f2-4c9e-b21a-5023870d97df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030245957s
Jan 10 14:28:59.104: INFO: Pod "pod-projected-configmaps-64e40d45-f4f2-4c9e-b21a-5023870d97df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062341949s
Jan 10 14:29:01.123: INFO: Pod "pod-projected-configmaps-64e40d45-f4f2-4c9e-b21a-5023870d97df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081802428s
Jan 10 14:29:03.135: INFO: Pod "pod-projected-configmaps-64e40d45-f4f2-4c9e-b21a-5023870d97df": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093751878s
Jan 10 14:29:05.149: INFO: Pod "pod-projected-configmaps-64e40d45-f4f2-4c9e-b21a-5023870d97df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.107729571s
STEP: Saw pod success
Jan 10 14:29:05.149: INFO: Pod "pod-projected-configmaps-64e40d45-f4f2-4c9e-b21a-5023870d97df" satisfied condition "success or failure"
Jan 10 14:29:05.173: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-64e40d45-f4f2-4c9e-b21a-5023870d97df container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 14:29:05.256: INFO: Waiting for pod pod-projected-configmaps-64e40d45-f4f2-4c9e-b21a-5023870d97df to disappear
Jan 10 14:29:05.280: INFO: Pod pod-projected-configmaps-64e40d45-f4f2-4c9e-b21a-5023870d97df no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:29:05.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8492" for this suite.
Jan 10 14:29:11.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:29:11.492: INFO: namespace projected-8492 deletion completed in 6.203212978s

• [SLOW TEST:18.945 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
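The projected configMap spec above mounts a ConfigMap through a projected volume with an `items` mapping (key renamed to a different file path) and runs the container as a non-root user. A sketch of the two objects involved, with key names, paths, and UID assumed:

```yaml
# Hedged sketch: ConfigMap consumed via a projected volume with a key->path
# mapping, read by a non-root container.
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume-map
data:
  data-1: value-1               # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  securityContext:
    runAsUser: 1000             # non-root; assumed UID
  containers:
  - name: projected-configmap-volume-test
    image: busybox              # assumed image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-1   # the "mapping" the spec name refers to
```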
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:29:11.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 10 14:29:11.637: INFO: Number of nodes with available pods: 0
Jan 10 14:29:11.638: INFO: Node iruya-node is running more than one daemon pod
Jan 10 14:29:12.656: INFO: Number of nodes with available pods: 0
Jan 10 14:29:12.656: INFO: Node iruya-node is running more than one daemon pod
Jan 10 14:29:13.981: INFO: Number of nodes with available pods: 0
Jan 10 14:29:13.981: INFO: Node iruya-node is running more than one daemon pod
Jan 10 14:29:14.661: INFO: Number of nodes with available pods: 0
Jan 10 14:29:14.661: INFO: Node iruya-node is running more than one daemon pod
Jan 10 14:29:15.654: INFO: Number of nodes with available pods: 0
Jan 10 14:29:15.654: INFO: Node iruya-node is running more than one daemon pod
Jan 10 14:29:17.786: INFO: Number of nodes with available pods: 0
Jan 10 14:29:17.786: INFO: Node iruya-node is running more than one daemon pod
Jan 10 14:29:20.042: INFO: Number of nodes with available pods: 0
Jan 10 14:29:20.042: INFO: Node iruya-node is running more than one daemon pod
Jan 10 14:29:20.655: INFO: Number of nodes with available pods: 0
Jan 10 14:29:20.655: INFO: Node iruya-node is running more than one daemon pod
Jan 10 14:29:21.656: INFO: Number of nodes with available pods: 2
Jan 10 14:29:21.656: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 10 14:29:21.724: INFO: Number of nodes with available pods: 1
Jan 10 14:29:21.725: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 14:29:22.740: INFO: Number of nodes with available pods: 1
Jan 10 14:29:22.740: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 14:29:23.754: INFO: Number of nodes with available pods: 1
Jan 10 14:29:23.755: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 14:29:24.742: INFO: Number of nodes with available pods: 1
Jan 10 14:29:24.742: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 14:29:25.762: INFO: Number of nodes with available pods: 1
Jan 10 14:29:25.762: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 14:29:26.744: INFO: Number of nodes with available pods: 1
Jan 10 14:29:26.744: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 14:29:27.742: INFO: Number of nodes with available pods: 1
Jan 10 14:29:27.742: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 14:29:28.742: INFO: Number of nodes with available pods: 1
Jan 10 14:29:28.742: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 14:29:29.737: INFO: Number of nodes with available pods: 1
Jan 10 14:29:29.737: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 14:29:30.743: INFO: Number of nodes with available pods: 1
Jan 10 14:29:30.743: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 14:29:32.764: INFO: Number of nodes with available pods: 1
Jan 10 14:29:32.764: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 14:29:33.755: INFO: Number of nodes with available pods: 1
Jan 10 14:29:33.755: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 14:29:34.739: INFO: Number of nodes with available pods: 2
Jan 10 14:29:34.740: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9960, will wait for the garbage collector to delete the pods
Jan 10 14:29:34.808: INFO: Deleting DaemonSet.extensions daemon-set took: 12.12522ms
Jan 10 14:29:35.108: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.570738ms
Jan 10 14:29:42.123: INFO: Number of nodes with available pods: 0
Jan 10 14:29:42.123: INFO: Number of running nodes: 0, number of available pods: 0
Jan 10 14:29:42.127: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9960/daemonsets","resourceVersion":"20035536"},"items":null}

Jan 10 14:29:42.130: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9960/pods","resourceVersion":"20035536"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:29:42.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9960" for this suite.
Jan 10 14:29:48.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:29:48.305: INFO: namespace daemonsets-9960 deletion completed in 6.159921903s

• [SLOW TEST:36.812 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
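The DaemonSet spec above checks two behaviours visible in the log: one pod comes up per schedulable node (2/2), and after a daemon pod is deleted the controller revives it on the same node. A sketch of a simple DaemonSet of that shape, with label key and image assumed:

```yaml
# Hedged sketch: a simple DaemonSet; the controller places exactly one
# matching pod on every eligible node and recreates any that are deleted.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # assumed label key
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: busybox             # assumed image
        command: ["sleep", "3600"]
```

Deleting one of its pods (as the "Stop a daemon pod" step does) drops the available count to 1, and the controller schedules a replacement until the log again reports `Number of running nodes: 2, number of available pods: 2`.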
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:29:48.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:29:48.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9601" for this suite.
Jan 10 14:30:12.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:30:12.638: INFO: namespace pods-9601 deletion completed in 24.187270271s

• [SLOW TEST:24.332 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
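The QoS spec above submits a pod and verifies the API server populated `status.qosClass`. The class is derived from the resource spec: requests equal to limits for every container gives `Guaranteed`, requests below limits (or only some set) gives `Burstable`, and none at all gives `BestEffort`. A sketch of a pod that would land in the Guaranteed class, with image and quantities assumed:

```yaml
# Hedged sketch: requests == limits for all resources on all containers
# => the pod's status.qosClass is set to Guaranteed.
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed
spec:
  containers:
  - name: app
    image: busybox             # assumed image
    command: ["sleep", "600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
```

`kubectl get pod qos-guaranteed -o jsonpath='{.status.qosClass}'` would then print `Guaranteed`, which is the kind of check the "verifying QOS class is set on the pod" step performs.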
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:30:12.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4675
I0110 14:30:12.709717       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4675, replica count: 1
I0110 14:30:13.761777       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 14:30:14.763152       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 14:30:15.763932       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 14:30:16.764671       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 14:30:17.765565       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 14:30:18.766293       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 14:30:19.766930       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 10 14:30:19.933: INFO: Created: latency-svc-8s46f
Jan 10 14:30:19.938: INFO: Got endpoints: latency-svc-8s46f [71.064544ms]
Jan 10 14:30:20.023: INFO: Created: latency-svc-5mzrg
Jan 10 14:30:20.051: INFO: Got endpoints: latency-svc-5mzrg [112.733001ms]
Jan 10 14:30:20.084: INFO: Created: latency-svc-f48bz
Jan 10 14:30:20.105: INFO: Got endpoints: latency-svc-f48bz [166.871646ms]
Jan 10 14:30:20.195: INFO: Created: latency-svc-r2t4h
Jan 10 14:30:20.203: INFO: Got endpoints: latency-svc-r2t4h [264.974341ms]
Jan 10 14:30:20.245: INFO: Created: latency-svc-5gwkp
Jan 10 14:30:20.262: INFO: Got endpoints: latency-svc-5gwkp [323.721166ms]
Jan 10 14:30:20.428: INFO: Created: latency-svc-csz4t
Jan 10 14:30:20.442: INFO: Got endpoints: latency-svc-csz4t [503.134629ms]
Jan 10 14:30:20.491: INFO: Created: latency-svc-kzrwx
Jan 10 14:30:20.505: INFO: Got endpoints: latency-svc-kzrwx [565.711432ms]
Jan 10 14:30:20.603: INFO: Created: latency-svc-j99w4
Jan 10 14:30:20.618: INFO: Got endpoints: latency-svc-j99w4 [678.481149ms]
Jan 10 14:30:20.660: INFO: Created: latency-svc-n7v7x
Jan 10 14:30:20.667: INFO: Got endpoints: latency-svc-n7v7x [727.547907ms]
Jan 10 14:30:20.764: INFO: Created: latency-svc-q7kbq
Jan 10 14:30:20.773: INFO: Got endpoints: latency-svc-q7kbq [833.690911ms]
Jan 10 14:30:20.816: INFO: Created: latency-svc-pvd9n
Jan 10 14:30:20.823: INFO: Got endpoints: latency-svc-pvd9n [882.918578ms]
Jan 10 14:30:20.915: INFO: Created: latency-svc-r5z6x
Jan 10 14:30:20.956: INFO: Got endpoints: latency-svc-r5z6x [1.01657619s]
Jan 10 14:30:20.970: INFO: Created: latency-svc-rx22b
Jan 10 14:30:20.980: INFO: Got endpoints: latency-svc-rx22b [1.04096802s]
Jan 10 14:30:21.085: INFO: Created: latency-svc-qpq7r
Jan 10 14:30:21.089: INFO: Got endpoints: latency-svc-qpq7r [1.149556002s]
Jan 10 14:30:21.151: INFO: Created: latency-svc-spwj6
Jan 10 14:30:21.167: INFO: Got endpoints: latency-svc-spwj6 [1.22782484s]
Jan 10 14:30:21.289: INFO: Created: latency-svc-bt8px
Jan 10 14:30:21.298: INFO: Got endpoints: latency-svc-bt8px [1.358741333s]
Jan 10 14:30:21.362: INFO: Created: latency-svc-5mb7t
Jan 10 14:30:21.493: INFO: Got endpoints: latency-svc-5mb7t [1.441804987s]
Jan 10 14:30:21.502: INFO: Created: latency-svc-75z8c
Jan 10 14:30:21.542: INFO: Got endpoints: latency-svc-75z8c [1.436225468s]
Jan 10 14:30:21.689: INFO: Created: latency-svc-7wbqn
Jan 10 14:30:21.702: INFO: Got endpoints: latency-svc-7wbqn [1.498638379s]
Jan 10 14:30:21.751: INFO: Created: latency-svc-fjdll
Jan 10 14:30:21.765: INFO: Got endpoints: latency-svc-fjdll [1.502526184s]
Jan 10 14:30:21.911: INFO: Created: latency-svc-f6lbj
Jan 10 14:30:21.915: INFO: Got endpoints: latency-svc-f6lbj [1.472695001s]
Jan 10 14:30:22.009: INFO: Created: latency-svc-kjhz2
Jan 10 14:30:22.050: INFO: Got endpoints: latency-svc-kjhz2 [1.544868325s]
Jan 10 14:30:22.082: INFO: Created: latency-svc-tr4pg
Jan 10 14:30:22.094: INFO: Got endpoints: latency-svc-tr4pg [1.476126897s]
Jan 10 14:30:22.148: INFO: Created: latency-svc-4dgjw
Jan 10 14:30:22.213: INFO: Got endpoints: latency-svc-4dgjw [1.54537523s]
Jan 10 14:30:22.256: INFO: Created: latency-svc-8qfxp
Jan 10 14:30:22.268: INFO: Got endpoints: latency-svc-8qfxp [1.494007437s]
Jan 10 14:30:22.447: INFO: Created: latency-svc-9r6nw
Jan 10 14:30:22.450: INFO: Got endpoints: latency-svc-9r6nw [1.627119863s]
Jan 10 14:30:22.507: INFO: Created: latency-svc-rkvd8
Jan 10 14:30:22.529: INFO: Got endpoints: latency-svc-rkvd8 [1.572741905s]
Jan 10 14:30:22.626: INFO: Created: latency-svc-dt9fz
Jan 10 14:30:22.636: INFO: Got endpoints: latency-svc-dt9fz [1.655468417s]
Jan 10 14:30:22.700: INFO: Created: latency-svc-4gps8
Jan 10 14:30:22.779: INFO: Created: latency-svc-4cr96
Jan 10 14:30:22.780: INFO: Got endpoints: latency-svc-4gps8 [1.690752215s]
Jan 10 14:30:22.826: INFO: Got endpoints: latency-svc-4cr96 [1.657768484s]
Jan 10 14:30:22.867: INFO: Created: latency-svc-kpfbm
Jan 10 14:30:22.928: INFO: Got endpoints: latency-svc-kpfbm [1.628657526s]
Jan 10 14:30:22.961: INFO: Created: latency-svc-vwhbl
Jan 10 14:30:22.961: INFO: Got endpoints: latency-svc-vwhbl [1.467459845s]
Jan 10 14:30:23.008: INFO: Created: latency-svc-htmhd
Jan 10 14:30:23.022: INFO: Got endpoints: latency-svc-htmhd [1.479639063s]
Jan 10 14:30:23.098: INFO: Created: latency-svc-dz4rc
Jan 10 14:30:23.104: INFO: Got endpoints: latency-svc-dz4rc [1.401394758s]
Jan 10 14:30:23.141: INFO: Created: latency-svc-v5k7l
Jan 10 14:30:23.151: INFO: Got endpoints: latency-svc-v5k7l [1.385495801s]
Jan 10 14:30:23.245: INFO: Created: latency-svc-xlzvd
Jan 10 14:30:23.271: INFO: Got endpoints: latency-svc-xlzvd [1.355461113s]
Jan 10 14:30:23.295: INFO: Created: latency-svc-7xpwv
Jan 10 14:30:23.299: INFO: Got endpoints: latency-svc-7xpwv [1.248731068s]
Jan 10 14:30:23.331: INFO: Created: latency-svc-2jbxz
Jan 10 14:30:23.340: INFO: Got endpoints: latency-svc-2jbxz [1.24528935s]
Jan 10 14:30:23.403: INFO: Created: latency-svc-pgwzg
Jan 10 14:30:23.416: INFO: Got endpoints: latency-svc-pgwzg [1.202551398s]
Jan 10 14:30:23.455: INFO: Created: latency-svc-vwsmq
Jan 10 14:30:23.461: INFO: Got endpoints: latency-svc-vwsmq [1.19330764s]
Jan 10 14:30:23.634: INFO: Created: latency-svc-xbzf5
Jan 10 14:30:23.641: INFO: Got endpoints: latency-svc-xbzf5 [1.190979296s]
Jan 10 14:30:23.686: INFO: Created: latency-svc-phzp4
Jan 10 14:30:23.690: INFO: Got endpoints: latency-svc-phzp4 [1.160773499s]
Jan 10 14:30:23.817: INFO: Created: latency-svc-w8fzd
Jan 10 14:30:23.845: INFO: Got endpoints: latency-svc-w8fzd [1.208569648s]
Jan 10 14:30:23.970: INFO: Created: latency-svc-bq6tl
Jan 10 14:30:23.974: INFO: Got endpoints: latency-svc-bq6tl [1.193988957s]
Jan 10 14:30:24.038: INFO: Created: latency-svc-ld599
Jan 10 14:30:24.047: INFO: Got endpoints: latency-svc-ld599 [1.221512168s]
Jan 10 14:30:24.140: INFO: Created: latency-svc-qt7t7
Jan 10 14:30:24.156: INFO: Got endpoints: latency-svc-qt7t7 [1.227559006s]
Jan 10 14:30:24.203: INFO: Created: latency-svc-4qzq2
Jan 10 14:30:24.219: INFO: Got endpoints: latency-svc-4qzq2 [171.617274ms]
Jan 10 14:30:24.314: INFO: Created: latency-svc-xqlm8
Jan 10 14:30:24.316: INFO: Got endpoints: latency-svc-xqlm8 [1.354906236s]
Jan 10 14:30:24.485: INFO: Created: latency-svc-kwwxg
Jan 10 14:30:24.514: INFO: Got endpoints: latency-svc-kwwxg [1.490879849s]
Jan 10 14:30:24.517: INFO: Created: latency-svc-gd6bb
Jan 10 14:30:24.524: INFO: Got endpoints: latency-svc-gd6bb [1.419980047s]
Jan 10 14:30:24.695: INFO: Created: latency-svc-xlqzr
Jan 10 14:30:24.716: INFO: Got endpoints: latency-svc-xlqzr [1.564468362s]
Jan 10 14:30:24.751: INFO: Created: latency-svc-lzkdq
Jan 10 14:30:24.751: INFO: Got endpoints: latency-svc-lzkdq [1.479967632s]
Jan 10 14:30:24.841: INFO: Created: latency-svc-bkc48
Jan 10 14:30:24.857: INFO: Got endpoints: latency-svc-bkc48 [1.558080048s]
Jan 10 14:30:24.923: INFO: Created: latency-svc-swvbv
Jan 10 14:30:25.012: INFO: Got endpoints: latency-svc-swvbv [1.672214655s]
Jan 10 14:30:25.041: INFO: Created: latency-svc-q9k42
Jan 10 14:30:25.062: INFO: Got endpoints: latency-svc-q9k42 [1.646047287s]
Jan 10 14:30:25.198: INFO: Created: latency-svc-g4b2q
Jan 10 14:30:25.209: INFO: Got endpoints: latency-svc-g4b2q [1.747413552s]
Jan 10 14:30:25.420: INFO: Created: latency-svc-cxlhg
Jan 10 14:30:25.455: INFO: Got endpoints: latency-svc-cxlhg [1.813604612s]
Jan 10 14:30:25.513: INFO: Created: latency-svc-fm8pr
Jan 10 14:30:25.658: INFO: Created: latency-svc-lsmwd
Jan 10 14:30:25.659: INFO: Got endpoints: latency-svc-fm8pr [1.968310566s]
Jan 10 14:30:25.669: INFO: Got endpoints: latency-svc-lsmwd [1.824080359s]
Jan 10 14:30:25.874: INFO: Created: latency-svc-d5v9c
Jan 10 14:30:25.883: INFO: Got endpoints: latency-svc-d5v9c [1.908361018s]
Jan 10 14:30:26.002: INFO: Created: latency-svc-qgbjp
Jan 10 14:30:26.014: INFO: Got endpoints: latency-svc-qgbjp [1.857966105s]
Jan 10 14:30:26.044: INFO: Created: latency-svc-rmwhj
Jan 10 14:30:26.047: INFO: Got endpoints: latency-svc-rmwhj [1.827412238s]
Jan 10 14:30:26.106: INFO: Created: latency-svc-54b6l
Jan 10 14:30:26.176: INFO: Got endpoints: latency-svc-54b6l [1.859542104s]
Jan 10 14:30:26.180: INFO: Created: latency-svc-smjcq
Jan 10 14:30:26.197: INFO: Got endpoints: latency-svc-smjcq [1.682417726s]
Jan 10 14:30:26.232: INFO: Created: latency-svc-mltl9
Jan 10 14:30:26.239: INFO: Got endpoints: latency-svc-mltl9 [1.714700873s]
Jan 10 14:30:26.340: INFO: Created: latency-svc-7tg5g
Jan 10 14:30:26.353: INFO: Got endpoints: latency-svc-7tg5g [1.63722768s]
Jan 10 14:30:26.403: INFO: Created: latency-svc-ts9sg
Jan 10 14:30:26.408: INFO: Got endpoints: latency-svc-ts9sg [1.657290494s]
Jan 10 14:30:26.512: INFO: Created: latency-svc-rfb5g
Jan 10 14:30:26.515: INFO: Got endpoints: latency-svc-rfb5g [1.657032102s]
Jan 10 14:30:26.660: INFO: Created: latency-svc-frh98
Jan 10 14:30:26.675: INFO: Got endpoints: latency-svc-frh98 [1.662227711s]
Jan 10 14:30:26.704: INFO: Created: latency-svc-xf8z2
Jan 10 14:30:26.719: INFO: Got endpoints: latency-svc-xf8z2 [1.656247035s]
Jan 10 14:30:26.825: INFO: Created: latency-svc-88c9g
Jan 10 14:30:26.828: INFO: Got endpoints: latency-svc-88c9g [1.618717855s]
Jan 10 14:30:26.891: INFO: Created: latency-svc-5cqwm
Jan 10 14:30:26.892: INFO: Got endpoints: latency-svc-5cqwm [1.436433988s]
Jan 10 14:30:26.985: INFO: Created: latency-svc-htxp6
Jan 10 14:30:26.991: INFO: Got endpoints: latency-svc-htxp6 [1.331934757s]
Jan 10 14:30:27.035: INFO: Created: latency-svc-vwvp7
Jan 10 14:30:27.046: INFO: Got endpoints: latency-svc-vwvp7 [1.376072033s]
Jan 10 14:30:27.152: INFO: Created: latency-svc-hrw6g
Jan 10 14:30:27.177: INFO: Created: latency-svc-2z59s
Jan 10 14:30:27.181: INFO: Got endpoints: latency-svc-hrw6g [1.298222555s]
Jan 10 14:30:27.249: INFO: Created: latency-svc-k78gh
Jan 10 14:30:27.249: INFO: Got endpoints: latency-svc-2z59s [1.233936813s]
Jan 10 14:30:27.345: INFO: Got endpoints: latency-svc-k78gh [1.297645142s]
Jan 10 14:30:27.385: INFO: Created: latency-svc-m9jtm
Jan 10 14:30:27.396: INFO: Got endpoints: latency-svc-m9jtm [1.219832072s]
Jan 10 14:30:27.583: INFO: Created: latency-svc-tx4qw
Jan 10 14:30:27.626: INFO: Got endpoints: latency-svc-tx4qw [1.428610989s]
Jan 10 14:30:27.790: INFO: Created: latency-svc-lm4nm
Jan 10 14:30:27.878: INFO: Created: latency-svc-fl597
Jan 10 14:30:27.881: INFO: Got endpoints: latency-svc-lm4nm [1.640904913s]
Jan 10 14:30:28.018: INFO: Got endpoints: latency-svc-fl597 [1.664536537s]
Jan 10 14:30:28.048: INFO: Created: latency-svc-9drsv
Jan 10 14:30:28.055: INFO: Got endpoints: latency-svc-9drsv [1.646691051s]
Jan 10 14:30:28.303: INFO: Created: latency-svc-pxhm2
Jan 10 14:30:28.306: INFO: Got endpoints: latency-svc-pxhm2 [1.790751823s]
Jan 10 14:30:28.510: INFO: Created: latency-svc-wz8rh
Jan 10 14:30:28.520: INFO: Got endpoints: latency-svc-wz8rh [1.845242157s]
Jan 10 14:30:28.584: INFO: Created: latency-svc-fckbp
Jan 10 14:30:28.600: INFO: Got endpoints: latency-svc-fckbp [1.881092825s]
Jan 10 14:30:28.788: INFO: Created: latency-svc-xkm4w
Jan 10 14:30:28.797: INFO: Got endpoints: latency-svc-xkm4w [1.969476443s]
Jan 10 14:30:28.886: INFO: Created: latency-svc-x79t4
Jan 10 14:30:28.897: INFO: Got endpoints: latency-svc-x79t4 [2.00536463s]
Jan 10 14:30:28.939: INFO: Created: latency-svc-p2nqf
Jan 10 14:30:28.940: INFO: Got endpoints: latency-svc-p2nqf [1.948617521s]
Jan 10 14:30:28.973: INFO: Created: latency-svc-knbtp
Jan 10 14:30:29.054: INFO: Got endpoints: latency-svc-knbtp [2.007763243s]
Jan 10 14:30:29.071: INFO: Created: latency-svc-wqcf4
Jan 10 14:30:29.079: INFO: Got endpoints: latency-svc-wqcf4 [1.897299501s]
Jan 10 14:30:29.120: INFO: Created: latency-svc-5s5dn
Jan 10 14:30:29.140: INFO: Got endpoints: latency-svc-5s5dn [1.8909276s]
Jan 10 14:30:29.297: INFO: Created: latency-svc-rrjtl
Jan 10 14:30:29.311: INFO: Got endpoints: latency-svc-rrjtl [1.965303686s]
Jan 10 14:30:29.370: INFO: Created: latency-svc-ks5xr
Jan 10 14:30:29.441: INFO: Got endpoints: latency-svc-ks5xr [2.044611271s]
Jan 10 14:30:29.455: INFO: Created: latency-svc-94mnr
Jan 10 14:30:29.485: INFO: Got endpoints: latency-svc-94mnr [1.858518956s]
Jan 10 14:30:29.513: INFO: Created: latency-svc-fw82d
Jan 10 14:30:29.523: INFO: Got endpoints: latency-svc-fw82d [1.64178302s]
Jan 10 14:30:29.660: INFO: Created: latency-svc-qdh29
Jan 10 14:30:29.662: INFO: Got endpoints: latency-svc-qdh29 [1.643535827s]
Jan 10 14:30:29.834: INFO: Created: latency-svc-fgpdb
Jan 10 14:30:29.835: INFO: Got endpoints: latency-svc-fgpdb [1.779665174s]
Jan 10 14:30:29.899: INFO: Created: latency-svc-x9jtz
Jan 10 14:30:29.961: INFO: Got endpoints: latency-svc-x9jtz [1.655108022s]
Jan 10 14:30:29.982: INFO: Created: latency-svc-hwm4m
Jan 10 14:30:30.001: INFO: Got endpoints: latency-svc-hwm4m [1.481003027s]
Jan 10 14:30:30.047: INFO: Created: latency-svc-n4dqf
Jan 10 14:30:30.136: INFO: Got endpoints: latency-svc-n4dqf [1.535486752s]
Jan 10 14:30:30.152: INFO: Created: latency-svc-jhsbt
Jan 10 14:30:30.155: INFO: Got endpoints: latency-svc-jhsbt [1.356914082s]
Jan 10 14:30:30.228: INFO: Created: latency-svc-4pqdt
Jan 10 14:30:30.344: INFO: Got endpoints: latency-svc-4pqdt [1.445995453s]
Jan 10 14:30:30.364: INFO: Created: latency-svc-hdrhn
Jan 10 14:30:30.413: INFO: Got endpoints: latency-svc-hdrhn [1.473509872s]
Jan 10 14:30:30.420: INFO: Created: latency-svc-gshxg
Jan 10 14:30:30.510: INFO: Got endpoints: latency-svc-gshxg [1.455981587s]
Jan 10 14:30:30.523: INFO: Created: latency-svc-6vtq4
Jan 10 14:30:30.540: INFO: Got endpoints: latency-svc-6vtq4 [1.461177461s]
Jan 10 14:30:30.691: INFO: Created: latency-svc-c4gws
Jan 10 14:30:30.717: INFO: Got endpoints: latency-svc-c4gws [1.576893284s]
Jan 10 14:30:30.877: INFO: Created: latency-svc-8ndjr
Jan 10 14:30:30.896: INFO: Got endpoints: latency-svc-8ndjr [1.584633518s]
Jan 10 14:30:30.937: INFO: Created: latency-svc-9fcj8
Jan 10 14:30:30.950: INFO: Got endpoints: latency-svc-9fcj8 [1.508983375s]
Jan 10 14:30:31.055: INFO: Created: latency-svc-xdwsc
Jan 10 14:30:31.093: INFO: Got endpoints: latency-svc-xdwsc [1.608131308s]
Jan 10 14:30:31.098: INFO: Created: latency-svc-x9pb7
Jan 10 14:30:31.110: INFO: Got endpoints: latency-svc-x9pb7 [1.586675058s]
Jan 10 14:30:31.230: INFO: Created: latency-svc-xlptp
Jan 10 14:30:31.241: INFO: Got endpoints: latency-svc-xlptp [1.578509428s]
Jan 10 14:30:31.278: INFO: Created: latency-svc-xtjg4
Jan 10 14:30:31.282: INFO: Got endpoints: latency-svc-xtjg4 [1.446876854s]
Jan 10 14:30:31.325: INFO: Created: latency-svc-95grm
Jan 10 14:30:31.433: INFO: Got endpoints: latency-svc-95grm [1.470909441s]
Jan 10 14:30:31.441: INFO: Created: latency-svc-9fdhc
Jan 10 14:30:31.463: INFO: Got endpoints: latency-svc-9fdhc [1.461462634s]
Jan 10 14:30:31.514: INFO: Created: latency-svc-fg9j5
Jan 10 14:30:31.607: INFO: Got endpoints: latency-svc-fg9j5 [1.470362483s]
Jan 10 14:30:31.660: INFO: Created: latency-svc-ng85x
Jan 10 14:30:31.779: INFO: Created: latency-svc-94tvj
Jan 10 14:30:31.780: INFO: Got endpoints: latency-svc-ng85x [1.624862091s]
Jan 10 14:30:31.797: INFO: Got endpoints: latency-svc-94tvj [1.453059457s]
Jan 10 14:30:31.830: INFO: Created: latency-svc-xgx6k
Jan 10 14:30:31.835: INFO: Got endpoints: latency-svc-xgx6k [1.420961918s]
Jan 10 14:30:31.881: INFO: Created: latency-svc-8p68f
Jan 10 14:30:31.980: INFO: Got endpoints: latency-svc-8p68f [1.4694694s]
Jan 10 14:30:31.984: INFO: Created: latency-svc-qhqrx
Jan 10 14:30:31.993: INFO: Got endpoints: latency-svc-qhqrx [1.451521753s]
Jan 10 14:30:32.037: INFO: Created: latency-svc-v4lhz
Jan 10 14:30:32.042: INFO: Got endpoints: latency-svc-v4lhz [1.323597965s]
Jan 10 14:30:32.077: INFO: Created: latency-svc-c6m4h
Jan 10 14:30:32.167: INFO: Got endpoints: latency-svc-c6m4h [1.270483914s]
Jan 10 14:30:32.185: INFO: Created: latency-svc-44jfh
Jan 10 14:30:32.197: INFO: Got endpoints: latency-svc-44jfh [1.246250605s]
Jan 10 14:30:32.233: INFO: Created: latency-svc-lrrwr
Jan 10 14:30:32.247: INFO: Got endpoints: latency-svc-lrrwr [1.153757179s]
Jan 10 14:30:32.351: INFO: Created: latency-svc-49zl9
Jan 10 14:30:32.362: INFO: Got endpoints: latency-svc-49zl9 [1.251870021s]
Jan 10 14:30:32.420: INFO: Created: latency-svc-dpgtm
Jan 10 14:30:32.430: INFO: Got endpoints: latency-svc-dpgtm [1.188439679s]
Jan 10 14:30:32.641: INFO: Created: latency-svc-kdlfr
Jan 10 14:30:32.648: INFO: Got endpoints: latency-svc-kdlfr [1.36591245s]
Jan 10 14:30:32.846: INFO: Created: latency-svc-wngc7
Jan 10 14:30:32.863: INFO: Got endpoints: latency-svc-wngc7 [1.430147661s]
Jan 10 14:30:32.923: INFO: Created: latency-svc-f9mjh
Jan 10 14:30:32.928: INFO: Got endpoints: latency-svc-f9mjh [1.4644074s]
Jan 10 14:30:33.027: INFO: Created: latency-svc-rl2lj
Jan 10 14:30:33.042: INFO: Got endpoints: latency-svc-rl2lj [1.433604474s]
Jan 10 14:30:33.083: INFO: Created: latency-svc-ksk8x
Jan 10 14:30:33.083: INFO: Got endpoints: latency-svc-ksk8x [1.30290008s]
Jan 10 14:30:33.282: INFO: Created: latency-svc-9pp25
Jan 10 14:30:33.282: INFO: Got endpoints: latency-svc-9pp25 [1.484729164s]
Jan 10 14:30:33.458: INFO: Created: latency-svc-jdm4g
Jan 10 14:30:33.481: INFO: Got endpoints: latency-svc-jdm4g [1.645839654s]
Jan 10 14:30:33.573: INFO: Created: latency-svc-f6hn7
Jan 10 14:30:33.633: INFO: Got endpoints: latency-svc-f6hn7 [1.653096045s]
Jan 10 14:30:33.677: INFO: Created: latency-svc-td7z5
Jan 10 14:30:33.835: INFO: Got endpoints: latency-svc-td7z5 [1.841831092s]
Jan 10 14:30:33.839: INFO: Created: latency-svc-vmtkf
Jan 10 14:30:33.849: INFO: Got endpoints: latency-svc-vmtkf [1.807165401s]
Jan 10 14:30:33.932: INFO: Created: latency-svc-d76jm
Jan 10 14:30:34.016: INFO: Got endpoints: latency-svc-d76jm [1.848319874s]
Jan 10 14:30:34.020: INFO: Created: latency-svc-qmmxv
Jan 10 14:30:34.030: INFO: Got endpoints: latency-svc-qmmxv [1.833114726s]
Jan 10 14:30:34.081: INFO: Created: latency-svc-8fhn5
Jan 10 14:30:34.100: INFO: Got endpoints: latency-svc-8fhn5 [1.85308249s]
Jan 10 14:30:34.231: INFO: Created: latency-svc-dc7gn
Jan 10 14:30:34.250: INFO: Got endpoints: latency-svc-dc7gn [1.887463724s]
Jan 10 14:30:34.297: INFO: Created: latency-svc-vcr2g
Jan 10 14:30:34.306: INFO: Got endpoints: latency-svc-vcr2g [1.876182044s]
Jan 10 14:30:34.397: INFO: Created: latency-svc-hcjqp
Jan 10 14:30:34.405: INFO: Got endpoints: latency-svc-hcjqp [1.756891797s]
Jan 10 14:30:34.454: INFO: Created: latency-svc-q4xbl
Jan 10 14:30:34.460: INFO: Got endpoints: latency-svc-q4xbl [1.596669403s]
Jan 10 14:30:34.568: INFO: Created: latency-svc-5278q
Jan 10 14:30:34.618: INFO: Got endpoints: latency-svc-5278q [1.690217354s]
Jan 10 14:30:34.626: INFO: Created: latency-svc-s9lkk
Jan 10 14:30:34.652: INFO: Got endpoints: latency-svc-s9lkk [1.610672319s]
Jan 10 14:30:34.733: INFO: Created: latency-svc-gjb4w
Jan 10 14:30:34.745: INFO: Got endpoints: latency-svc-gjb4w [1.662435961s]
Jan 10 14:30:34.780: INFO: Created: latency-svc-tkbj2
Jan 10 14:30:34.789: INFO: Got endpoints: latency-svc-tkbj2 [1.506552385s]
Jan 10 14:30:34.952: INFO: Created: latency-svc-zmvwm
Jan 10 14:30:34.961: INFO: Got endpoints: latency-svc-zmvwm [1.480234581s]
Jan 10 14:30:35.010: INFO: Created: latency-svc-9d86c
Jan 10 14:30:35.026: INFO: Got endpoints: latency-svc-9d86c [1.392649729s]
Jan 10 14:30:35.100: INFO: Created: latency-svc-h4jsc
Jan 10 14:30:35.119: INFO: Got endpoints: latency-svc-h4jsc [1.283681748s]
Jan 10 14:30:35.148: INFO: Created: latency-svc-pshw4
Jan 10 14:30:35.188: INFO: Got endpoints: latency-svc-pshw4 [1.338355531s]
Jan 10 14:30:35.298: INFO: Created: latency-svc-ckd7j
Jan 10 14:30:35.308: INFO: Got endpoints: latency-svc-ckd7j [1.292149718s]
Jan 10 14:30:35.356: INFO: Created: latency-svc-6tcn2
Jan 10 14:30:35.369: INFO: Got endpoints: latency-svc-6tcn2 [1.338905056s]
Jan 10 14:30:35.509: INFO: Created: latency-svc-ts2fz
Jan 10 14:30:35.521: INFO: Got endpoints: latency-svc-ts2fz [1.419878893s]
Jan 10 14:30:35.574: INFO: Created: latency-svc-kr4wh
Jan 10 14:30:35.580: INFO: Got endpoints: latency-svc-kr4wh [1.33036587s]
Jan 10 14:30:35.674: INFO: Created: latency-svc-pf2wr
Jan 10 14:30:35.682: INFO: Got endpoints: latency-svc-pf2wr [1.375903024s]
Jan 10 14:30:35.735: INFO: Created: latency-svc-j8dxz
Jan 10 14:30:35.748: INFO: Got endpoints: latency-svc-j8dxz [1.34323064s]
Jan 10 14:30:35.903: INFO: Created: latency-svc-wdb2m
Jan 10 14:30:35.915: INFO: Got endpoints: latency-svc-wdb2m [1.454276966s]
Jan 10 14:30:35.961: INFO: Created: latency-svc-sns56
Jan 10 14:30:36.030: INFO: Got endpoints: latency-svc-sns56 [1.410912468s]
Jan 10 14:30:36.035: INFO: Created: latency-svc-2lsw7
Jan 10 14:30:36.057: INFO: Got endpoints: latency-svc-2lsw7 [1.404521813s]
Jan 10 14:30:36.086: INFO: Created: latency-svc-m2szt
Jan 10 14:30:36.110: INFO: Got endpoints: latency-svc-m2szt [1.364326416s]
Jan 10 14:30:36.187: INFO: Created: latency-svc-dsrbg
Jan 10 14:30:36.192: INFO: Got endpoints: latency-svc-dsrbg [1.402494734s]
Jan 10 14:30:36.239: INFO: Created: latency-svc-xm6fv
Jan 10 14:30:36.285: INFO: Got endpoints: latency-svc-xm6fv [1.32378376s]
Jan 10 14:30:36.294: INFO: Created: latency-svc-2b4dm
Jan 10 14:30:36.352: INFO: Got endpoints: latency-svc-2b4dm [1.325838049s]
Jan 10 14:30:36.367: INFO: Created: latency-svc-4fc89
Jan 10 14:30:36.387: INFO: Got endpoints: latency-svc-4fc89 [1.26798719s]
Jan 10 14:30:36.444: INFO: Created: latency-svc-g9qjr
Jan 10 14:30:36.529: INFO: Got endpoints: latency-svc-g9qjr [1.339997234s]
Jan 10 14:30:36.574: INFO: Created: latency-svc-f8mp8
Jan 10 14:30:36.654: INFO: Got endpoints: latency-svc-f8mp8 [1.345266898s]
Jan 10 14:30:36.654: INFO: Created: latency-svc-9wqq6
Jan 10 14:30:36.670: INFO: Got endpoints: latency-svc-9wqq6 [1.299974141s]
Jan 10 14:30:36.708: INFO: Created: latency-svc-9lwh9
Jan 10 14:30:36.727: INFO: Got endpoints: latency-svc-9lwh9 [1.206305283s]
Jan 10 14:30:36.759: INFO: Created: latency-svc-5b7m5
Jan 10 14:30:36.850: INFO: Got endpoints: latency-svc-5b7m5 [1.269783984s]
Jan 10 14:30:36.885: INFO: Created: latency-svc-f2n5l
Jan 10 14:30:36.914: INFO: Got endpoints: latency-svc-f2n5l [1.230899959s]
Jan 10 14:30:36.920: INFO: Created: latency-svc-jz6qd
Jan 10 14:30:36.928: INFO: Got endpoints: latency-svc-jz6qd [1.179797181s]
Jan 10 14:30:37.011: INFO: Created: latency-svc-xvfzb
Jan 10 14:30:37.019: INFO: Got endpoints: latency-svc-xvfzb [1.103881228s]
Jan 10 14:30:37.053: INFO: Created: latency-svc-9jbxm
Jan 10 14:30:37.107: INFO: Got endpoints: latency-svc-9jbxm [1.076761312s]
Jan 10 14:30:37.117: INFO: Created: latency-svc-5lcsq
Jan 10 14:30:37.209: INFO: Got endpoints: latency-svc-5lcsq [1.152133927s]
Jan 10 14:30:37.225: INFO: Created: latency-svc-s7pfn
Jan 10 14:30:37.241: INFO: Got endpoints: latency-svc-s7pfn [1.130685567s]
Jan 10 14:30:37.271: INFO: Created: latency-svc-8z7p6
Jan 10 14:30:37.274: INFO: Got endpoints: latency-svc-8z7p6 [1.081816599s]
Jan 10 14:30:37.356: INFO: Created: latency-svc-rz2rh
Jan 10 14:30:37.359: INFO: Got endpoints: latency-svc-rz2rh [1.073508566s]
Jan 10 14:30:37.396: INFO: Created: latency-svc-2wqls
Jan 10 14:30:37.401: INFO: Got endpoints: latency-svc-2wqls [1.048735972s]
Jan 10 14:30:37.531: INFO: Created: latency-svc-sbjhp
Jan 10 14:30:37.533: INFO: Got endpoints: latency-svc-sbjhp [1.14488494s]
Jan 10 14:30:37.573: INFO: Created: latency-svc-2nmzl
Jan 10 14:30:37.581: INFO: Got endpoints: latency-svc-2nmzl [1.051644451s]
Jan 10 14:30:37.680: INFO: Created: latency-svc-9smr9
Jan 10 14:30:37.693: INFO: Got endpoints: latency-svc-9smr9 [1.037882446s]
Jan 10 14:30:37.739: INFO: Created: latency-svc-tmhqz
Jan 10 14:30:37.772: INFO: Got endpoints: latency-svc-tmhqz [1.102441169s]
Jan 10 14:30:37.782: INFO: Created: latency-svc-lwfrv
Jan 10 14:30:37.900: INFO: Got endpoints: latency-svc-lwfrv [1.172747573s]
Jan 10 14:30:37.939: INFO: Created: latency-svc-j6mgf
Jan 10 14:30:37.958: INFO: Got endpoints: latency-svc-j6mgf [1.10699077s]
Jan 10 14:30:38.125: INFO: Created: latency-svc-zz7sq
Jan 10 14:30:38.143: INFO: Got endpoints: latency-svc-zz7sq [1.228987083s]
Jan 10 14:30:38.363: INFO: Created: latency-svc-6pm89
Jan 10 14:30:38.378: INFO: Got endpoints: latency-svc-6pm89 [1.449451857s]
Jan 10 14:30:38.455: INFO: Created: latency-svc-pq9cb
Jan 10 14:30:38.456: INFO: Got endpoints: latency-svc-pq9cb [1.436457696s]
Jan 10 14:30:38.569: INFO: Created: latency-svc-zpvn6
Jan 10 14:30:38.607: INFO: Got endpoints: latency-svc-zpvn6 [1.498545544s]
Jan 10 14:30:38.640: INFO: Created: latency-svc-k4dqf
Jan 10 14:30:38.711: INFO: Got endpoints: latency-svc-k4dqf [1.501396218s]
Jan 10 14:30:38.773: INFO: Created: latency-svc-p82mx
Jan 10 14:30:38.791: INFO: Got endpoints: latency-svc-p82mx [1.549452509s]
Jan 10 14:30:38.873: INFO: Created: latency-svc-ztb2g
Jan 10 14:30:38.916: INFO: Got endpoints: latency-svc-ztb2g [1.642154909s]
Jan 10 14:30:38.925: INFO: Created: latency-svc-2bqtt
Jan 10 14:30:38.929: INFO: Got endpoints: latency-svc-2bqtt [1.569634031s]
Jan 10 14:30:39.058: INFO: Created: latency-svc-z9jnm
Jan 10 14:30:39.070: INFO: Got endpoints: latency-svc-z9jnm [1.668017057s]
Jan 10 14:30:39.117: INFO: Created: latency-svc-l22c6
Jan 10 14:30:39.261: INFO: Got endpoints: latency-svc-l22c6 [1.728625164s]
Jan 10 14:30:39.286: INFO: Created: latency-svc-zv766
Jan 10 14:30:39.317: INFO: Got endpoints: latency-svc-zv766 [1.736228009s]
Jan 10 14:30:39.349: INFO: Created: latency-svc-pd8lx
Jan 10 14:30:39.351: INFO: Got endpoints: latency-svc-pd8lx [1.657838079s]
Jan 10 14:30:39.434: INFO: Created: latency-svc-vtj46
Jan 10 14:30:39.442: INFO: Got endpoints: latency-svc-vtj46 [1.669476403s]
Jan 10 14:30:39.477: INFO: Created: latency-svc-8jk7b
Jan 10 14:30:39.485: INFO: Got endpoints: latency-svc-8jk7b [1.584338974s]
Jan 10 14:30:39.517: INFO: Created: latency-svc-6x699
Jan 10 14:30:39.586: INFO: Got endpoints: latency-svc-6x699 [1.627742762s]
Jan 10 14:30:39.639: INFO: Created: latency-svc-47dwx
Jan 10 14:30:39.646: INFO: Got endpoints: latency-svc-47dwx [1.503157404s]
Jan 10 14:30:39.647: INFO: Latencies: [112.733001ms 166.871646ms 171.617274ms 264.974341ms 323.721166ms 503.134629ms 565.711432ms 678.481149ms 727.547907ms 833.690911ms 882.918578ms 1.01657619s 1.037882446s 1.04096802s 1.048735972s 1.051644451s 1.073508566s 1.076761312s 1.081816599s 1.102441169s 1.103881228s 1.10699077s 1.130685567s 1.14488494s 1.149556002s 1.152133927s 1.153757179s 1.160773499s 1.172747573s 1.179797181s 1.188439679s 1.190979296s 1.19330764s 1.193988957s 1.202551398s 1.206305283s 1.208569648s 1.219832072s 1.221512168s 1.227559006s 1.22782484s 1.228987083s 1.230899959s 1.233936813s 1.24528935s 1.246250605s 1.248731068s 1.251870021s 1.26798719s 1.269783984s 1.270483914s 1.283681748s 1.292149718s 1.297645142s 1.298222555s 1.299974141s 1.30290008s 1.323597965s 1.32378376s 1.325838049s 1.33036587s 1.331934757s 1.338355531s 1.338905056s 1.339997234s 1.34323064s 1.345266898s 1.354906236s 1.355461113s 1.356914082s 1.358741333s 1.364326416s 1.36591245s 1.375903024s 1.376072033s 1.385495801s 1.392649729s 1.401394758s 1.402494734s 1.404521813s 1.410912468s 1.419878893s 1.419980047s 1.420961918s 1.428610989s 1.430147661s 1.433604474s 1.436225468s 1.436433988s 1.436457696s 1.441804987s 1.445995453s 1.446876854s 1.449451857s 1.451521753s 1.453059457s 1.454276966s 1.455981587s 1.461177461s 1.461462634s 1.4644074s 1.467459845s 1.4694694s 1.470362483s 1.470909441s 1.472695001s 1.473509872s 1.476126897s 1.479639063s 1.479967632s 1.480234581s 1.481003027s 1.484729164s 1.490879849s 1.494007437s 1.498545544s 1.498638379s 1.501396218s 1.502526184s 1.503157404s 1.506552385s 1.508983375s 1.535486752s 1.544868325s 1.54537523s 1.549452509s 1.558080048s 1.564468362s 1.569634031s 1.572741905s 1.576893284s 1.578509428s 1.584338974s 1.584633518s 1.586675058s 1.596669403s 1.608131308s 1.610672319s 1.618717855s 1.624862091s 1.627119863s 1.627742762s 1.628657526s 1.63722768s 1.640904913s 1.64178302s 1.642154909s 1.643535827s 1.645839654s 1.646047287s 1.646691051s 1.653096045s 1.655108022s 1.655468417s 1.656247035s 1.657032102s 1.657290494s 1.657768484s 1.657838079s 1.662227711s 1.662435961s 1.664536537s 1.668017057s 1.669476403s 1.672214655s 1.682417726s 1.690217354s 1.690752215s 1.714700873s 1.728625164s 1.736228009s 1.747413552s 1.756891797s 1.779665174s 1.790751823s 1.807165401s 1.813604612s 1.824080359s 1.827412238s 1.833114726s 1.841831092s 1.845242157s 1.848319874s 1.85308249s 1.857966105s 1.858518956s 1.859542104s 1.876182044s 1.881092825s 1.887463724s 1.8909276s 1.897299501s 1.908361018s 1.948617521s 1.965303686s 1.968310566s 1.969476443s 2.00536463s 2.007763243s 2.044611271s]
Jan 10 14:30:39.647: INFO: 50 %ile: 1.4644074s
Jan 10 14:30:39.647: INFO: 90 %ile: 1.841831092s
Jan 10 14:30:39.647: INFO: 99 %ile: 2.007763243s
Jan 10 14:30:39.647: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:30:39.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4675" for this suite.
Jan 10 14:31:11.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:31:11.972: INFO: namespace svc-latency-4675 deletion completed in 32.195731088s

• [SLOW TEST:59.335 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:31:11.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f09e378d-cc5c-4421-9f58-05b4825f7a6a
STEP: Creating a pod to test consume secrets
Jan 10 14:31:12.078: INFO: Waiting up to 5m0s for pod "pod-secrets-1f07fcef-b54a-47b9-807c-017df742182f" in namespace "secrets-8593" to be "success or failure"
Jan 10 14:31:12.105: INFO: Pod "pod-secrets-1f07fcef-b54a-47b9-807c-017df742182f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.028521ms
Jan 10 14:31:14.116: INFO: Pod "pod-secrets-1f07fcef-b54a-47b9-807c-017df742182f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037016805s
Jan 10 14:31:16.130: INFO: Pod "pod-secrets-1f07fcef-b54a-47b9-807c-017df742182f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051618324s
Jan 10 14:31:18.141: INFO: Pod "pod-secrets-1f07fcef-b54a-47b9-807c-017df742182f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062456705s
Jan 10 14:31:20.154: INFO: Pod "pod-secrets-1f07fcef-b54a-47b9-807c-017df742182f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075565074s
STEP: Saw pod success
Jan 10 14:31:20.154: INFO: Pod "pod-secrets-1f07fcef-b54a-47b9-807c-017df742182f" satisfied condition "success or failure"
Jan 10 14:31:20.159: INFO: Trying to get logs from node iruya-node pod pod-secrets-1f07fcef-b54a-47b9-807c-017df742182f container secret-volume-test: 
STEP: delete the pod
Jan 10 14:31:20.367: INFO: Waiting for pod pod-secrets-1f07fcef-b54a-47b9-807c-017df742182f to disappear
Jan 10 14:31:20.383: INFO: Pod pod-secrets-1f07fcef-b54a-47b9-807c-017df742182f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:31:20.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8593" for this suite.
Jan 10 14:31:26.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:31:26.667: INFO: namespace secrets-8593 deletion completed in 6.274731759s

• [SLOW TEST:14.694 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
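The `defaultMode` and `fsGroup` settings exercised by the Secrets test above are easy to get wrong because the API serializes file modes as decimal integers while they are almost always written in octal. A quick illustration (plain arithmetic, not Kubernetes API calls):

```python
# In a pod manifest, `defaultMode: 256` is octal 0400 (owner read-only) --
# the kind of restricted mode this test verifies on the mounted secret files.
assert 0o400 == 256
assert oct(292) == '0o444'  # decimal 292 is r--r--r--

# fsGroup changes the group ownership of the volume's files; the permission
# bits themselves still come from defaultMode (simplified statement -- see
# the pod securityContext documentation for the full ownership rules).
```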
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:31:26.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 10 14:31:36.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-c219ae78-ea9d-4811-b5d1-3b84ce75cefd -c busybox-main-container --namespace=emptydir-9225 -- cat /usr/share/volumeshare/shareddata.txt'
Jan 10 14:31:39.468: INFO: stderr: "I0110 14:31:39.137418    2971 log.go:172] (0xc0007e6420) (0xc0004bae60) Create stream\nI0110 14:31:39.137552    2971 log.go:172] (0xc0007e6420) (0xc0004bae60) Stream added, broadcasting: 1\nI0110 14:31:39.147687    2971 log.go:172] (0xc0007e6420) Reply frame received for 1\nI0110 14:31:39.147731    2971 log.go:172] (0xc0007e6420) (0xc000990000) Create stream\nI0110 14:31:39.147744    2971 log.go:172] (0xc0007e6420) (0xc000990000) Stream added, broadcasting: 3\nI0110 14:31:39.149948    2971 log.go:172] (0xc0007e6420) Reply frame received for 3\nI0110 14:31:39.150025    2971 log.go:172] (0xc0007e6420) (0xc000a1a000) Create stream\nI0110 14:31:39.150046    2971 log.go:172] (0xc0007e6420) (0xc000a1a000) Stream added, broadcasting: 5\nI0110 14:31:39.152426    2971 log.go:172] (0xc0007e6420) Reply frame received for 5\nI0110 14:31:39.286471    2971 log.go:172] (0xc0007e6420) Data frame received for 3\nI0110 14:31:39.286543    2971 log.go:172] (0xc000990000) (3) Data frame handling\nI0110 14:31:39.286618    2971 log.go:172] (0xc000990000) (3) Data frame sent\nI0110 14:31:39.457596    2971 log.go:172] (0xc0007e6420) Data frame received for 1\nI0110 14:31:39.457651    2971 log.go:172] (0xc0004bae60) (1) Data frame handling\nI0110 14:31:39.457667    2971 log.go:172] (0xc0004bae60) (1) Data frame sent\nI0110 14:31:39.458096    2971 log.go:172] (0xc0007e6420) (0xc0004bae60) Stream removed, broadcasting: 1\nI0110 14:31:39.458508    2971 log.go:172] (0xc0007e6420) (0xc000990000) Stream removed, broadcasting: 3\nI0110 14:31:39.459126    2971 log.go:172] (0xc0007e6420) (0xc000a1a000) Stream removed, broadcasting: 5\nI0110 14:31:39.459194    2971 log.go:172] (0xc0007e6420) (0xc0004bae60) Stream removed, broadcasting: 1\nI0110 14:31:39.459218    2971 log.go:172] (0xc0007e6420) (0xc000990000) Stream removed, broadcasting: 3\nI0110 14:31:39.459239    2971 log.go:172] (0xc0007e6420) (0xc000a1a000) Stream removed, broadcasting: 5\n"
Jan 10 14:31:39.469: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:31:39.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9225" for this suite.
Jan 10 14:31:45.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:31:45.688: INFO: namespace emptydir-9225 deletion completed in 6.209334644s

• [SLOW TEST:19.020 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:31:45.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-4cd0ae0a-731d-45c2-86c9-0752da4f981b
Jan 10 14:31:45.783: INFO: Pod name my-hostname-basic-4cd0ae0a-731d-45c2-86c9-0752da4f981b: Found 0 pods out of 1
Jan 10 14:31:50.805: INFO: Pod name my-hostname-basic-4cd0ae0a-731d-45c2-86c9-0752da4f981b: Found 1 pods out of 1
Jan 10 14:31:50.805: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-4cd0ae0a-731d-45c2-86c9-0752da4f981b" are running
Jan 10 14:31:54.822: INFO: Pod "my-hostname-basic-4cd0ae0a-731d-45c2-86c9-0752da4f981b-b8jws" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 14:31:45 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 14:31:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4cd0ae0a-731d-45c2-86c9-0752da4f981b]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 14:31:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4cd0ae0a-731d-45c2-86c9-0752da4f981b]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 14:31:45 +0000 UTC Reason: Message:}])
Jan 10 14:31:54.823: INFO: Trying to dial the pod
Jan 10 14:31:59.892: INFO: Controller my-hostname-basic-4cd0ae0a-731d-45c2-86c9-0752da4f981b: Got expected result from replica 1 [my-hostname-basic-4cd0ae0a-731d-45c2-86c9-0752da4f981b-b8jws]: "my-hostname-basic-4cd0ae0a-731d-45c2-86c9-0752da4f981b-b8jws", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:31:59.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-917" for this suite.
Jan 10 14:32:05.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:32:06.067: INFO: namespace replication-controller-917 deletion completed in 6.161960549s

• [SLOW TEST:20.378 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:32:06.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:32:12.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7402" for this suite.
Jan 10 14:32:18.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:32:18.683: INFO: namespace namespaces-7402 deletion completed in 6.196094297s
STEP: Destroying namespace "nsdeletetest-3267" for this suite.
Jan 10 14:32:18.687: INFO: Namespace nsdeletetest-3267 was already deleted
STEP: Destroying namespace "nsdeletetest-225" for this suite.
Jan 10 14:32:24.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:32:24.924: INFO: namespace nsdeletetest-225 deletion completed in 6.237488629s

• [SLOW TEST:18.857 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:32:24.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 10 14:32:25.092: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan 10 14:32:25.564: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 10 14:32:27.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:32:29.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:32:31.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:32:33.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263545, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:32:40.772: INFO: Waited 4.84799637s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:32:41.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5005" for this suite.
Jan 10 14:32:47.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:32:47.426: INFO: namespace aggregator-5005 deletion completed in 6.164583094s

• [SLOW TEST:22.501 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:32:47.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-92766a53-e881-495d-aebe-5e972b547f45
STEP: Creating a pod to test consume configMaps
Jan 10 14:32:47.567: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-029a4e15-cfad-4cec-b5a9-c96f787a46dd" in namespace "projected-9735" to be "success or failure"
Jan 10 14:32:47.588: INFO: Pod "pod-projected-configmaps-029a4e15-cfad-4cec-b5a9-c96f787a46dd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.835039ms
Jan 10 14:32:49.605: INFO: Pod "pod-projected-configmaps-029a4e15-cfad-4cec-b5a9-c96f787a46dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038568136s
Jan 10 14:32:51.620: INFO: Pod "pod-projected-configmaps-029a4e15-cfad-4cec-b5a9-c96f787a46dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05282777s
Jan 10 14:32:53.634: INFO: Pod "pod-projected-configmaps-029a4e15-cfad-4cec-b5a9-c96f787a46dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067564267s
Jan 10 14:32:55.645: INFO: Pod "pod-projected-configmaps-029a4e15-cfad-4cec-b5a9-c96f787a46dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077729375s
STEP: Saw pod success
Jan 10 14:32:55.645: INFO: Pod "pod-projected-configmaps-029a4e15-cfad-4cec-b5a9-c96f787a46dd" satisfied condition "success or failure"
Jan 10 14:32:55.649: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-029a4e15-cfad-4cec-b5a9-c96f787a46dd container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 14:32:55.708: INFO: Waiting for pod pod-projected-configmaps-029a4e15-cfad-4cec-b5a9-c96f787a46dd to disappear
Jan 10 14:32:55.714: INFO: Pod pod-projected-configmaps-029a4e15-cfad-4cec-b5a9-c96f787a46dd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:32:55.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9735" for this suite.
Jan 10 14:33:01.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:33:02.182: INFO: namespace projected-9735 deletion completed in 6.232760567s

• [SLOW TEST:14.755 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:33:02.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan 10 14:33:02.424: INFO: Waiting up to 5m0s for pod "var-expansion-caf789d6-4b27-4ee4-8f19-8aff4f743dd3" in namespace "var-expansion-2245" to be "success or failure"
Jan 10 14:33:02.436: INFO: Pod "var-expansion-caf789d6-4b27-4ee4-8f19-8aff4f743dd3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.148475ms
Jan 10 14:33:04.455: INFO: Pod "var-expansion-caf789d6-4b27-4ee4-8f19-8aff4f743dd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030691177s
Jan 10 14:33:06.765: INFO: Pod "var-expansion-caf789d6-4b27-4ee4-8f19-8aff4f743dd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340993512s
Jan 10 14:33:08.778: INFO: Pod "var-expansion-caf789d6-4b27-4ee4-8f19-8aff4f743dd3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.353585226s
Jan 10 14:33:10.788: INFO: Pod "var-expansion-caf789d6-4b27-4ee4-8f19-8aff4f743dd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.363488112s
STEP: Saw pod success
Jan 10 14:33:10.788: INFO: Pod "var-expansion-caf789d6-4b27-4ee4-8f19-8aff4f743dd3" satisfied condition "success or failure"
Jan 10 14:33:10.796: INFO: Trying to get logs from node iruya-node pod var-expansion-caf789d6-4b27-4ee4-8f19-8aff4f743dd3 container dapi-container: 
STEP: delete the pod
Jan 10 14:33:10.982: INFO: Waiting for pod var-expansion-caf789d6-4b27-4ee4-8f19-8aff4f743dd3 to disappear
Jan 10 14:33:10.989: INFO: Pod var-expansion-caf789d6-4b27-4ee4-8f19-8aff4f743dd3 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:33:10.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2245" for this suite.
Jan 10 14:33:17.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:33:17.323: INFO: namespace var-expansion-2245 deletion completed in 6.32507312s

• [SLOW TEST:15.139 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
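The env composition tested above relies on `$(VAR)` references in one variable's `value` being expanded from previously defined variables. A simplified sketch of that substitution (the real kubelet expansion, including `$$` escaping, is more involved — this is an assumption-laden approximation):

```python
import re

def expand(value, env):
    # Replace $(NAME) with env[NAME]; unknown references are left untouched,
    # loosely matching Kubernetes behavior (simplified: no $$ escaping).
    return re.sub(r'\$\(([A-Za-z_][A-Za-z0-9_.-]*)\)',
                  lambda m: env.get(m.group(1), m.group(0)), value)

env = {"FOO": "foo-value"}
env["BAR"] = expand("$(FOO);;$(MISSING)", env)  # later vars see earlier ones
```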
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:33:17.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 10 14:33:25.621: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:33:25.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6614" for this suite.
Jan 10 14:33:31.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:33:32.045: INFO: namespace container-runtime-6614 deletion completed in 6.226141234s

• [SLOW TEST:14.721 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
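The termination-message check above boils down to: the container writes to its `terminationMessagePath` (here as a non-root user, at a non-default path), and the kubelet surfaces that file's contents in the container status — hence the `Expected: &{DONE}` line. A local file-based analogy (not the kubelet's code path):

```python
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as d:
    # Stand-in for a custom terminationMessagePath (the default is
    # /dev/termination-log inside the container).
    msg_path = pathlib.Path(d) / "termination-log"
    msg_path.write_text("DONE")            # what the test container does on exit
    status_message = msg_path.read_text()  # what the kubelet would report
```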
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:33:32.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 10 14:33:40.802: INFO: Successfully updated pod "labelsupdatef9d0370f-54c5-494d-9756-b0585aab27b6"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:33:42.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3729" for this suite.
Jan 10 14:34:04.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:34:05.043: INFO: namespace downward-api-3729 deletion completed in 22.132891538s

• [SLOW TEST:32.997 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:34:05.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 10 14:34:13.820: INFO: Successfully updated pod "pod-update-activedeadlineseconds-780bf2f6-c769-4501-ad9e-b979428025b7"
Jan 10 14:34:13.821: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-780bf2f6-c769-4501-ad9e-b979428025b7" in namespace "pods-5709" to be "terminated due to deadline exceeded"
Jan 10 14:34:13.843: INFO: Pod "pod-update-activedeadlineseconds-780bf2f6-c769-4501-ad9e-b979428025b7": Phase="Running", Reason="", readiness=true. Elapsed: 21.430886ms
Jan 10 14:34:15.862: INFO: Pod "pod-update-activedeadlineseconds-780bf2f6-c769-4501-ad9e-b979428025b7": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.040501197s
Jan 10 14:34:15.862: INFO: Pod "pod-update-activedeadlineseconds-780bf2f6-c769-4501-ad9e-b979428025b7" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:34:15.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5709" for this suite.
Jan 10 14:34:21.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:34:22.055: INFO: namespace pods-5709 deletion completed in 6.180935596s

• [SLOW TEST:17.012 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
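For readers following along: the spec above exercises the pod's `activeDeadlineSeconds` field. A minimal manifest reproducing the behavior might look like the sketch below; the pod name matches the log, but the image and the deadline value are assumptions (the test actually creates the pod first and then patches a short deadline in, which is the "updating the pod" step above).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds   # name pattern from the log; suffix omitted
spec:
  activeDeadlineSeconds: 5                 # assumption: the test patches in a deadline of a few seconds
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1            # assumption: any long-running image works here
```

Once the deadline elapses, the kubelet terminates the pod and it transitions to `Phase="Failed"` with `Reason="DeadlineExceeded"`, which is exactly the transition visible in the two poll lines above (Running at +21ms, Failed/DeadlineExceeded at +2.04s).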
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:34:22.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan 10 14:34:22.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7445'
Jan 10 14:34:22.832: INFO: stderr: ""
Jan 10 14:34:22.833: INFO: stdout: "pod/pause created\n"
Jan 10 14:34:22.833: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 10 14:34:22.834: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7445" to be "running and ready"
Jan 10 14:34:22.865: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 31.105396ms
Jan 10 14:34:24.877: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043487031s
Jan 10 14:34:26.892: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058138141s
Jan 10 14:34:28.904: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069750688s
Jan 10 14:34:30.929: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.094832866s
Jan 10 14:34:30.929: INFO: Pod "pause" satisfied condition "running and ready"
Jan 10 14:34:30.929: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 10 14:34:30.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7445'
Jan 10 14:34:31.138: INFO: stderr: ""
Jan 10 14:34:31.138: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 10 14:34:31.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7445'
Jan 10 14:34:31.229: INFO: stderr: ""
Jan 10 14:34:31.229: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 10 14:34:31.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7445'
Jan 10 14:34:31.360: INFO: stderr: ""
Jan 10 14:34:31.360: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 10 14:34:31.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7445'
Jan 10 14:34:31.449: INFO: stderr: ""
Jan 10 14:34:31.449: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan 10 14:34:31.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7445'
Jan 10 14:34:31.597: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 14:34:31.597: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 10 14:34:31.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7445'
Jan 10 14:34:31.784: INFO: stderr: "No resources found.\n"
Jan 10 14:34:31.784: INFO: stdout: ""
Jan 10 14:34:31.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7445 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 10 14:34:31.916: INFO: stderr: ""
Jan 10 14:34:31.916: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:34:31.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7445" for this suite.
Jan 10 14:34:37.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:34:38.143: INFO: namespace kubectl-7445 deletion completed in 6.216742623s

• [SLOW TEST:16.088 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
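The kubectl commands in this spec have a declarative equivalent. The sketch below shows the pod as it looks after the add step; the image is an assumption (the log only shows the manifest being piped via `create -f -`), while the pod name and label key/value come straight from the commands above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    testing-label: testing-label-value   # added by: kubectl label pods pause testing-label=testing-label-value
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1          # assumption: pause image tag is not shown in the log
```

The removal step uses the trailing-dash form, `kubectl label pods pause testing-label-`, which deletes the key; the second `get pod pause -L testing-label` then shows an empty TESTING-LABEL column, as seen in the stdout above.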
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:34:38.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0110 14:35:18.302118       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 10 14:35:18.302: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:35:18.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6794" for this suite.
Jan 10 14:35:31.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:35:35.630: INFO: namespace gc-6794 deletion completed in 17.321641344s

• [SLOW TEST:57.486 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
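The "delete options say so" in this spec's name refers to the deletion propagation policy sent with the DELETE request for the replication controller. A sketch of the request body, under the assumption that the test uses the standard `DeleteOptions` object:

```yaml
# DeleteOptions body for: DELETE /api/v1/namespaces/<ns>/replicationcontrollers/<name>
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan   # leave the RC's pods running instead of cascading the delete
```

With `Orphan`, the garbage collector removes the owner reference from the pods rather than deleting them, which is why the test then waits 30 seconds (the STEP above) to confirm the pods were not mistakenly collected.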
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:35:35.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 14:35:37.049: INFO: Waiting up to 5m0s for pod "downwardapi-volume-059cbeda-be0e-4fe9-8e9a-b4f12b863f94" in namespace "projected-6967" to be "success or failure"
Jan 10 14:35:37.568: INFO: Pod "downwardapi-volume-059cbeda-be0e-4fe9-8e9a-b4f12b863f94": Phase="Pending", Reason="", readiness=false. Elapsed: 519.195909ms
Jan 10 14:35:39.667: INFO: Pod "downwardapi-volume-059cbeda-be0e-4fe9-8e9a-b4f12b863f94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.617554428s
Jan 10 14:35:41.683: INFO: Pod "downwardapi-volume-059cbeda-be0e-4fe9-8e9a-b4f12b863f94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.633865661s
Jan 10 14:35:43.699: INFO: Pod "downwardapi-volume-059cbeda-be0e-4fe9-8e9a-b4f12b863f94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649907833s
Jan 10 14:35:45.717: INFO: Pod "downwardapi-volume-059cbeda-be0e-4fe9-8e9a-b4f12b863f94": Phase="Pending", Reason="", readiness=false. Elapsed: 8.667489618s
Jan 10 14:35:47.727: INFO: Pod "downwardapi-volume-059cbeda-be0e-4fe9-8e9a-b4f12b863f94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.677984136s
STEP: Saw pod success
Jan 10 14:35:47.727: INFO: Pod "downwardapi-volume-059cbeda-be0e-4fe9-8e9a-b4f12b863f94" satisfied condition "success or failure"
Jan 10 14:35:47.733: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-059cbeda-be0e-4fe9-8e9a-b4f12b863f94 container client-container: 
STEP: delete the pod
Jan 10 14:35:47.895: INFO: Waiting for pod downwardapi-volume-059cbeda-be0e-4fe9-8e9a-b4f12b863f94 to disappear
Jan 10 14:35:47.903: INFO: Pod downwardapi-volume-059cbeda-be0e-4fe9-8e9a-b4f12b863f94 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:35:47.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6967" for this suite.
Jan 10 14:35:54.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:35:54.644: INFO: namespace projected-6967 deletion completed in 6.173871202s

• [SLOW TEST:19.013 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
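"Podname only" means the projected downward API volume exposes a single file backed by `metadata.name`. A sketch of the pod under test; the container name `client-container` appears in the log above, but the image, mount path, and file name are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod   # the test's pod carries a random UID suffix
spec:
  containers:
  - name: client-container       # container name from the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumption: image not shown in the log
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname        # "podname only": a single file containing metadata.name
            fieldRef:
              fieldPath: metadata.name
```

The container reads back `/etc/podinfo/podname` and exits, which is why the pod is polled for "success or failure" rather than "running and ready".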
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:35:54.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-jqbg
STEP: Creating a pod to test atomic-volume-subpath
Jan 10 14:35:54.785: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jqbg" in namespace "subpath-3212" to be "success or failure"
Jan 10 14:35:54.795: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Pending", Reason="", readiness=false. Elapsed: 9.116037ms
Jan 10 14:35:56.801: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015862489s
Jan 10 14:35:58.814: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028205587s
Jan 10 14:36:00.833: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047221894s
Jan 10 14:36:02.846: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Running", Reason="", readiness=true. Elapsed: 8.060174274s
Jan 10 14:36:04.860: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Running", Reason="", readiness=true. Elapsed: 10.074300513s
Jan 10 14:36:06.874: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Running", Reason="", readiness=true. Elapsed: 12.088844643s
Jan 10 14:36:09.085: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Running", Reason="", readiness=true. Elapsed: 14.299291631s
Jan 10 14:36:11.095: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Running", Reason="", readiness=true. Elapsed: 16.310031057s
Jan 10 14:36:13.106: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Running", Reason="", readiness=true. Elapsed: 18.320550661s
Jan 10 14:36:15.116: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Running", Reason="", readiness=true. Elapsed: 20.330737968s
Jan 10 14:36:17.125: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Running", Reason="", readiness=true. Elapsed: 22.339379506s
Jan 10 14:36:19.140: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Running", Reason="", readiness=true. Elapsed: 24.354246704s
Jan 10 14:36:21.157: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Running", Reason="", readiness=true. Elapsed: 26.371130226s
Jan 10 14:36:23.170: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Running", Reason="", readiness=true. Elapsed: 28.3843643s
Jan 10 14:36:25.183: INFO: Pod "pod-subpath-test-configmap-jqbg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.397822104s
STEP: Saw pod success
Jan 10 14:36:25.184: INFO: Pod "pod-subpath-test-configmap-jqbg" satisfied condition "success or failure"
Jan 10 14:36:25.190: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-jqbg container test-container-subpath-configmap-jqbg: 
STEP: delete the pod
Jan 10 14:36:25.275: INFO: Waiting for pod pod-subpath-test-configmap-jqbg to disappear
Jan 10 14:36:25.306: INFO: Pod pod-subpath-test-configmap-jqbg no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jqbg
Jan 10 14:36:25.306: INFO: Deleting pod "pod-subpath-test-configmap-jqbg" in namespace "subpath-3212"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:36:25.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3212" for this suite.
Jan 10 14:36:31.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:36:31.471: INFO: namespace subpath-3212 deletion completed in 6.153441709s

• [SLOW TEST:36.826 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
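The "atomic writer" volumes here are the ones the kubelet updates via atomic symlink swaps (configMap, secret, downwardAPI, projected); this spec checks that a `subPath` mount of a configMap key stays readable across those updates. A sketch of the pod shape, with names trimmed of the test's random suffix; the image and key names are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap   # the test appends a random suffix, e.g. -jqbg
spec:
  containers:
  - name: test-container-subpath-configmap   # container name pattern from the log
    image: busybox                           # assumption: image not shown in the log
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/configmap-file
      subPath: configmap-file                # mount a single key, not the whole volume
  volumes:
  - name: test-volume
    configMap:
      name: configmap-subpath-test           # illustrative configMap name
```

Note the pod stays `Running` for roughly 30 seconds in the poll lines above: the test container repeatedly reads the subPath file while the volume contents are rewritten, then exits successfully.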
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:36:31.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 14:36:31.623: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 10 14:36:36.645: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 10 14:36:40.667: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 10 14:36:42.694: INFO: Creating deployment "test-rollover-deployment"
Jan 10 14:36:42.727: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 10 14:36:44.744: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 10 14:36:44.754: INFO: Ensure that both replica sets have 1 created replica
Jan 10 14:36:44.760: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 10 14:36:44.770: INFO: Updating deployment test-rollover-deployment
Jan 10 14:36:44.770: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 10 14:36:46.787: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 10 14:36:46.795: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 10 14:36:46.802: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 14:36:46.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:36:48.832: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 14:36:48.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:36:50.827: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 14:36:50.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:36:52.815: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 14:36:52.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:36:54.829: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 14:36:54.829: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263813, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:36:56.817: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 14:36:56.818: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263813, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:36:58.829: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 14:36:58.829: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263813, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:37:00.908: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 14:37:00.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263813, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:37:02.854: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 14:37:02.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263813, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263802, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:37:04.825: INFO: 
Jan 10 14:37:04.825: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 10 14:37:04.848: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-102,SelfLink:/apis/apps/v1/namespaces/deployment-102/deployments/test-rollover-deployment,UID:a5457711-4842-47de-b3f7-a9e54a8fab7f,ResourceVersion:20038041,Generation:2,CreationTimestamp:2020-01-10 14:36:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-10 14:36:42 +0000 UTC 2020-01-10 14:36:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-10 14:37:03 +0000 UTC 2020-01-10 14:36:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 10 14:37:04.860: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-102,SelfLink:/apis/apps/v1/namespaces/deployment-102/replicasets/test-rollover-deployment-854595fc44,UID:512d2cda-66ba-4c0e-b3f7-6140f54764c7,ResourceVersion:20038030,Generation:2,CreationTimestamp:2020-01-10 14:36:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a5457711-4842-47de-b3f7-a9e54a8fab7f 0xc001838c27 0xc001838c28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 10 14:37:04.860: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 10 14:37:04.861: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-102,SelfLink:/apis/apps/v1/namespaces/deployment-102/replicasets/test-rollover-controller,UID:0eac04bc-61b9-46c8-98b8-c0e82e091975,ResourceVersion:20038039,Generation:2,CreationTimestamp:2020-01-10 14:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a5457711-4842-47de-b3f7-a9e54a8fab7f 0xc001838b57 0xc001838b58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 10 14:37:04.861: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-102,SelfLink:/apis/apps/v1/namespaces/deployment-102/replicasets/test-rollover-deployment-9b8b997cf,UID:fe8d5389-67c0-40eb-bf38-96a4ca0811db,ResourceVersion:20037996,Generation:2,CreationTimestamp:2020-01-10 14:36:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a5457711-4842-47de-b3f7-a9e54a8fab7f 0xc001838cf0 0xc001838cf1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 10 14:37:04.908: INFO: Pod "test-rollover-deployment-854595fc44-tdvk9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-tdvk9,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-102,SelfLink:/api/v1/namespaces/deployment-102/pods/test-rollover-deployment-854595fc44-tdvk9,UID:0017c7c0-bddb-4f69-832b-ea0a8c0c0a42,ResourceVersion:20038014,Generation:0,CreationTimestamp:2020-01-10 14:36:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 512d2cda-66ba-4c0e-b3f7-6140f54764c7 0xc003281377 0xc003281378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-n56dj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n56dj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-n56dj true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0032813f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003281410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:36:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:36:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:36:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:36:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-10 14:36:45 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-10 14:36:53 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://5503d7836541f1fd7a6db38db9558006d5bfd364d300cf830790ef41ca70fdd1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:37:04.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-102" for this suite.
Jan 10 14:37:10.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:37:11.416: INFO: namespace deployment-102 deletion completed in 6.496638623s

• [SLOW TEST:39.944 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
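Editor's note: the Deployment dumped above rolls over with `MaxUnavailable:0, MaxSurge:1` and `Replicas:*1`, which is why the ReplicaSets carry the `deployment.kubernetes.io/max-replicas: 2` annotation. A minimal sketch of that arithmetic (the `rollout_bounds` helper is hypothetical, not part of the e2e framework):

```shell
# For a RollingUpdate Deployment, the rollout may run at most
# replicas + maxSurge pods, and must keep at least
# replicas - maxUnavailable pods available.
rollout_bounds() {
  local replicas=$1 max_surge=$2 max_unavailable=$3
  echo "$((replicas + max_surge)) $((replicas - max_unavailable))"
}

# Values from the Deployment spec in the log above: replicas=1, maxSurge=1,
# maxUnavailable=0 -> at most 2 pods, at least 1 available.
rollout_bounds 1 1 0
```

This matches the log: the new ReplicaSet `test-rollover-deployment-854595fc44` scales to 1 before the old ones (`test-rollover-controller`, `test-rollover-deployment-9b8b997cf`) are scaled to 0.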
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:37:11.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 10 14:37:11.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4382'
Jan 10 14:37:12.151: INFO: stderr: ""
Jan 10 14:37:12.152: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 10 14:37:12.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4382'
Jan 10 14:37:12.274: INFO: stderr: ""
Jan 10 14:37:12.275: INFO: stdout: "update-demo-nautilus-6m2vl update-demo-nautilus-bq49d "
Jan 10 14:37:12.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m2vl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4382'
Jan 10 14:37:12.558: INFO: stderr: ""
Jan 10 14:37:12.559: INFO: stdout: ""
Jan 10 14:37:12.559: INFO: update-demo-nautilus-6m2vl is created but not running
Jan 10 14:37:17.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4382'
Jan 10 14:37:18.568: INFO: stderr: ""
Jan 10 14:37:18.568: INFO: stdout: "update-demo-nautilus-6m2vl update-demo-nautilus-bq49d "
Jan 10 14:37:18.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m2vl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4382'
Jan 10 14:37:19.173: INFO: stderr: ""
Jan 10 14:37:19.173: INFO: stdout: ""
Jan 10 14:37:19.173: INFO: update-demo-nautilus-6m2vl is created but not running
Jan 10 14:37:24.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4382'
Jan 10 14:37:24.352: INFO: stderr: ""
Jan 10 14:37:24.352: INFO: stdout: "update-demo-nautilus-6m2vl update-demo-nautilus-bq49d "
Jan 10 14:37:24.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m2vl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4382'
Jan 10 14:37:24.436: INFO: stderr: ""
Jan 10 14:37:24.436: INFO: stdout: "true"
Jan 10 14:37:24.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m2vl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4382'
Jan 10 14:37:24.566: INFO: stderr: ""
Jan 10 14:37:24.566: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 14:37:24.566: INFO: validating pod update-demo-nautilus-6m2vl
Jan 10 14:37:24.617: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 14:37:24.617: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 14:37:24.617: INFO: update-demo-nautilus-6m2vl is verified up and running
Jan 10 14:37:24.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bq49d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4382'
Jan 10 14:37:24.703: INFO: stderr: ""
Jan 10 14:37:24.704: INFO: stdout: "true"
Jan 10 14:37:24.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bq49d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4382'
Jan 10 14:37:24.810: INFO: stderr: ""
Jan 10 14:37:24.810: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 14:37:24.810: INFO: validating pod update-demo-nautilus-bq49d
Jan 10 14:37:24.820: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 14:37:24.821: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 14:37:24.821: INFO: update-demo-nautilus-bq49d is verified up and running
STEP: using delete to clean up resources
Jan 10 14:37:24.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4382'
Jan 10 14:37:24.920: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 14:37:24.920: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 10 14:37:24.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4382'
Jan 10 14:37:25.244: INFO: stderr: "No resources found.\n"
Jan 10 14:37:25.244: INFO: stdout: ""
Jan 10 14:37:25.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4382 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 10 14:37:25.346: INFO: stderr: ""
Jan 10 14:37:25.346: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:37:25.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4382" for this suite.
Jan 10 14:37:48.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:37:49.082: INFO: namespace kubectl-4382 deletion completed in 23.726870216s

• [SLOW TEST:37.664 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
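Editor's note: the Update Demo test above repeatedly runs a `kubectl get pods ... -o template` command that prints `true` once the container is running, retrying every few seconds until it succeeds. A sketch of that polling pattern (the `poll_until_true` helper is hypothetical, not the framework's actual implementation):

```shell
# Retry a readiness command until it prints "true" or attempts run out.
# In the log, the readiness command is a kubectl go-template query such as:
#   kubectl get pods POD -o template \
#     --template='{{if ...}}{{range .status.containerStatuses}}...true...{{end}}{{end}}'
poll_until_true() {
  local attempts=$1; shift
  local i
  for i in $(seq 1 "$attempts"); do
    if [ "$("$@")" = "true" ]; then
      echo "ready"
      return 0
    fi
    sleep 1
  done
  echo "timed out"
  return 1
}

# Stand-in for the kubectl query so the sketch is self-contained.
poll_until_true 3 echo true
```

In the log this shows up as the `is created but not running` lines at 14:37:12 and 14:37:19, followed by `stdout: "true"` at 14:37:24.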
SSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:37:49.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9224.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9224.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9224.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9224.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9224.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9224.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 10 14:38:01.233: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9224/dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7: the server could not find the requested resource (get pods dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7)
Jan 10 14:38:01.237: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9224/dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7: the server could not find the requested resource (get pods dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7)
Jan 10 14:38:01.243: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9224.svc.cluster.local from pod dns-9224/dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7: the server could not find the requested resource (get pods dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7)
Jan 10 14:38:01.248: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9224/dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7: the server could not find the requested resource (get pods dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7)
Jan 10 14:38:01.265: INFO: Unable to read jessie_udp@PodARecord from pod dns-9224/dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7: the server could not find the requested resource (get pods dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7)
Jan 10 14:38:01.278: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9224/dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7: the server could not find the requested resource (get pods dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7)
Jan 10 14:38:01.278: INFO: Lookups using dns-9224/dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9224.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 10 14:38:06.358: INFO: DNS probes using dns-9224/dns-test-e94574b9-a8ca-4d86-9a59-c7f000362bd7 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:38:06.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9224" for this suite.
Jan 10 14:38:12.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:38:12.635: INFO: namespace dns-9224 deletion completed in 6.136688783s

• [SLOW TEST:23.553 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
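Editor's note: the wheezy/jessie probe scripts above build a pod A record by replacing the dots in the pod IP with dashes (the `hostname -i | awk -F.` pipeline). A self-contained sketch of that derivation (the `pod_a_record` helper is hypothetical; the name format follows the probe commands in the log):

```shell
# Derive a pod A record: 10.44.0.2 in namespace dns-9224 becomes
# 10-44-0-2.dns-9224.pod.cluster.local, which is what the probes
# resolve with `dig ... A` over UDP and TCP.
pod_a_record() {
  echo "$1" | awk -F. -v ns="$2" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}

pod_a_record 10.44.0.2 dns-9224
```

The early `Unable to read ...@PodARecord` lines are the expected retries while the probe pod starts; the test passes once the 14:38:06 probe succeeds.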
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:38:12.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan 10 14:38:12.760: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:38:12.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6705" for this suite.
Jan 10 14:38:18.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:38:19.058: INFO: namespace kubectl-6705 deletion completed in 6.192359769s

• [SLOW TEST:6.422 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:38:19.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-9456ffae-7507-4d3f-8eec-b5105df1d316
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:38:19.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7446" for this suite.
Jan 10 14:38:25.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:38:25.402: INFO: namespace configmap-7446 deletion completed in 6.17088801s

• [SLOW TEST:6.344 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
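Editor's note: the ConfigMap test above passes because the apiserver rejects a ConfigMap whose data map contains an empty key; valid keys consist of alphanumerics, `-`, `_`, and `.`. A local sketch of that check (the `valid_configmap_key` helper and the exact character class are assumptions mirroring the apiserver's key validation, not framework code):

```shell
# Approximate the apiserver's ConfigMap data-key validation:
# keys must be non-empty and match [-._a-zA-Z0-9]+.
valid_configmap_key() {
  printf '%s' "$1" | grep -Eq '^[-._a-zA-Z0-9]+$' && echo valid || echo invalid
}

valid_configmap_key "game.properties"   # a typical key
valid_configmap_key ""                  # the empty key the test submits
```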
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:38:25.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 10 14:38:25.507: INFO: Waiting up to 5m0s for pod "pod-342a4f8d-561d-453b-849c-44cff4408cbd" in namespace "emptydir-966" to be "success or failure"
Jan 10 14:38:25.576: INFO: Pod "pod-342a4f8d-561d-453b-849c-44cff4408cbd": Phase="Pending", Reason="", readiness=false. Elapsed: 69.238882ms
Jan 10 14:38:27.594: INFO: Pod "pod-342a4f8d-561d-453b-849c-44cff4408cbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086770208s
Jan 10 14:38:29.606: INFO: Pod "pod-342a4f8d-561d-453b-849c-44cff4408cbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099371213s
Jan 10 14:38:31.620: INFO: Pod "pod-342a4f8d-561d-453b-849c-44cff4408cbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113443016s
Jan 10 14:38:33.632: INFO: Pod "pod-342a4f8d-561d-453b-849c-44cff4408cbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125372607s
STEP: Saw pod success
Jan 10 14:38:33.632: INFO: Pod "pod-342a4f8d-561d-453b-849c-44cff4408cbd" satisfied condition "success or failure"
Jan 10 14:38:33.638: INFO: Trying to get logs from node iruya-node pod pod-342a4f8d-561d-453b-849c-44cff4408cbd container test-container: 
STEP: delete the pod
Jan 10 14:38:33.723: INFO: Waiting for pod pod-342a4f8d-561d-453b-849c-44cff4408cbd to disappear
Jan 10 14:38:33.738: INFO: Pod pod-342a4f8d-561d-453b-849c-44cff4408cbd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:38:33.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-966" for this suite.
Jan 10 14:38:39.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:38:39.982: INFO: namespace emptydir-966 deletion completed in 6.231035008s

• [SLOW TEST:14.578 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:38:39.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 14:38:40.145: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 10 14:38:40.252: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 10 14:38:45.264: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 10 14:38:49.279: INFO: Creating deployment "test-rolling-update-deployment"
Jan 10 14:38:49.289: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 10 14:38:49.304: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 10 14:38:51.325: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 10 14:38:51.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:38:53.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:38:55.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:38:57.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714263929, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 14:38:59.341: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 10 14:38:59.357: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-3278,SelfLink:/apis/apps/v1/namespaces/deployment-3278/deployments/test-rolling-update-deployment,UID:991fa5d5-cb6a-4a4e-83cb-95534c63a4b6,ResourceVersion:20038402,Generation:1,CreationTimestamp:2020-01-10 14:38:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-10 14:38:49 +0000 UTC 2020-01-10 14:38:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-10 14:38:58 +0000 UTC 2020-01-10 14:38:49 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 10 14:38:59.363: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-3278,SelfLink:/apis/apps/v1/namespaces/deployment-3278/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:8e66ebfe-c9ab-4b61-a828-f97e2c47c9c3,ResourceVersion:20038393,Generation:1,CreationTimestamp:2020-01-10 14:38:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 991fa5d5-cb6a-4a4e-83cb-95534c63a4b6 0xc0016e6177 0xc0016e6178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 10 14:38:59.363: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 10 14:38:59.364: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-3278,SelfLink:/apis/apps/v1/namespaces/deployment-3278/replicasets/test-rolling-update-controller,UID:2ac8f090-e1aa-46f5-a039-39e65c4814d1,ResourceVersion:20038401,Generation:2,CreationTimestamp:2020-01-10 14:38:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 991fa5d5-cb6a-4a4e-83cb-95534c63a4b6 0xc0016e60a7 0xc0016e60a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 10 14:38:59.370: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-f54tr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-f54tr,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-3278,SelfLink:/api/v1/namespaces/deployment-3278/pods/test-rolling-update-deployment-79f6b9d75c-f54tr,UID:fc3964d5-b2c2-4e6a-b565-b4dbff044ec2,ResourceVersion:20038392,Generation:0,CreationTimestamp:2020-01-10 14:38:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 8e66ebfe-c9ab-4b61-a828-f97e2c47c9c3 0xc0016e6fc7 0xc0016e6fc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgp6n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgp6n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-bgp6n true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016e7040} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0016e7060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:38:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:38:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:38:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:38:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-10 14:38:49 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-10 14:38:57 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://e8c91f604da3854a26608187c3202ea9eb8dde26a5d4bebd70ff93daeed2d3e7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:38:59.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3278" for this suite.
Jan 10 14:39:05.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:39:05.534: INFO: namespace deployment-3278 deletion completed in 6.154114482s

• [SLOW TEST:25.552 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:39:05.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-d4993a48-33e1-46eb-9241-ca52e0e9dbbf
STEP: Creating a pod to test consume secrets
Jan 10 14:39:05.714: INFO: Waiting up to 5m0s for pod "pod-secrets-31ec9ec8-ee3c-46f4-93d1-82e014d0c2f6" in namespace "secrets-2628" to be "success or failure"
Jan 10 14:39:05.728: INFO: Pod "pod-secrets-31ec9ec8-ee3c-46f4-93d1-82e014d0c2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.775215ms
Jan 10 14:39:07.747: INFO: Pod "pod-secrets-31ec9ec8-ee3c-46f4-93d1-82e014d0c2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032651333s
Jan 10 14:39:09.759: INFO: Pod "pod-secrets-31ec9ec8-ee3c-46f4-93d1-82e014d0c2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044012004s
Jan 10 14:39:11.770: INFO: Pod "pod-secrets-31ec9ec8-ee3c-46f4-93d1-82e014d0c2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054933815s
Jan 10 14:39:13.800: INFO: Pod "pod-secrets-31ec9ec8-ee3c-46f4-93d1-82e014d0c2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085143118s
Jan 10 14:39:15.822: INFO: Pod "pod-secrets-31ec9ec8-ee3c-46f4-93d1-82e014d0c2f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.107626208s
STEP: Saw pod success
Jan 10 14:39:15.823: INFO: Pod "pod-secrets-31ec9ec8-ee3c-46f4-93d1-82e014d0c2f6" satisfied condition "success or failure"
Jan 10 14:39:15.839: INFO: Trying to get logs from node iruya-node pod pod-secrets-31ec9ec8-ee3c-46f4-93d1-82e014d0c2f6 container secret-volume-test: 
STEP: delete the pod
Jan 10 14:39:15.996: INFO: Waiting for pod pod-secrets-31ec9ec8-ee3c-46f4-93d1-82e014d0c2f6 to disappear
Jan 10 14:39:16.005: INFO: Pod pod-secrets-31ec9ec8-ee3c-46f4-93d1-82e014d0c2f6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:39:16.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2628" for this suite.
Jan 10 14:39:22.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:39:22.307: INFO: namespace secrets-2628 deletion completed in 6.293548399s

• [SLOW TEST:16.773 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:39:22.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-279698ef-082d-4606-820e-8021e7f953f1
STEP: Creating secret with name secret-projected-all-test-volume-ada8a434-3b1f-4cb4-b46a-0914c6990f03
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 10 14:39:22.533: INFO: Waiting up to 5m0s for pod "projected-volume-24fa40f2-d21e-4d6c-856b-04aad180abe4" in namespace "projected-9244" to be "success or failure"
Jan 10 14:39:22.539: INFO: Pod "projected-volume-24fa40f2-d21e-4d6c-856b-04aad180abe4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.488978ms
Jan 10 14:39:24.560: INFO: Pod "projected-volume-24fa40f2-d21e-4d6c-856b-04aad180abe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026271457s
Jan 10 14:39:26.571: INFO: Pod "projected-volume-24fa40f2-d21e-4d6c-856b-04aad180abe4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037241885s
Jan 10 14:39:28.589: INFO: Pod "projected-volume-24fa40f2-d21e-4d6c-856b-04aad180abe4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055887315s
Jan 10 14:39:30.606: INFO: Pod "projected-volume-24fa40f2-d21e-4d6c-856b-04aad180abe4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072324708s
STEP: Saw pod success
Jan 10 14:39:30.606: INFO: Pod "projected-volume-24fa40f2-d21e-4d6c-856b-04aad180abe4" satisfied condition "success or failure"
Jan 10 14:39:30.614: INFO: Trying to get logs from node iruya-node pod projected-volume-24fa40f2-d21e-4d6c-856b-04aad180abe4 container projected-all-volume-test: 
STEP: delete the pod
Jan 10 14:39:30.732: INFO: Waiting for pod projected-volume-24fa40f2-d21e-4d6c-856b-04aad180abe4 to disappear
Jan 10 14:39:30.786: INFO: Pod projected-volume-24fa40f2-d21e-4d6c-856b-04aad180abe4 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:39:30.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9244" for this suite.
Jan 10 14:39:36.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:39:36.988: INFO: namespace projected-9244 deletion completed in 6.195489868s

• [SLOW TEST:14.679 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:39:36.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 10 14:39:37.742: INFO: Pod name wrapped-volume-race-66a8dfac-5b12-4eaf-97ac-e129afc4e1ba: Found 0 pods out of 5
Jan 10 14:39:42.807: INFO: Pod name wrapped-volume-race-66a8dfac-5b12-4eaf-97ac-e129afc4e1ba: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-66a8dfac-5b12-4eaf-97ac-e129afc4e1ba in namespace emptydir-wrapper-6616, will wait for the garbage collector to delete the pods
Jan 10 14:40:10.938: INFO: Deleting ReplicationController wrapped-volume-race-66a8dfac-5b12-4eaf-97ac-e129afc4e1ba took: 17.689512ms
Jan 10 14:40:11.240: INFO: Terminating ReplicationController wrapped-volume-race-66a8dfac-5b12-4eaf-97ac-e129afc4e1ba pods took: 301.470276ms
STEP: Creating RC which spawns configmap-volume pods
Jan 10 14:40:57.119: INFO: Pod name wrapped-volume-race-520fa2bf-7d56-4647-b199-9c2e549fbf3d: Found 0 pods out of 5
Jan 10 14:41:02.195: INFO: Pod name wrapped-volume-race-520fa2bf-7d56-4647-b199-9c2e549fbf3d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-520fa2bf-7d56-4647-b199-9c2e549fbf3d in namespace emptydir-wrapper-6616, will wait for the garbage collector to delete the pods
Jan 10 14:41:30.299: INFO: Deleting ReplicationController wrapped-volume-race-520fa2bf-7d56-4647-b199-9c2e549fbf3d took: 19.227006ms
Jan 10 14:41:30.700: INFO: Terminating ReplicationController wrapped-volume-race-520fa2bf-7d56-4647-b199-9c2e549fbf3d pods took: 401.752298ms
STEP: Creating RC which spawns configmap-volume pods
Jan 10 14:42:17.089: INFO: Pod name wrapped-volume-race-6f1fe878-44f7-42b1-b352-b69c43d9f59a: Found 0 pods out of 5
Jan 10 14:42:22.105: INFO: Pod name wrapped-volume-race-6f1fe878-44f7-42b1-b352-b69c43d9f59a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6f1fe878-44f7-42b1-b352-b69c43d9f59a in namespace emptydir-wrapper-6616, will wait for the garbage collector to delete the pods
Jan 10 14:42:52.260: INFO: Deleting ReplicationController wrapped-volume-race-6f1fe878-44f7-42b1-b352-b69c43d9f59a took: 20.091687ms
Jan 10 14:42:52.761: INFO: Terminating ReplicationController wrapped-volume-race-6f1fe878-44f7-42b1-b352-b69c43d9f59a pods took: 501.533223ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:43:47.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6616" for this suite.
Jan 10 14:44:00.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:44:00.169: INFO: namespace emptydir-wrapper-6616 deletion completed in 12.186803016s

• [SLOW TEST:263.180 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:44:00.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan 10 14:44:08.502: INFO: Pod pod-hostip-4105ca24-0fc9-4348-8e08-1135c42f8811 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:44:08.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2327" for this suite.
Jan 10 14:44:32.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:44:32.695: INFO: namespace pods-2327 deletion completed in 24.182980679s

• [SLOW TEST:32.525 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:44:32.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-8fee4bb9-d282-4927-92c7-aa81172e23e5
STEP: Creating a pod to test consume configMaps
Jan 10 14:44:32.876: INFO: Waiting up to 5m0s for pod "pod-configmaps-faa6eca3-6447-40c7-a5ac-72b608e0743f" in namespace "configmap-5025" to be "success or failure"
Jan 10 14:44:32.894: INFO: Pod "pod-configmaps-faa6eca3-6447-40c7-a5ac-72b608e0743f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.472338ms
Jan 10 14:44:34.906: INFO: Pod "pod-configmaps-faa6eca3-6447-40c7-a5ac-72b608e0743f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030164932s
Jan 10 14:44:36.918: INFO: Pod "pod-configmaps-faa6eca3-6447-40c7-a5ac-72b608e0743f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041628988s
Jan 10 14:44:38.928: INFO: Pod "pod-configmaps-faa6eca3-6447-40c7-a5ac-72b608e0743f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052279522s
Jan 10 14:44:40.938: INFO: Pod "pod-configmaps-faa6eca3-6447-40c7-a5ac-72b608e0743f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06206714s
Jan 10 14:44:42.948: INFO: Pod "pod-configmaps-faa6eca3-6447-40c7-a5ac-72b608e0743f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071836036s
STEP: Saw pod success
Jan 10 14:44:42.948: INFO: Pod "pod-configmaps-faa6eca3-6447-40c7-a5ac-72b608e0743f" satisfied condition "success or failure"
Jan 10 14:44:42.951: INFO: Trying to get logs from node iruya-node pod pod-configmaps-faa6eca3-6447-40c7-a5ac-72b608e0743f container configmap-volume-test: 
STEP: delete the pod
Jan 10 14:44:43.010: INFO: Waiting for pod pod-configmaps-faa6eca3-6447-40c7-a5ac-72b608e0743f to disappear
Jan 10 14:44:43.019: INFO: Pod pod-configmaps-faa6eca3-6447-40c7-a5ac-72b608e0743f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:44:43.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5025" for this suite.
Jan 10 14:44:49.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:44:49.181: INFO: namespace configmap-5025 deletion completed in 6.154834619s

• [SLOW TEST:16.483 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
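The ConfigMap spec verifies that each key of the ConfigMap shows up as a file under the volume's mount path inside the container. A simplified sketch of that projection (assuming a plain write per key; the real kubelet uses atomic symlink swaps so that updates appear all-or-nothing):

```python
import os
import tempfile

def project_configmap(data, mount_path):
    """Write each ConfigMap key as a file named after the key under
    mount_path, the way a configMap volume appears to the container."""
    os.makedirs(mount_path, exist_ok=True)
    for key, value in data.items():
        with open(os.path.join(mount_path, key), "w") as f:
            f.write(value)

# Hypothetical key/value; the test's container simply cats the file
# and the framework compares the logs against the expected value.
mount = tempfile.mkdtemp()
project_configmap({"data-1": "value-1"}, mount)
with open(os.path.join(mount, "data-1")) as f:
    print(f.read())  # value-1
```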
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:44:49.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan 10 14:44:49.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 10 14:44:51.539: INFO: stderr: ""
Jan 10 14:44:51.539: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:44:51.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5328" for this suite.
Jan 10 14:44:57.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:44:57.726: INFO: namespace kubectl-5328 deletion completed in 6.180571787s

• [SLOW TEST:8.544 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
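The captured stdout above is wrapped in ANSI color escapes (`\x1b[0;32m…\x1b[0m`), which is why the raw log looks noisy. Stripping the SGR sequences recovers the plain text the test validates against:

```python
import re

ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")  # SGR color sequences only

def strip_ansi(s):
    """Remove ANSI color codes like the \\x1b[0;32m...\\x1b[0m pairs
    visible in the captured kubectl cluster-info stdout."""
    return ANSI_RE.sub("", s)

stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.24.4.57:6443\x1b[0m")
print(strip_ansi(stdout))
# Kubernetes master is running at https://172.24.4.57:6443
```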
SSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:44:57.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 10 14:45:18.022: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5627 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:45:18.022: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:45:18.120693       8 log.go:172] (0xc0029d4bb0) (0xc00268b9a0) Create stream
I0110 14:45:18.120857       8 log.go:172] (0xc0029d4bb0) (0xc00268b9a0) Stream added, broadcasting: 1
I0110 14:45:18.130192       8 log.go:172] (0xc0029d4bb0) Reply frame received for 1
I0110 14:45:18.130438       8 log.go:172] (0xc0029d4bb0) (0xc0005db400) Create stream
I0110 14:45:18.130473       8 log.go:172] (0xc0029d4bb0) (0xc0005db400) Stream added, broadcasting: 3
I0110 14:45:18.135167       8 log.go:172] (0xc0029d4bb0) Reply frame received for 3
I0110 14:45:18.135300       8 log.go:172] (0xc0029d4bb0) (0xc00318a0a0) Create stream
I0110 14:45:18.135360       8 log.go:172] (0xc0029d4bb0) (0xc00318a0a0) Stream added, broadcasting: 5
I0110 14:45:18.137070       8 log.go:172] (0xc0029d4bb0) Reply frame received for 5
I0110 14:45:18.253758       8 log.go:172] (0xc0029d4bb0) Data frame received for 3
I0110 14:45:18.253786       8 log.go:172] (0xc0005db400) (3) Data frame handling
I0110 14:45:18.253798       8 log.go:172] (0xc0005db400) (3) Data frame sent
I0110 14:45:18.455926       8 log.go:172] (0xc0029d4bb0) (0xc0005db400) Stream removed, broadcasting: 3
I0110 14:45:18.457424       8 log.go:172] (0xc0029d4bb0) Data frame received for 1
I0110 14:45:18.457642       8 log.go:172] (0xc00268b9a0) (1) Data frame handling
I0110 14:45:18.457715       8 log.go:172] (0xc00268b9a0) (1) Data frame sent
I0110 14:45:18.457875       8 log.go:172] (0xc0029d4bb0) (0xc00268b9a0) Stream removed, broadcasting: 1
I0110 14:45:18.469546       8 log.go:172] (0xc0029d4bb0) (0xc00318a0a0) Stream removed, broadcasting: 5
I0110 14:45:18.469821       8 log.go:172] (0xc0029d4bb0) (0xc00268b9a0) Stream removed, broadcasting: 1
I0110 14:45:18.469859       8 log.go:172] (0xc0029d4bb0) (0xc0005db400) Stream removed, broadcasting: 3
I0110 14:45:18.469871       8 log.go:172] (0xc0029d4bb0) (0xc00318a0a0) Stream removed, broadcasting: 5
Jan 10 14:45:18.470: INFO: Exec stderr: ""
Jan 10 14:45:18.470: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5627 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:45:18.470: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:45:18.475039       8 log.go:172] (0xc0029d4bb0) Go away received
I0110 14:45:18.779294       8 log.go:172] (0xc000625970) (0xc001a7c0a0) Create stream
I0110 14:45:18.779585       8 log.go:172] (0xc000625970) (0xc001a7c0a0) Stream added, broadcasting: 1
I0110 14:45:18.852784       8 log.go:172] (0xc000625970) Reply frame received for 1
I0110 14:45:18.853192       8 log.go:172] (0xc000625970) (0xc0000fe820) Create stream
I0110 14:45:18.853234       8 log.go:172] (0xc000625970) (0xc0000fe820) Stream added, broadcasting: 3
I0110 14:45:18.876755       8 log.go:172] (0xc000625970) Reply frame received for 3
I0110 14:45:18.876859       8 log.go:172] (0xc000625970) (0xc000a6a000) Create stream
I0110 14:45:18.876892       8 log.go:172] (0xc000625970) (0xc000a6a000) Stream added, broadcasting: 5
I0110 14:45:18.882431       8 log.go:172] (0xc000625970) Reply frame received for 5
I0110 14:45:19.007211       8 log.go:172] (0xc000625970) Data frame received for 3
I0110 14:45:19.007380       8 log.go:172] (0xc0000fe820) (3) Data frame handling
I0110 14:45:19.007451       8 log.go:172] (0xc0000fe820) (3) Data frame sent
I0110 14:45:19.133518       8 log.go:172] (0xc000625970) Data frame received for 1
I0110 14:45:19.133873       8 log.go:172] (0xc001a7c0a0) (1) Data frame handling
I0110 14:45:19.133957       8 log.go:172] (0xc001a7c0a0) (1) Data frame sent
I0110 14:45:19.134238       8 log.go:172] (0xc000625970) (0xc001a7c0a0) Stream removed, broadcasting: 1
I0110 14:45:19.134524       8 log.go:172] (0xc000625970) (0xc0000fe820) Stream removed, broadcasting: 3
I0110 14:45:19.134850       8 log.go:172] (0xc000625970) (0xc000a6a000) Stream removed, broadcasting: 5
I0110 14:45:19.134922       8 log.go:172] (0xc000625970) Go away received
I0110 14:45:19.135121       8 log.go:172] (0xc000625970) (0xc001a7c0a0) Stream removed, broadcasting: 1
I0110 14:45:19.135203       8 log.go:172] (0xc000625970) (0xc0000fe820) Stream removed, broadcasting: 3
I0110 14:45:19.135235       8 log.go:172] (0xc000625970) (0xc000a6a000) Stream removed, broadcasting: 5
Jan 10 14:45:19.135: INFO: Exec stderr: ""
Jan 10 14:45:19.135: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5627 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:45:19.136: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:45:19.209847       8 log.go:172] (0xc00098e790) (0xc0000fee60) Create stream
I0110 14:45:19.210060       8 log.go:172] (0xc00098e790) (0xc0000fee60) Stream added, broadcasting: 1
I0110 14:45:19.215509       8 log.go:172] (0xc00098e790) Reply frame received for 1
I0110 14:45:19.215645       8 log.go:172] (0xc00098e790) (0xc001f82000) Create stream
I0110 14:45:19.215664       8 log.go:172] (0xc00098e790) (0xc001f82000) Stream added, broadcasting: 3
I0110 14:45:19.219243       8 log.go:172] (0xc00098e790) Reply frame received for 3
I0110 14:45:19.219288       8 log.go:172] (0xc00098e790) (0xc001a7c280) Create stream
I0110 14:45:19.219299       8 log.go:172] (0xc00098e790) (0xc001a7c280) Stream added, broadcasting: 5
I0110 14:45:19.220724       8 log.go:172] (0xc00098e790) Reply frame received for 5
I0110 14:45:19.307015       8 log.go:172] (0xc00098e790) Data frame received for 3
I0110 14:45:19.307171       8 log.go:172] (0xc001f82000) (3) Data frame handling
I0110 14:45:19.307238       8 log.go:172] (0xc001f82000) (3) Data frame sent
I0110 14:45:19.445570       8 log.go:172] (0xc00098e790) (0xc001f82000) Stream removed, broadcasting: 3
I0110 14:45:19.445853       8 log.go:172] (0xc00098e790) Data frame received for 1
I0110 14:45:19.445904       8 log.go:172] (0xc0000fee60) (1) Data frame handling
I0110 14:45:19.445947       8 log.go:172] (0xc0000fee60) (1) Data frame sent
I0110 14:45:19.446070       8 log.go:172] (0xc00098e790) (0xc0000fee60) Stream removed, broadcasting: 1
I0110 14:45:19.446987       8 log.go:172] (0xc00098e790) (0xc001a7c280) Stream removed, broadcasting: 5
I0110 14:45:19.447390       8 log.go:172] (0xc00098e790) Go away received
I0110 14:45:19.447639       8 log.go:172] (0xc00098e790) (0xc0000fee60) Stream removed, broadcasting: 1
I0110 14:45:19.447790       8 log.go:172] (0xc00098e790) (0xc001f82000) Stream removed, broadcasting: 3
I0110 14:45:19.447826       8 log.go:172] (0xc00098e790) (0xc001a7c280) Stream removed, broadcasting: 5
Jan 10 14:45:19.447: INFO: Exec stderr: ""
Jan 10 14:45:19.448: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5627 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:45:19.448: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:45:19.499976       8 log.go:172] (0xc0028300b0) (0xc000a6a280) Create stream
I0110 14:45:19.500105       8 log.go:172] (0xc0028300b0) (0xc000a6a280) Stream added, broadcasting: 1
I0110 14:45:19.507608       8 log.go:172] (0xc0028300b0) Reply frame received for 1
I0110 14:45:19.507695       8 log.go:172] (0xc0028300b0) (0xc000a6a320) Create stream
I0110 14:45:19.507707       8 log.go:172] (0xc0028300b0) (0xc000a6a320) Stream added, broadcasting: 3
I0110 14:45:19.509077       8 log.go:172] (0xc0028300b0) Reply frame received for 3
I0110 14:45:19.509117       8 log.go:172] (0xc0028300b0) (0xc003022000) Create stream
I0110 14:45:19.509125       8 log.go:172] (0xc0028300b0) (0xc003022000) Stream added, broadcasting: 5
I0110 14:45:19.510453       8 log.go:172] (0xc0028300b0) Reply frame received for 5
I0110 14:45:19.612816       8 log.go:172] (0xc0028300b0) Data frame received for 3
I0110 14:45:19.612955       8 log.go:172] (0xc000a6a320) (3) Data frame handling
I0110 14:45:19.613015       8 log.go:172] (0xc000a6a320) (3) Data frame sent
I0110 14:45:19.736250       8 log.go:172] (0xc0028300b0) Data frame received for 1
I0110 14:45:19.736410       8 log.go:172] (0xc0028300b0) (0xc003022000) Stream removed, broadcasting: 5
I0110 14:45:19.736503       8 log.go:172] (0xc000a6a280) (1) Data frame handling
I0110 14:45:19.736528       8 log.go:172] (0xc000a6a280) (1) Data frame sent
I0110 14:45:19.736573       8 log.go:172] (0xc0028300b0) (0xc000a6a320) Stream removed, broadcasting: 3
I0110 14:45:19.736615       8 log.go:172] (0xc0028300b0) (0xc000a6a280) Stream removed, broadcasting: 1
I0110 14:45:19.736644       8 log.go:172] (0xc0028300b0) Go away received
I0110 14:45:19.737118       8 log.go:172] (0xc0028300b0) (0xc000a6a280) Stream removed, broadcasting: 1
I0110 14:45:19.737144       8 log.go:172] (0xc0028300b0) (0xc000a6a320) Stream removed, broadcasting: 3
I0110 14:45:19.737155       8 log.go:172] (0xc0028300b0) (0xc003022000) Stream removed, broadcasting: 5
Jan 10 14:45:19.737: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 10 14:45:19.737: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5627 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:45:19.737: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:45:19.814984       8 log.go:172] (0xc002830dc0) (0xc000a6a8c0) Create stream
I0110 14:45:19.815121       8 log.go:172] (0xc002830dc0) (0xc000a6a8c0) Stream added, broadcasting: 1
I0110 14:45:19.822670       8 log.go:172] (0xc002830dc0) Reply frame received for 1
I0110 14:45:19.822903       8 log.go:172] (0xc002830dc0) (0xc001f820a0) Create stream
I0110 14:45:19.822938       8 log.go:172] (0xc002830dc0) (0xc001f820a0) Stream added, broadcasting: 3
I0110 14:45:19.824620       8 log.go:172] (0xc002830dc0) Reply frame received for 3
I0110 14:45:19.824658       8 log.go:172] (0xc002830dc0) (0xc000a6aaa0) Create stream
I0110 14:45:19.824670       8 log.go:172] (0xc002830dc0) (0xc000a6aaa0) Stream added, broadcasting: 5
I0110 14:45:19.826325       8 log.go:172] (0xc002830dc0) Reply frame received for 5
I0110 14:45:19.936600       8 log.go:172] (0xc002830dc0) Data frame received for 3
I0110 14:45:19.936879       8 log.go:172] (0xc001f820a0) (3) Data frame handling
I0110 14:45:19.936926       8 log.go:172] (0xc001f820a0) (3) Data frame sent
I0110 14:45:20.073327       8 log.go:172] (0xc002830dc0) Data frame received for 1
I0110 14:45:20.073581       8 log.go:172] (0xc002830dc0) (0xc001f820a0) Stream removed, broadcasting: 3
I0110 14:45:20.073671       8 log.go:172] (0xc000a6a8c0) (1) Data frame handling
I0110 14:45:20.073695       8 log.go:172] (0xc000a6a8c0) (1) Data frame sent
I0110 14:45:20.073743       8 log.go:172] (0xc002830dc0) (0xc000a6aaa0) Stream removed, broadcasting: 5
I0110 14:45:20.073791       8 log.go:172] (0xc002830dc0) (0xc000a6a8c0) Stream removed, broadcasting: 1
I0110 14:45:20.073804       8 log.go:172] (0xc002830dc0) Go away received
I0110 14:45:20.074416       8 log.go:172] (0xc002830dc0) (0xc000a6a8c0) Stream removed, broadcasting: 1
I0110 14:45:20.074444       8 log.go:172] (0xc002830dc0) (0xc001f820a0) Stream removed, broadcasting: 3
I0110 14:45:20.074457       8 log.go:172] (0xc002830dc0) (0xc000a6aaa0) Stream removed, broadcasting: 5
Jan 10 14:45:20.074: INFO: Exec stderr: ""
Jan 10 14:45:20.074: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5627 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:45:20.074: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:45:20.140461       8 log.go:172] (0xc0017b5130) (0xc001f82820) Create stream
I0110 14:45:20.140518       8 log.go:172] (0xc0017b5130) (0xc001f82820) Stream added, broadcasting: 1
I0110 14:45:20.144963       8 log.go:172] (0xc0017b5130) Reply frame received for 1
I0110 14:45:20.144995       8 log.go:172] (0xc0017b5130) (0xc0000ff0e0) Create stream
I0110 14:45:20.145011       8 log.go:172] (0xc0017b5130) (0xc0000ff0e0) Stream added, broadcasting: 3
I0110 14:45:20.147828       8 log.go:172] (0xc0017b5130) Reply frame received for 3
I0110 14:45:20.147858       8 log.go:172] (0xc0017b5130) (0xc000a6ac80) Create stream
I0110 14:45:20.147869       8 log.go:172] (0xc0017b5130) (0xc000a6ac80) Stream added, broadcasting: 5
I0110 14:45:20.149394       8 log.go:172] (0xc0017b5130) Reply frame received for 5
I0110 14:45:20.259456       8 log.go:172] (0xc0017b5130) Data frame received for 3
I0110 14:45:20.259585       8 log.go:172] (0xc0000ff0e0) (3) Data frame handling
I0110 14:45:20.259642       8 log.go:172] (0xc0000ff0e0) (3) Data frame sent
I0110 14:45:20.387790       8 log.go:172] (0xc0017b5130) (0xc0000ff0e0) Stream removed, broadcasting: 3
I0110 14:45:20.388003       8 log.go:172] (0xc0017b5130) Data frame received for 1
I0110 14:45:20.388049       8 log.go:172] (0xc0017b5130) (0xc000a6ac80) Stream removed, broadcasting: 5
I0110 14:45:20.388093       8 log.go:172] (0xc001f82820) (1) Data frame handling
I0110 14:45:20.388133       8 log.go:172] (0xc001f82820) (1) Data frame sent
I0110 14:45:20.388153       8 log.go:172] (0xc0017b5130) (0xc001f82820) Stream removed, broadcasting: 1
I0110 14:45:20.388180       8 log.go:172] (0xc0017b5130) Go away received
I0110 14:45:20.388569       8 log.go:172] (0xc0017b5130) (0xc001f82820) Stream removed, broadcasting: 1
I0110 14:45:20.388603       8 log.go:172] (0xc0017b5130) (0xc0000ff0e0) Stream removed, broadcasting: 3
I0110 14:45:20.388624       8 log.go:172] (0xc0017b5130) (0xc000a6ac80) Stream removed, broadcasting: 5
Jan 10 14:45:20.388: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 10 14:45:20.388: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5627 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:45:20.388: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:45:20.466861       8 log.go:172] (0xc00098f8c0) (0xc00188e500) Create stream
I0110 14:45:20.467005       8 log.go:172] (0xc00098f8c0) (0xc00188e500) Stream added, broadcasting: 1
I0110 14:45:20.477087       8 log.go:172] (0xc00098f8c0) Reply frame received for 1
I0110 14:45:20.477125       8 log.go:172] (0xc00098f8c0) (0xc00188e780) Create stream
I0110 14:45:20.477146       8 log.go:172] (0xc00098f8c0) (0xc00188e780) Stream added, broadcasting: 3
I0110 14:45:20.479063       8 log.go:172] (0xc00098f8c0) Reply frame received for 3
I0110 14:45:20.479112       8 log.go:172] (0xc00098f8c0) (0xc001e40000) Create stream
I0110 14:45:20.479125       8 log.go:172] (0xc00098f8c0) (0xc001e40000) Stream added, broadcasting: 5
I0110 14:45:20.480545       8 log.go:172] (0xc00098f8c0) Reply frame received for 5
I0110 14:45:20.635852       8 log.go:172] (0xc00098f8c0) Data frame received for 3
I0110 14:45:20.636144       8 log.go:172] (0xc00188e780) (3) Data frame handling
I0110 14:45:20.636339       8 log.go:172] (0xc00188e780) (3) Data frame sent
I0110 14:45:20.747882       8 log.go:172] (0xc00098f8c0) Data frame received for 1
I0110 14:45:20.748034       8 log.go:172] (0xc00098f8c0) (0xc00188e780) Stream removed, broadcasting: 3
I0110 14:45:20.748113       8 log.go:172] (0xc00188e500) (1) Data frame handling
I0110 14:45:20.748174       8 log.go:172] (0xc00188e500) (1) Data frame sent
I0110 14:45:20.748259       8 log.go:172] (0xc00098f8c0) (0xc001e40000) Stream removed, broadcasting: 5
I0110 14:45:20.748331       8 log.go:172] (0xc00098f8c0) (0xc00188e500) Stream removed, broadcasting: 1
I0110 14:45:20.748379       8 log.go:172] (0xc00098f8c0) Go away received
I0110 14:45:20.748665       8 log.go:172] (0xc00098f8c0) (0xc00188e500) Stream removed, broadcasting: 1
I0110 14:45:20.748686       8 log.go:172] (0xc00098f8c0) (0xc00188e780) Stream removed, broadcasting: 3
I0110 14:45:20.748702       8 log.go:172] (0xc00098f8c0) (0xc001e40000) Stream removed, broadcasting: 5
Jan 10 14:45:20.748: INFO: Exec stderr: ""
Jan 10 14:45:20.748: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5627 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:45:20.748: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:45:20.827927       8 log.go:172] (0xc00304ac60) (0xc001a7c820) Create stream
I0110 14:45:20.828040       8 log.go:172] (0xc00304ac60) (0xc001a7c820) Stream added, broadcasting: 1
I0110 14:45:20.835444       8 log.go:172] (0xc00304ac60) Reply frame received for 1
I0110 14:45:20.835510       8 log.go:172] (0xc00304ac60) (0xc001a7c8c0) Create stream
I0110 14:45:20.835523       8 log.go:172] (0xc00304ac60) (0xc001a7c8c0) Stream added, broadcasting: 3
I0110 14:45:20.837348       8 log.go:172] (0xc00304ac60) Reply frame received for 3
I0110 14:45:20.837417       8 log.go:172] (0xc00304ac60) (0xc001e401e0) Create stream
I0110 14:45:20.837430       8 log.go:172] (0xc00304ac60) (0xc001e401e0) Stream added, broadcasting: 5
I0110 14:45:20.839403       8 log.go:172] (0xc00304ac60) Reply frame received for 5
I0110 14:45:20.970090       8 log.go:172] (0xc00304ac60) Data frame received for 3
I0110 14:45:20.970176       8 log.go:172] (0xc001a7c8c0) (3) Data frame handling
I0110 14:45:20.970204       8 log.go:172] (0xc001a7c8c0) (3) Data frame sent
I0110 14:45:21.126410       8 log.go:172] (0xc00304ac60) Data frame received for 1
I0110 14:45:21.126681       8 log.go:172] (0xc00304ac60) (0xc001a7c8c0) Stream removed, broadcasting: 3
I0110 14:45:21.126743       8 log.go:172] (0xc001a7c820) (1) Data frame handling
I0110 14:45:21.126761       8 log.go:172] (0xc001a7c820) (1) Data frame sent
I0110 14:45:21.126979       8 log.go:172] (0xc00304ac60) (0xc001e401e0) Stream removed, broadcasting: 5
I0110 14:45:21.127521       8 log.go:172] (0xc00304ac60) (0xc001a7c820) Stream removed, broadcasting: 1
I0110 14:45:21.127644       8 log.go:172] (0xc00304ac60) Go away received
I0110 14:45:21.128440       8 log.go:172] (0xc00304ac60) (0xc001a7c820) Stream removed, broadcasting: 1
I0110 14:45:21.128490       8 log.go:172] (0xc00304ac60) (0xc001a7c8c0) Stream removed, broadcasting: 3
I0110 14:45:21.128634       8 log.go:172] (0xc00304ac60) (0xc001e401e0) Stream removed, broadcasting: 5
Jan 10 14:45:21.128: INFO: Exec stderr: ""
Jan 10 14:45:21.128: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5627 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:45:21.129: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:45:21.224968       8 log.go:172] (0xc0017b5c30) (0xc001f82be0) Create stream
I0110 14:45:21.225309       8 log.go:172] (0xc0017b5c30) (0xc001f82be0) Stream added, broadcasting: 1
I0110 14:45:21.239929       8 log.go:172] (0xc0017b5c30) Reply frame received for 1
I0110 14:45:21.240084       8 log.go:172] (0xc0017b5c30) (0xc001e403c0) Create stream
I0110 14:45:21.240099       8 log.go:172] (0xc0017b5c30) (0xc001e403c0) Stream added, broadcasting: 3
I0110 14:45:21.241716       8 log.go:172] (0xc0017b5c30) Reply frame received for 3
I0110 14:45:21.241781       8 log.go:172] (0xc0017b5c30) (0xc0030220a0) Create stream
I0110 14:45:21.241796       8 log.go:172] (0xc0017b5c30) (0xc0030220a0) Stream added, broadcasting: 5
I0110 14:45:21.244995       8 log.go:172] (0xc0017b5c30) Reply frame received for 5
I0110 14:45:21.402739       8 log.go:172] (0xc0017b5c30) Data frame received for 3
I0110 14:45:21.402869       8 log.go:172] (0xc001e403c0) (3) Data frame handling
I0110 14:45:21.402955       8 log.go:172] (0xc001e403c0) (3) Data frame sent
I0110 14:45:21.553675       8 log.go:172] (0xc0017b5c30) (0xc001e403c0) Stream removed, broadcasting: 3
I0110 14:45:21.554366       8 log.go:172] (0xc0017b5c30) Data frame received for 1
I0110 14:45:21.554933       8 log.go:172] (0xc0017b5c30) (0xc0030220a0) Stream removed, broadcasting: 5
I0110 14:45:21.555174       8 log.go:172] (0xc001f82be0) (1) Data frame handling
I0110 14:45:21.555247       8 log.go:172] (0xc001f82be0) (1) Data frame sent
I0110 14:45:21.555308       8 log.go:172] (0xc0017b5c30) (0xc001f82be0) Stream removed, broadcasting: 1
I0110 14:45:21.555426       8 log.go:172] (0xc0017b5c30) Go away received
I0110 14:45:21.556281       8 log.go:172] (0xc0017b5c30) (0xc001f82be0) Stream removed, broadcasting: 1
I0110 14:45:21.556350       8 log.go:172] (0xc0017b5c30) (0xc001e403c0) Stream removed, broadcasting: 3
I0110 14:45:21.556364       8 log.go:172] (0xc0017b5c30) (0xc0030220a0) Stream removed, broadcasting: 5
Jan 10 14:45:21.556: INFO: Exec stderr: ""
Jan 10 14:45:21.556: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5627 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:45:21.556: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:45:21.623117       8 log.go:172] (0xc001a6c790) (0xc001f82fa0) Create stream
I0110 14:45:21.623344       8 log.go:172] (0xc001a6c790) (0xc001f82fa0) Stream added, broadcasting: 1
I0110 14:45:21.636608       8 log.go:172] (0xc001a6c790) Reply frame received for 1
I0110 14:45:21.636712       8 log.go:172] (0xc001a6c790) (0xc001f83040) Create stream
I0110 14:45:21.636723       8 log.go:172] (0xc001a6c790) (0xc001f83040) Stream added, broadcasting: 3
I0110 14:45:21.638030       8 log.go:172] (0xc001a6c790) Reply frame received for 3
I0110 14:45:21.638056       8 log.go:172] (0xc001a6c790) (0xc001a7c960) Create stream
I0110 14:45:21.638083       8 log.go:172] (0xc001a6c790) (0xc001a7c960) Stream added, broadcasting: 5
I0110 14:45:21.642175       8 log.go:172] (0xc001a6c790) Reply frame received for 5
I0110 14:45:21.731570       8 log.go:172] (0xc001a6c790) Data frame received for 3
I0110 14:45:21.731779       8 log.go:172] (0xc001f83040) (3) Data frame handling
I0110 14:45:21.731841       8 log.go:172] (0xc001f83040) (3) Data frame sent
I0110 14:45:21.916647       8 log.go:172] (0xc001a6c790) (0xc001f83040) Stream removed, broadcasting: 3
I0110 14:45:21.917154       8 log.go:172] (0xc001a6c790) Data frame received for 1
I0110 14:45:21.917538       8 log.go:172] (0xc001a6c790) (0xc001a7c960) Stream removed, broadcasting: 5
I0110 14:45:21.917824       8 log.go:172] (0xc001f82fa0) (1) Data frame handling
I0110 14:45:21.917982       8 log.go:172] (0xc001f82fa0) (1) Data frame sent
I0110 14:45:21.918131       8 log.go:172] (0xc001a6c790) (0xc001f82fa0) Stream removed, broadcasting: 1
I0110 14:45:21.918193       8 log.go:172] (0xc001a6c790) Go away received
I0110 14:45:21.919215       8 log.go:172] (0xc001a6c790) (0xc001f82fa0) Stream removed, broadcasting: 1
I0110 14:45:21.919295       8 log.go:172] (0xc001a6c790) (0xc001f83040) Stream removed, broadcasting: 3
I0110 14:45:21.919314       8 log.go:172] (0xc001a6c790) (0xc001a7c960) Stream removed, broadcasting: 5
Jan 10 14:45:21.919: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:45:21.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5627" for this suite.
Jan 10 14:46:07.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:46:08.088: INFO: namespace e2e-kubelet-etc-hosts-5627 deletion completed in 46.155190153s

• [SLOW TEST:70.361 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
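Each `cat /etc/hosts` exec above distinguishes a kubelet-managed hosts file from one supplied by the image or a volume mount. The kubelet prepends a recognizable header to files it generates; the check below assumes the marker string used by the v1.15-era test sources, so treat it as illustrative:

```python
# Header the kubelet writes to hosts files it manages; exact text is an
# assumption taken from the v1.15 e2e sources, not guaranteed stable.
KUBELET_MARKER = "# Kubernetes-managed hosts file."

def is_kubelet_managed(etc_hosts_content):
    """True if the file carries the kubelet's header, i.e. the kubelet
    generated it rather than the container image or a mounted volume."""
    return KUBELET_MARKER in etc_hosts_content

managed = KUBELET_MARKER + "\n127.0.0.1\tlocalhost\n10.44.0.1\ttest-pod\n"
original = "127.0.0.1\tlocalhost\n"
print(is_kubelet_managed(managed), is_kubelet_managed(original))  # True False
```

This is why the spec expects the marker for `hostNetwork=false` containers, but not for the container that mounts its own `/etc/hosts` or for the `hostNetwork=true` pod.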
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:46:08.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 10 14:46:08.172: INFO: Waiting up to 5m0s for pod "downward-api-c5af4a54-15f7-4bb1-b825-eaf239891f78" in namespace "downward-api-5730" to be "success or failure"
Jan 10 14:46:08.178: INFO: Pod "downward-api-c5af4a54-15f7-4bb1-b825-eaf239891f78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144442ms
Jan 10 14:46:10.190: INFO: Pod "downward-api-c5af4a54-15f7-4bb1-b825-eaf239891f78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017780137s
Jan 10 14:46:12.200: INFO: Pod "downward-api-c5af4a54-15f7-4bb1-b825-eaf239891f78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028424271s
Jan 10 14:46:14.217: INFO: Pod "downward-api-c5af4a54-15f7-4bb1-b825-eaf239891f78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045355853s
Jan 10 14:46:16.227: INFO: Pod "downward-api-c5af4a54-15f7-4bb1-b825-eaf239891f78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055574794s
STEP: Saw pod success
Jan 10 14:46:16.228: INFO: Pod "downward-api-c5af4a54-15f7-4bb1-b825-eaf239891f78" satisfied condition "success or failure"
Jan 10 14:46:16.233: INFO: Trying to get logs from node iruya-node pod downward-api-c5af4a54-15f7-4bb1-b825-eaf239891f78 container dapi-container: 
STEP: delete the pod
Jan 10 14:46:16.309: INFO: Waiting for pod downward-api-c5af4a54-15f7-4bb1-b825-eaf239891f78 to disappear
Jan 10 14:46:16.318: INFO: Pod downward-api-c5af4a54-15f7-4bb1-b825-eaf239891f78 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:46:16.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5730" for this suite.
Jan 10 14:46:22.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:46:22.428: INFO: namespace downward-api-5730 deletion completed in 6.103626633s

• [SLOW TEST:14.340 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
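The spec above injects the node's IP into the container via the downward API. A minimal pod sketch of the same mechanism (names and image are illustrative, not the test's generated spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # resolved by the kubelet at pod start
```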
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:46:22.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 10 14:46:22.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9480'
Jan 10 14:46:22.661: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 10 14:46:22.661: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 10 14:46:22.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-9480'
Jan 10 14:46:22.857: INFO: stderr: ""
Jan 10 14:46:22.857: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:46:22.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9480" for this suite.
Jan 10 14:46:44.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:46:45.009: INFO: namespace kubectl-9480 deletion completed in 22.143448011s

• [SLOW TEST:22.581 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
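The deprecation warning captured in the stderr above is worth noting: `kubectl run --generator=job/v1` was later removed. A non-deprecated way to express what the test creates (same image, `restartPolicy: OnFailure`) is `kubectl create job`, or declaratively as a manifest sketch:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure     # the property the spec asserts on
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
```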
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:46:45.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 10 14:46:45.169: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5377,SelfLink:/api/v1/namespaces/watch-5377/configmaps/e2e-watch-test-label-changed,UID:435f24c9-05ec-4ae8-bf0f-773536197a1c,ResourceVersion:20040116,Generation:0,CreationTimestamp:2020-01-10 14:46:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 10 14:46:45.169: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5377,SelfLink:/api/v1/namespaces/watch-5377/configmaps/e2e-watch-test-label-changed,UID:435f24c9-05ec-4ae8-bf0f-773536197a1c,ResourceVersion:20040117,Generation:0,CreationTimestamp:2020-01-10 14:46:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 10 14:46:45.170: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5377,SelfLink:/api/v1/namespaces/watch-5377/configmaps/e2e-watch-test-label-changed,UID:435f24c9-05ec-4ae8-bf0f-773536197a1c,ResourceVersion:20040118,Generation:0,CreationTimestamp:2020-01-10 14:46:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 10 14:46:55.256: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5377,SelfLink:/api/v1/namespaces/watch-5377/configmaps/e2e-watch-test-label-changed,UID:435f24c9-05ec-4ae8-bf0f-773536197a1c,ResourceVersion:20040133,Generation:0,CreationTimestamp:2020-01-10 14:46:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 10 14:46:55.257: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5377,SelfLink:/api/v1/namespaces/watch-5377/configmaps/e2e-watch-test-label-changed,UID:435f24c9-05ec-4ae8-bf0f-773536197a1c,ResourceVersion:20040134,Generation:0,CreationTimestamp:2020-01-10 14:46:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 10 14:46:55.257: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5377,SelfLink:/api/v1/namespaces/watch-5377/configmaps/e2e-watch-test-label-changed,UID:435f24c9-05ec-4ae8-bf0f-773536197a1c,ResourceVersion:20040135,Generation:0,CreationTimestamp:2020-01-10 14:46:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:46:55.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5377" for this suite.
Jan 10 14:47:01.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:47:01.602: INFO: namespace watch-5377 deletion completed in 6.329173708s

• [SLOW TEST:16.592 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
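The event sequence logged above (ADDED, MODIFIED, DELETED on label change; ADDED again on restore) is the defining behavior of a label-selector-scoped watch: when an object stops matching the selector, the watcher sees a synthetic DELETED, and when it matches again it sees ADDED. A toy model of that filtering (not client-go, just the semantics):

```python
# Toy model of how a label-selector watch reports an object that stops
# and later resumes matching, as the Watchers spec above exercises.

def selector_watch(updates, selector):
    """updates: the object's label map after each successive write.
    Yields the event type a watcher filtered by `selector` observes."""
    matched = False
    for labels in updates:
        matches = all(labels.get(k) == v for k, v in selector.items())
        if matches and not matched:
            yield "ADDED"
        elif matches and matched:
            yield "MODIFIED"
        elif not matches and matched:
            yield "DELETED"   # leaving the selector looks like deletion
        matched = matches

events = list(selector_watch(
    [
        {"watch-this-configmap": "label-changed-and-restored"},  # create
        {"watch-this-configmap": "label-changed-and-restored"},  # modify once
        {"watch-this-configmap": "other-value"},                 # change label
        {"watch-this-configmap": "other-value"},                 # modify (unseen)
        {"watch-this-configmap": "label-changed-and-restored"},  # restore label
    ],
    {"watch-this-configmap": "label-changed-and-restored"},
))
print(events)  # ['ADDED', 'MODIFIED', 'DELETED', 'ADDED']
```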
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:47:01.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1716
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 10 14:47:01.692: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 10 14:47:35.990: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1716 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:47:35.990: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:47:36.084837       8 log.go:172] (0xc0017b42c0) (0xc001114c80) Create stream
I0110 14:47:36.084914       8 log.go:172] (0xc0017b42c0) (0xc001114c80) Stream added, broadcasting: 1
I0110 14:47:36.095226       8 log.go:172] (0xc0017b42c0) Reply frame received for 1
I0110 14:47:36.095300       8 log.go:172] (0xc0017b42c0) (0xc001a685a0) Create stream
I0110 14:47:36.095327       8 log.go:172] (0xc0017b42c0) (0xc001a685a0) Stream added, broadcasting: 3
I0110 14:47:36.098199       8 log.go:172] (0xc0017b42c0) Reply frame received for 3
I0110 14:47:36.098265       8 log.go:172] (0xc0017b42c0) (0xc0030d45a0) Create stream
I0110 14:47:36.098294       8 log.go:172] (0xc0017b42c0) (0xc0030d45a0) Stream added, broadcasting: 5
I0110 14:47:36.102831       8 log.go:172] (0xc0017b42c0) Reply frame received for 5
I0110 14:47:37.259496       8 log.go:172] (0xc0017b42c0) Data frame received for 3
I0110 14:47:37.259618       8 log.go:172] (0xc001a685a0) (3) Data frame handling
I0110 14:47:37.259669       8 log.go:172] (0xc001a685a0) (3) Data frame sent
I0110 14:47:37.468229       8 log.go:172] (0xc0017b42c0) Data frame received for 1
I0110 14:47:37.468920       8 log.go:172] (0xc0017b42c0) (0xc001a685a0) Stream removed, broadcasting: 3
I0110 14:47:37.469093       8 log.go:172] (0xc001114c80) (1) Data frame handling
I0110 14:47:37.469179       8 log.go:172] (0xc0017b42c0) (0xc0030d45a0) Stream removed, broadcasting: 5
I0110 14:47:37.469260       8 log.go:172] (0xc001114c80) (1) Data frame sent
I0110 14:47:37.469354       8 log.go:172] (0xc0017b42c0) (0xc001114c80) Stream removed, broadcasting: 1
I0110 14:47:37.469403       8 log.go:172] (0xc0017b42c0) Go away received
I0110 14:47:37.469971       8 log.go:172] (0xc0017b42c0) (0xc001114c80) Stream removed, broadcasting: 1
I0110 14:47:37.470024       8 log.go:172] (0xc0017b42c0) (0xc001a685a0) Stream removed, broadcasting: 3
I0110 14:47:37.470058       8 log.go:172] (0xc0017b42c0) (0xc0030d45a0) Stream removed, broadcasting: 5
Jan 10 14:47:37.470: INFO: Found all expected endpoints: [netserver-0]
Jan 10 14:47:37.480: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1716 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:47:37.480: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:47:37.557827       8 log.go:172] (0xc0015f4840) (0xc001a68e60) Create stream
I0110 14:47:37.558049       8 log.go:172] (0xc0015f4840) (0xc001a68e60) Stream added, broadcasting: 1
I0110 14:47:37.567676       8 log.go:172] (0xc0015f4840) Reply frame received for 1
I0110 14:47:37.567771       8 log.go:172] (0xc0015f4840) (0xc0001b8140) Create stream
I0110 14:47:37.567798       8 log.go:172] (0xc0015f4840) (0xc0001b8140) Stream added, broadcasting: 3
I0110 14:47:37.569933       8 log.go:172] (0xc0015f4840) Reply frame received for 3
I0110 14:47:37.569991       8 log.go:172] (0xc0015f4840) (0xc001a68f00) Create stream
I0110 14:47:37.570015       8 log.go:172] (0xc0015f4840) (0xc001a68f00) Stream added, broadcasting: 5
I0110 14:47:37.572766       8 log.go:172] (0xc0015f4840) Reply frame received for 5
I0110 14:47:38.686462       8 log.go:172] (0xc0015f4840) Data frame received for 3
I0110 14:47:38.686720       8 log.go:172] (0xc0001b8140) (3) Data frame handling
I0110 14:47:38.686861       8 log.go:172] (0xc0001b8140) (3) Data frame sent
I0110 14:47:38.878671       8 log.go:172] (0xc0015f4840) Data frame received for 1
I0110 14:47:38.878905       8 log.go:172] (0xc001a68e60) (1) Data frame handling
I0110 14:47:38.879023       8 log.go:172] (0xc001a68e60) (1) Data frame sent
I0110 14:47:38.879104       8 log.go:172] (0xc0015f4840) (0xc001a68e60) Stream removed, broadcasting: 1
I0110 14:47:38.880358       8 log.go:172] (0xc0015f4840) (0xc0001b8140) Stream removed, broadcasting: 3
I0110 14:47:38.880464       8 log.go:172] (0xc0015f4840) (0xc001a68f00) Stream removed, broadcasting: 5
I0110 14:47:38.880722       8 log.go:172] (0xc0015f4840) Go away received
I0110 14:47:38.880945       8 log.go:172] (0xc0015f4840) (0xc001a68e60) Stream removed, broadcasting: 1
I0110 14:47:38.880970       8 log.go:172] (0xc0015f4840) (0xc0001b8140) Stream removed, broadcasting: 3
I0110 14:47:38.880994       8 log.go:172] (0xc0015f4840) (0xc001a68f00) Stream removed, broadcasting: 5
Jan 10 14:47:38.881: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:47:38.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1716" for this suite.
Jan 10 14:48:00.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:48:01.073: INFO: namespace pod-network-test-1716 deletion completed in 22.17941693s

• [SLOW TEST:59.471 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
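The probe the test runs inside the exec pod, `echo hostName | nc -w 1 -u <pod-ip> 8081`, expects the target netserver to answer with its hostname. A loopback sketch of that request/reply pattern (an ephemeral port and a one-shot server stand in for the real e2e netserver):

```python
# Simplified loopback version of the UDP hostname probe used above.
import socket
import threading

def serve_once(sock):
    # Answer a single "hostName" datagram with this host's name.
    data, addr = sock.recvfrom(1024)
    if data.strip() == b"hostName":
        sock.sendto(socket.gethostname().encode(), addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # ephemeral port instead of 8081
port = server.getsockname()[1]
t = threading.Thread(target=serve_once, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)
client.sendto(b"hostName", ("127.0.0.1", port))
reply, _ = client.recvfrom(1024)
t.join()
print(reply.decode())                   # the responder's hostname
```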
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:48:01.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 10 14:48:01.200: INFO: Waiting up to 5m0s for pod "pod-b2345a56-b26b-406c-b360-139fa191ec82" in namespace "emptydir-127" to be "success or failure"
Jan 10 14:48:01.277: INFO: Pod "pod-b2345a56-b26b-406c-b360-139fa191ec82": Phase="Pending", Reason="", readiness=false. Elapsed: 77.790909ms
Jan 10 14:48:03.340: INFO: Pod "pod-b2345a56-b26b-406c-b360-139fa191ec82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14073518s
Jan 10 14:48:05.348: INFO: Pod "pod-b2345a56-b26b-406c-b360-139fa191ec82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148767886s
Jan 10 14:48:07.357: INFO: Pod "pod-b2345a56-b26b-406c-b360-139fa191ec82": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15739509s
Jan 10 14:48:09.369: INFO: Pod "pod-b2345a56-b26b-406c-b360-139fa191ec82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.169272224s
STEP: Saw pod success
Jan 10 14:48:09.369: INFO: Pod "pod-b2345a56-b26b-406c-b360-139fa191ec82" satisfied condition "success or failure"
Jan 10 14:48:09.373: INFO: Trying to get logs from node iruya-node pod pod-b2345a56-b26b-406c-b360-139fa191ec82 container test-container: 
STEP: delete the pod
Jan 10 14:48:09.427: INFO: Waiting for pod pod-b2345a56-b26b-406c-b360-139fa191ec82 to disappear
Jan 10 14:48:09.437: INFO: Pod pod-b2345a56-b26b-406c-b360-139fa191ec82 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:48:09.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-127" for this suite.
Jan 10 14:48:15.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:48:15.651: INFO: namespace emptydir-127 deletion completed in 6.20351552s

• [SLOW TEST:14.576 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
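The "(non-root,0777,tmpfs)" spec name above encodes the pod shape: a non-root user writing to a memory-backed emptyDir with 0777 file mode checks. A hand-written sketch of that shape (the real test uses its own generated name and test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # illustrative; the test generates its own name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # the "non-root" part of the spec name
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0777 /mnt/test/f && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # tmpfs-backed, the "tmpfs" part of the spec name
```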
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:48:15.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 14:48:15.755: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e998c1a1-f698-4b30-8d51-6262d3c7319a" in namespace "downward-api-201" to be "success or failure"
Jan 10 14:48:15.853: INFO: Pod "downwardapi-volume-e998c1a1-f698-4b30-8d51-6262d3c7319a": Phase="Pending", Reason="", readiness=false. Elapsed: 98.598623ms
Jan 10 14:48:17.868: INFO: Pod "downwardapi-volume-e998c1a1-f698-4b30-8d51-6262d3c7319a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113550978s
Jan 10 14:48:19.889: INFO: Pod "downwardapi-volume-e998c1a1-f698-4b30-8d51-6262d3c7319a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134230675s
Jan 10 14:48:21.937: INFO: Pod "downwardapi-volume-e998c1a1-f698-4b30-8d51-6262d3c7319a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182620963s
Jan 10 14:48:23.963: INFO: Pod "downwardapi-volume-e998c1a1-f698-4b30-8d51-6262d3c7319a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.208297738s
STEP: Saw pod success
Jan 10 14:48:23.964: INFO: Pod "downwardapi-volume-e998c1a1-f698-4b30-8d51-6262d3c7319a" satisfied condition "success or failure"
Jan 10 14:48:23.976: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e998c1a1-f698-4b30-8d51-6262d3c7319a container client-container: 
STEP: delete the pod
Jan 10 14:48:24.218: INFO: Waiting for pod downwardapi-volume-e998c1a1-f698-4b30-8d51-6262d3c7319a to disappear
Jan 10 14:48:24.225: INFO: Pod downwardapi-volume-e998c1a1-f698-4b30-8d51-6262d3c7319a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:48:24.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-201" for this suite.
Jan 10 14:48:30.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:48:30.432: INFO: namespace downward-api-201 deletion completed in 6.197190636s

• [SLOW TEST:14.780 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
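Unlike the env-var variant earlier, this spec delivers the pod name through a downwardAPI *volume*, which the client container reads as a file. A minimal sketch of that wiring (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # materialized as the file's contents
```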
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:48:30.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 10 14:48:30.525: INFO: Waiting up to 5m0s for pod "pod-e0611fe0-92ee-4dfa-a9ad-c9788fa50092" in namespace "emptydir-6478" to be "success or failure"
Jan 10 14:48:30.612: INFO: Pod "pod-e0611fe0-92ee-4dfa-a9ad-c9788fa50092": Phase="Pending", Reason="", readiness=false. Elapsed: 87.178398ms
Jan 10 14:48:32.633: INFO: Pod "pod-e0611fe0-92ee-4dfa-a9ad-c9788fa50092": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108029661s
Jan 10 14:48:34.642: INFO: Pod "pod-e0611fe0-92ee-4dfa-a9ad-c9788fa50092": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117504406s
Jan 10 14:48:36.663: INFO: Pod "pod-e0611fe0-92ee-4dfa-a9ad-c9788fa50092": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138243956s
Jan 10 14:48:38.674: INFO: Pod "pod-e0611fe0-92ee-4dfa-a9ad-c9788fa50092": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.1495687s
STEP: Saw pod success
Jan 10 14:48:38.675: INFO: Pod "pod-e0611fe0-92ee-4dfa-a9ad-c9788fa50092" satisfied condition "success or failure"
Jan 10 14:48:38.680: INFO: Trying to get logs from node iruya-node pod pod-e0611fe0-92ee-4dfa-a9ad-c9788fa50092 container test-container: 
STEP: delete the pod
Jan 10 14:48:38.797: INFO: Waiting for pod pod-e0611fe0-92ee-4dfa-a9ad-c9788fa50092 to disappear
Jan 10 14:48:38.802: INFO: Pod pod-e0611fe0-92ee-4dfa-a9ad-c9788fa50092 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:48:38.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6478" for this suite.
Jan 10 14:48:44.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:48:45.222: INFO: namespace emptydir-6478 deletion completed in 6.413175261s

• [SLOW TEST:14.790 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:48:45.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan 10 14:48:45.370: INFO: Waiting up to 5m0s for pod "var-expansion-a7c1ce75-0c6b-4b7c-b41f-e7a75ed84013" in namespace "var-expansion-847" to be "success or failure"
Jan 10 14:48:45.405: INFO: Pod "var-expansion-a7c1ce75-0c6b-4b7c-b41f-e7a75ed84013": Phase="Pending", Reason="", readiness=false. Elapsed: 34.997458ms
Jan 10 14:48:47.416: INFO: Pod "var-expansion-a7c1ce75-0c6b-4b7c-b41f-e7a75ed84013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046367438s
Jan 10 14:48:49.466: INFO: Pod "var-expansion-a7c1ce75-0c6b-4b7c-b41f-e7a75ed84013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095720696s
Jan 10 14:48:51.492: INFO: Pod "var-expansion-a7c1ce75-0c6b-4b7c-b41f-e7a75ed84013": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122101262s
Jan 10 14:48:53.508: INFO: Pod "var-expansion-a7c1ce75-0c6b-4b7c-b41f-e7a75ed84013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.138003052s
STEP: Saw pod success
Jan 10 14:48:53.509: INFO: Pod "var-expansion-a7c1ce75-0c6b-4b7c-b41f-e7a75ed84013" satisfied condition "success or failure"
Jan 10 14:48:53.517: INFO: Trying to get logs from node iruya-node pod var-expansion-a7c1ce75-0c6b-4b7c-b41f-e7a75ed84013 container dapi-container: 
STEP: delete the pod
Jan 10 14:48:53.704: INFO: Waiting for pod var-expansion-a7c1ce75-0c6b-4b7c-b41f-e7a75ed84013 to disappear
Jan 10 14:48:53.716: INFO: Pod var-expansion-a7c1ce75-0c6b-4b7c-b41f-e7a75ed84013 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:48:53.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-847" for this suite.
Jan 10 14:48:59.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:48:59.998: INFO: namespace var-expansion-847 deletion completed in 6.203279895s

• [SLOW TEST:14.776 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
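The substitution being tested above is Kubernetes' own `$(VAR)` expansion in `command`/`args`, which happens before the container starts and independently of any shell. A minimal sketch (illustrative names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: MESSAGE
      value: "test-value"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]      # $(VAR) is expanded by Kubernetes, not the shell
```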
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:48:59.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan 10 14:49:00.080: INFO: Waiting up to 5m0s for pod "client-containers-7b3f0d34-de2a-444d-8af0-b1bc31791a3e" in namespace "containers-5084" to be "success or failure"
Jan 10 14:49:00.092: INFO: Pod "client-containers-7b3f0d34-de2a-444d-8af0-b1bc31791a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.239659ms
Jan 10 14:49:02.107: INFO: Pod "client-containers-7b3f0d34-de2a-444d-8af0-b1bc31791a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026020009s
Jan 10 14:49:04.117: INFO: Pod "client-containers-7b3f0d34-de2a-444d-8af0-b1bc31791a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036701926s
Jan 10 14:49:06.132: INFO: Pod "client-containers-7b3f0d34-de2a-444d-8af0-b1bc31791a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051140534s
Jan 10 14:49:08.140: INFO: Pod "client-containers-7b3f0d34-de2a-444d-8af0-b1bc31791a3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059509401s
STEP: Saw pod success
Jan 10 14:49:08.140: INFO: Pod "client-containers-7b3f0d34-de2a-444d-8af0-b1bc31791a3e" satisfied condition "success or failure"
Jan 10 14:49:08.144: INFO: Trying to get logs from node iruya-node pod client-containers-7b3f0d34-de2a-444d-8af0-b1bc31791a3e container test-container: 
STEP: delete the pod
Jan 10 14:49:08.191: INFO: Waiting for pod client-containers-7b3f0d34-de2a-444d-8af0-b1bc31791a3e to disappear
Jan 10 14:49:08.233: INFO: Pod client-containers-7b3f0d34-de2a-444d-8af0-b1bc31791a3e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:49:08.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5084" for this suite.
Jan 10 14:49:14.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:49:14.390: INFO: namespace containers-5084 deletion completed in 6.15040336s

• [SLOW TEST:14.391 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:49:14.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-4b5f1a85-729b-4232-9eac-48d599be142d
STEP: Creating a pod to test consume secrets
Jan 10 14:49:14.542: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-13c1e160-fbc1-4791-92aa-c062f8212683" in namespace "projected-9412" to be "success or failure"
Jan 10 14:49:14.557: INFO: Pod "pod-projected-secrets-13c1e160-fbc1-4791-92aa-c062f8212683": Phase="Pending", Reason="", readiness=false. Elapsed: 15.201279ms
Jan 10 14:49:16.590: INFO: Pod "pod-projected-secrets-13c1e160-fbc1-4791-92aa-c062f8212683": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047978278s
Jan 10 14:49:18.604: INFO: Pod "pod-projected-secrets-13c1e160-fbc1-4791-92aa-c062f8212683": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062325774s
Jan 10 14:49:20.657: INFO: Pod "pod-projected-secrets-13c1e160-fbc1-4791-92aa-c062f8212683": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114991114s
Jan 10 14:49:22.666: INFO: Pod "pod-projected-secrets-13c1e160-fbc1-4791-92aa-c062f8212683": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124439241s
Jan 10 14:49:24.676: INFO: Pod "pod-projected-secrets-13c1e160-fbc1-4791-92aa-c062f8212683": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.133603181s
STEP: Saw pod success
Jan 10 14:49:24.676: INFO: Pod "pod-projected-secrets-13c1e160-fbc1-4791-92aa-c062f8212683" satisfied condition "success or failure"
Jan 10 14:49:24.683: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-13c1e160-fbc1-4791-92aa-c062f8212683 container projected-secret-volume-test: 
STEP: delete the pod
Jan 10 14:49:24.794: INFO: Waiting for pod pod-projected-secrets-13c1e160-fbc1-4791-92aa-c062f8212683 to disappear
Jan 10 14:49:24.800: INFO: Pod pod-projected-secrets-13c1e160-fbc1-4791-92aa-c062f8212683 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:49:24.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9412" for this suite.
Jan 10 14:49:30.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:49:30.980: INFO: namespace projected-9412 deletion completed in 6.173713738s

• [SLOW TEST:16.589 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:49:30.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 14:49:31.337: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6402e44-7e77-41ff-ba6d-645db65e021b" in namespace "projected-633" to be "success or failure"
Jan 10 14:49:31.407: INFO: Pod "downwardapi-volume-a6402e44-7e77-41ff-ba6d-645db65e021b": Phase="Pending", Reason="", readiness=false. Elapsed: 69.07645ms
Jan 10 14:49:33.419: INFO: Pod "downwardapi-volume-a6402e44-7e77-41ff-ba6d-645db65e021b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08125933s
Jan 10 14:49:35.428: INFO: Pod "downwardapi-volume-a6402e44-7e77-41ff-ba6d-645db65e021b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090484093s
Jan 10 14:49:37.447: INFO: Pod "downwardapi-volume-a6402e44-7e77-41ff-ba6d-645db65e021b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109058885s
Jan 10 14:49:39.456: INFO: Pod "downwardapi-volume-a6402e44-7e77-41ff-ba6d-645db65e021b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.118256447s
STEP: Saw pod success
Jan 10 14:49:39.456: INFO: Pod "downwardapi-volume-a6402e44-7e77-41ff-ba6d-645db65e021b" satisfied condition "success or failure"
Jan 10 14:49:39.460: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a6402e44-7e77-41ff-ba6d-645db65e021b container client-container: 
STEP: delete the pod
Jan 10 14:49:39.517: INFO: Waiting for pod downwardapi-volume-a6402e44-7e77-41ff-ba6d-645db65e021b to disappear
Jan 10 14:49:39.608: INFO: Pod downwardapi-volume-a6402e44-7e77-41ff-ba6d-645db65e021b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:49:39.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-633" for this suite.
Jan 10 14:49:45.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:49:45.872: INFO: namespace projected-633 deletion completed in 6.258309575s

• [SLOW TEST:14.891 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:49:45.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-73e1e3f6-ab70-46c5-8bfc-26c443bc15d1
STEP: Creating a pod to test consume secrets
Jan 10 14:49:46.047: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7e0a1ac0-276b-4d92-9f98-4719683f2e7f" in namespace "projected-9430" to be "success or failure"
Jan 10 14:49:46.070: INFO: Pod "pod-projected-secrets-7e0a1ac0-276b-4d92-9f98-4719683f2e7f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.239134ms
Jan 10 14:49:48.085: INFO: Pod "pod-projected-secrets-7e0a1ac0-276b-4d92-9f98-4719683f2e7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037615953s
Jan 10 14:49:50.099: INFO: Pod "pod-projected-secrets-7e0a1ac0-276b-4d92-9f98-4719683f2e7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051308386s
Jan 10 14:49:52.107: INFO: Pod "pod-projected-secrets-7e0a1ac0-276b-4d92-9f98-4719683f2e7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060281059s
Jan 10 14:49:54.120: INFO: Pod "pod-projected-secrets-7e0a1ac0-276b-4d92-9f98-4719683f2e7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072421876s
STEP: Saw pod success
Jan 10 14:49:54.120: INFO: Pod "pod-projected-secrets-7e0a1ac0-276b-4d92-9f98-4719683f2e7f" satisfied condition "success or failure"
Jan 10 14:49:54.124: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-7e0a1ac0-276b-4d92-9f98-4719683f2e7f container projected-secret-volume-test: 
STEP: delete the pod
Jan 10 14:49:54.242: INFO: Waiting for pod pod-projected-secrets-7e0a1ac0-276b-4d92-9f98-4719683f2e7f to disappear
Jan 10 14:49:54.252: INFO: Pod pod-projected-secrets-7e0a1ac0-276b-4d92-9f98-4719683f2e7f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:49:54.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9430" for this suite.
Jan 10 14:50:00.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:50:00.418: INFO: namespace projected-9430 deletion completed in 6.159892775s

• [SLOW TEST:14.546 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:50:00.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 14:50:00.589: INFO: Waiting up to 5m0s for pod "downwardapi-volume-15a0fe1f-a10d-4370-bb70-0a488b2f5cda" in namespace "projected-4801" to be "success or failure"
Jan 10 14:50:00.605: INFO: Pod "downwardapi-volume-15a0fe1f-a10d-4370-bb70-0a488b2f5cda": Phase="Pending", Reason="", readiness=false. Elapsed: 14.867902ms
Jan 10 14:50:02.620: INFO: Pod "downwardapi-volume-15a0fe1f-a10d-4370-bb70-0a488b2f5cda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030524188s
Jan 10 14:50:04.635: INFO: Pod "downwardapi-volume-15a0fe1f-a10d-4370-bb70-0a488b2f5cda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04544219s
Jan 10 14:50:06.651: INFO: Pod "downwardapi-volume-15a0fe1f-a10d-4370-bb70-0a488b2f5cda": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061268892s
Jan 10 14:50:08.662: INFO: Pod "downwardapi-volume-15a0fe1f-a10d-4370-bb70-0a488b2f5cda": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071857606s
Jan 10 14:50:10.668: INFO: Pod "downwardapi-volume-15a0fe1f-a10d-4370-bb70-0a488b2f5cda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078756495s
STEP: Saw pod success
Jan 10 14:50:10.669: INFO: Pod "downwardapi-volume-15a0fe1f-a10d-4370-bb70-0a488b2f5cda" satisfied condition "success or failure"
Jan 10 14:50:10.673: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-15a0fe1f-a10d-4370-bb70-0a488b2f5cda container client-container: 
STEP: delete the pod
Jan 10 14:50:10.714: INFO: Waiting for pod downwardapi-volume-15a0fe1f-a10d-4370-bb70-0a488b2f5cda to disappear
Jan 10 14:50:10.728: INFO: Pod downwardapi-volume-15a0fe1f-a10d-4370-bb70-0a488b2f5cda no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:50:10.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4801" for this suite.
Jan 10 14:50:16.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:50:16.910: INFO: namespace projected-4801 deletion completed in 6.17566384s

• [SLOW TEST:16.490 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:50:16.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-t7vl
STEP: Creating a pod to test atomic-volume-subpath
Jan 10 14:50:17.007: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-t7vl" in namespace "subpath-1951" to be "success or failure"
Jan 10 14:50:17.050: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Pending", Reason="", readiness=false. Elapsed: 43.191786ms
Jan 10 14:50:19.059: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052192113s
Jan 10 14:50:21.115: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107764198s
Jan 10 14:50:23.125: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117867656s
Jan 10 14:50:25.138: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131115321s
Jan 10 14:50:27.150: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Running", Reason="", readiness=true. Elapsed: 10.142823559s
Jan 10 14:50:29.162: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Running", Reason="", readiness=true. Elapsed: 12.155099605s
Jan 10 14:50:31.175: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Running", Reason="", readiness=true. Elapsed: 14.168369581s
Jan 10 14:50:33.186: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Running", Reason="", readiness=true. Elapsed: 16.179345469s
Jan 10 14:50:35.196: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Running", Reason="", readiness=true. Elapsed: 18.189069686s
Jan 10 14:50:37.206: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Running", Reason="", readiness=true. Elapsed: 20.199133416s
Jan 10 14:50:39.216: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Running", Reason="", readiness=true. Elapsed: 22.209466171s
Jan 10 14:50:41.226: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Running", Reason="", readiness=true. Elapsed: 24.218931673s
Jan 10 14:50:43.236: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Running", Reason="", readiness=true. Elapsed: 26.229063947s
Jan 10 14:50:45.257: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Running", Reason="", readiness=true. Elapsed: 28.250140805s
Jan 10 14:50:47.277: INFO: Pod "pod-subpath-test-projected-t7vl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.270127576s
STEP: Saw pod success
Jan 10 14:50:47.277: INFO: Pod "pod-subpath-test-projected-t7vl" satisfied condition "success or failure"
Jan 10 14:50:47.284: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-t7vl container test-container-subpath-projected-t7vl: 
STEP: delete the pod
Jan 10 14:50:47.368: INFO: Waiting for pod pod-subpath-test-projected-t7vl to disappear
Jan 10 14:50:47.378: INFO: Pod pod-subpath-test-projected-t7vl no longer exists
STEP: Deleting pod pod-subpath-test-projected-t7vl
Jan 10 14:50:47.378: INFO: Deleting pod "pod-subpath-test-projected-t7vl" in namespace "subpath-1951"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:50:47.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1951" for this suite.
Jan 10 14:50:53.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:50:53.538: INFO: namespace subpath-1951 deletion completed in 6.152958118s

• [SLOW TEST:36.628 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:50:53.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-c20740b8-2b0d-4a68-b525-9596fe7d2985 in namespace container-probe-7996
Jan 10 14:51:01.697: INFO: Started pod busybox-c20740b8-2b0d-4a68-b525-9596fe7d2985 in namespace container-probe-7996
STEP: checking the pod's current state and verifying that restartCount is present
Jan 10 14:51:01.703: INFO: Initial restart count of pod busybox-c20740b8-2b0d-4a68-b525-9596fe7d2985 is 0
Jan 10 14:51:58.127: INFO: Restart count of pod container-probe-7996/busybox-c20740b8-2b0d-4a68-b525-9596fe7d2985 is now 1 (56.424045739s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:51:58.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7996" for this suite.
Jan 10 14:52:04.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:52:04.424: INFO: namespace container-probe-7996 deletion completed in 6.159443981s

• [SLOW TEST:70.885 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:52:04.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 10 14:52:04.502: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:52:17.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3718" for this suite.
Jan 10 14:52:23.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:52:24.150: INFO: namespace init-container-3718 deletion completed in 6.342567545s

• [SLOW TEST:19.726 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:52:24.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-85f7ca3f-e9ef-4e88-b710-250dd630fca1
STEP: Creating a pod to test consume configMaps
Jan 10 14:52:24.345: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a99c7511-177b-4890-83da-20771dbdc601" in namespace "projected-4070" to be "success or failure"
Jan 10 14:52:24.469: INFO: Pod "pod-projected-configmaps-a99c7511-177b-4890-83da-20771dbdc601": Phase="Pending", Reason="", readiness=false. Elapsed: 123.999189ms
Jan 10 14:52:26.494: INFO: Pod "pod-projected-configmaps-a99c7511-177b-4890-83da-20771dbdc601": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148277381s
Jan 10 14:52:28.508: INFO: Pod "pod-projected-configmaps-a99c7511-177b-4890-83da-20771dbdc601": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162699937s
Jan 10 14:52:30.525: INFO: Pod "pod-projected-configmaps-a99c7511-177b-4890-83da-20771dbdc601": Phase="Pending", Reason="", readiness=false. Elapsed: 6.179265341s
Jan 10 14:52:33.233: INFO: Pod "pod-projected-configmaps-a99c7511-177b-4890-83da-20771dbdc601": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.887791585s
STEP: Saw pod success
Jan 10 14:52:33.234: INFO: Pod "pod-projected-configmaps-a99c7511-177b-4890-83da-20771dbdc601" satisfied condition "success or failure"
Jan 10 14:52:33.241: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a99c7511-177b-4890-83da-20771dbdc601 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 14:52:33.323: INFO: Waiting for pod pod-projected-configmaps-a99c7511-177b-4890-83da-20771dbdc601 to disappear
Jan 10 14:52:33.367: INFO: Pod pod-projected-configmaps-a99c7511-177b-4890-83da-20771dbdc601 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:52:33.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4070" for this suite.
Jan 10 14:52:39.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:52:39.587: INFO: namespace projected-4070 deletion completed in 6.211355479s

• [SLOW TEST:15.436 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:52:39.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-6969/secret-test-be8da1a8-91fd-402f-aad5-45e76c0e3149
STEP: Creating a pod to test consume secrets
Jan 10 14:52:39.670: INFO: Waiting up to 5m0s for pod "pod-configmaps-2432ae33-ebaa-4062-91e7-5470212d9979" in namespace "secrets-6969" to be "success or failure"
Jan 10 14:52:39.707: INFO: Pod "pod-configmaps-2432ae33-ebaa-4062-91e7-5470212d9979": Phase="Pending", Reason="", readiness=false. Elapsed: 36.882805ms
Jan 10 14:52:41.719: INFO: Pod "pod-configmaps-2432ae33-ebaa-4062-91e7-5470212d9979": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048843946s
Jan 10 14:52:43.727: INFO: Pod "pod-configmaps-2432ae33-ebaa-4062-91e7-5470212d9979": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056861676s
Jan 10 14:52:45.757: INFO: Pod "pod-configmaps-2432ae33-ebaa-4062-91e7-5470212d9979": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086406741s
Jan 10 14:52:47.765: INFO: Pod "pod-configmaps-2432ae33-ebaa-4062-91e7-5470212d9979": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095408189s
STEP: Saw pod success
Jan 10 14:52:47.766: INFO: Pod "pod-configmaps-2432ae33-ebaa-4062-91e7-5470212d9979" satisfied condition "success or failure"
Jan 10 14:52:47.770: INFO: Trying to get logs from node iruya-node pod pod-configmaps-2432ae33-ebaa-4062-91e7-5470212d9979 container env-test: 
STEP: delete the pod
Jan 10 14:52:47.881: INFO: Waiting for pod pod-configmaps-2432ae33-ebaa-4062-91e7-5470212d9979 to disappear
Jan 10 14:52:47.900: INFO: Pod pod-configmaps-2432ae33-ebaa-4062-91e7-5470212d9979 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:52:47.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6969" for this suite.
Jan 10 14:52:54.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:52:54.135: INFO: namespace secrets-6969 deletion completed in 6.20966354s

• [SLOW TEST:14.547 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
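The secret-to-environment wiring this test exercises can be sketched as a minimal manifest. Only the container name `env-test` comes from the log above; every other name, key, and the image are illustrative assumptions, not the actual e2e fixture:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # hypothetical; the log's fixture is secret-test-be8da1a8-...
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env
spec:
  restartPolicy: Never         # pod runs once; the test waits for "success or failure"
  containers:
  - name: env-test             # container name seen in the log
    image: busybox:1.29
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA        # illustrative variable name
      valueFrom:
        secretKeyRef:          # sources the env var from a Secret key
          name: secret-test
          key: data-1
```

After the pod reaches `Succeeded`, the test fetches its container logs to verify the secret value was present in the environment, which is the "Trying to get logs" step above.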
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:52:54.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 14:52:54.259: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c0a5123-b1f2-4b5a-9f2f-5377520c03b7" in namespace "projected-7469" to be "success or failure"
Jan 10 14:52:54.291: INFO: Pod "downwardapi-volume-4c0a5123-b1f2-4b5a-9f2f-5377520c03b7": Phase="Pending", Reason="", readiness=false. Elapsed: 31.622019ms
Jan 10 14:52:56.305: INFO: Pod "downwardapi-volume-4c0a5123-b1f2-4b5a-9f2f-5377520c03b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046500853s
Jan 10 14:52:58.330: INFO: Pod "downwardapi-volume-4c0a5123-b1f2-4b5a-9f2f-5377520c03b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070749659s
Jan 10 14:53:00.341: INFO: Pod "downwardapi-volume-4c0a5123-b1f2-4b5a-9f2f-5377520c03b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08217311s
Jan 10 14:53:02.445: INFO: Pod "downwardapi-volume-4c0a5123-b1f2-4b5a-9f2f-5377520c03b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.186291972s
STEP: Saw pod success
Jan 10 14:53:02.446: INFO: Pod "downwardapi-volume-4c0a5123-b1f2-4b5a-9f2f-5377520c03b7" satisfied condition "success or failure"
Jan 10 14:53:02.465: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4c0a5123-b1f2-4b5a-9f2f-5377520c03b7 container client-container: 
STEP: delete the pod
Jan 10 14:53:02.657: INFO: Waiting for pod downwardapi-volume-4c0a5123-b1f2-4b5a-9f2f-5377520c03b7 to disappear
Jan 10 14:53:02.676: INFO: Pod downwardapi-volume-4c0a5123-b1f2-4b5a-9f2f-5377520c03b7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:53:02.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7469" for this suite.
Jan 10 14:53:09.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:53:09.365: INFO: namespace projected-7469 deletion completed in 6.681150932s

• [SLOW TEST:15.228 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
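The projected downward API volume used here exposes the container's CPU request as a file. A minimal sketch, assuming a 250m request (the container name `client-container` is from the log; the request value and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container     # container name seen in the log
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m              # the test asserts this value appears in the file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:                 # projected volume with a downwardAPI source
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m      # 250m request is rendered as "250"
```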
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:53:09.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 10 14:53:09.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-5299'
Jan 10 14:53:09.568: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 10 14:53:09.568: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan 10 14:53:13.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5299'
Jan 10 14:53:13.846: INFO: stderr: ""
Jan 10 14:53:13.846: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:53:13.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5299" for this suite.
Jan 10 14:53:19.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:53:20.009: INFO: namespace kubectl-5299 deletion completed in 6.150369919s

• [SLOW TEST:10.643 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
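The deprecated `kubectl run --generator=deployment/apps.v1` invocation in the log (note the DEPRECATED warning on stderr) created a Deployment roughly equivalent to the following manifest. This is an approximate reconstruction of what that generator produced, not output captured from the cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment    # the generator labeled by the run name
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

On current kubectl versions the replacement is `kubectl create deployment`, as the deprecation message suggests.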
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:53:20.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 10 14:53:20.151: INFO: Waiting up to 5m0s for pod "pod-5d6d4d6f-7d91-4262-b67d-0c05fac5ab9b" in namespace "emptydir-4627" to be "success or failure"
Jan 10 14:53:20.163: INFO: Pod "pod-5d6d4d6f-7d91-4262-b67d-0c05fac5ab9b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.151214ms
Jan 10 14:53:22.181: INFO: Pod "pod-5d6d4d6f-7d91-4262-b67d-0c05fac5ab9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029447839s
Jan 10 14:53:24.188: INFO: Pod "pod-5d6d4d6f-7d91-4262-b67d-0c05fac5ab9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036621051s
Jan 10 14:53:26.205: INFO: Pod "pod-5d6d4d6f-7d91-4262-b67d-0c05fac5ab9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054032579s
Jan 10 14:53:28.215: INFO: Pod "pod-5d6d4d6f-7d91-4262-b67d-0c05fac5ab9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063937525s
STEP: Saw pod success
Jan 10 14:53:28.215: INFO: Pod "pod-5d6d4d6f-7d91-4262-b67d-0c05fac5ab9b" satisfied condition "success or failure"
Jan 10 14:53:28.221: INFO: Trying to get logs from node iruya-node pod pod-5d6d4d6f-7d91-4262-b67d-0c05fac5ab9b container test-container: 
STEP: delete the pod
Jan 10 14:53:28.323: INFO: Waiting for pod pod-5d6d4d6f-7d91-4262-b67d-0c05fac5ab9b to disappear
Jan 10 14:53:28.330: INFO: Pod pod-5d6d4d6f-7d91-4262-b67d-0c05fac5ab9b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:53:28.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4627" for this suite.
Jan 10 14:53:34.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:53:34.579: INFO: namespace emptydir-4627 deletion completed in 6.233788405s

• [SLOW TEST:14.570 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
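The "(root,0666,default)" case writes a file into an emptyDir on the node's default medium and checks its mode. A rough shell-based sketch of the same idea (the real test uses the e2e mount-test image rather than this one-liner; only the container name `test-container` is from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container       # container name seen in the log
    image: busybox:1.29
    # Create a file as root with mode 0666 and print the mode back.
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # "default" medium = node-backed storage, not tmpfs
```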
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:53:34.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 10 14:53:34.676: INFO: Waiting up to 5m0s for pod "downward-api-7bebc2d4-3e1c-4552-972b-5bb881adbcaa" in namespace "downward-api-1740" to be "success or failure"
Jan 10 14:53:34.695: INFO: Pod "downward-api-7bebc2d4-3e1c-4552-972b-5bb881adbcaa": Phase="Pending", Reason="", readiness=false. Elapsed: 18.502735ms
Jan 10 14:53:36.705: INFO: Pod "downward-api-7bebc2d4-3e1c-4552-972b-5bb881adbcaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028759213s
Jan 10 14:53:38.712: INFO: Pod "downward-api-7bebc2d4-3e1c-4552-972b-5bb881adbcaa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035453535s
Jan 10 14:53:40.727: INFO: Pod "downward-api-7bebc2d4-3e1c-4552-972b-5bb881adbcaa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050794652s
Jan 10 14:53:42.739: INFO: Pod "downward-api-7bebc2d4-3e1c-4552-972b-5bb881adbcaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06251095s
STEP: Saw pod success
Jan 10 14:53:42.739: INFO: Pod "downward-api-7bebc2d4-3e1c-4552-972b-5bb881adbcaa" satisfied condition "success or failure"
Jan 10 14:53:42.745: INFO: Trying to get logs from node iruya-node pod downward-api-7bebc2d4-3e1c-4552-972b-5bb881adbcaa container dapi-container: 
STEP: delete the pod
Jan 10 14:53:43.155: INFO: Waiting for pod downward-api-7bebc2d4-3e1c-4552-972b-5bb881adbcaa to disappear
Jan 10 14:53:43.165: INFO: Pod downward-api-7bebc2d4-3e1c-4552-972b-5bb881adbcaa no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:53:43.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1740" for this suite.
Jan 10 14:53:49.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:53:49.319: INFO: namespace downward-api-1740 deletion completed in 6.145611678s

• [SLOW TEST:14.739 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
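This test relies on a documented downward API fallback: when a container sets no `resources.limits`, a `resourceFieldRef` for `limits.cpu`/`limits.memory` resolves to the node's allocatable capacity. A minimal sketch (the container name `dapi-container` is from the log; env var names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container       # container name seen in the log
    image: busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    # No resources.limits set: the refs below default to node allocatable.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```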
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:53:49.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 10 14:53:49.518: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3924,SelfLink:/api/v1/namespaces/watch-3924/configmaps/e2e-watch-test-watch-closed,UID:86c29965-2376-459e-b740-cc41a971c1e8,ResourceVersion:20041217,Generation:0,CreationTimestamp:2020-01-10 14:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 10 14:53:49.519: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3924,SelfLink:/api/v1/namespaces/watch-3924/configmaps/e2e-watch-test-watch-closed,UID:86c29965-2376-459e-b740-cc41a971c1e8,ResourceVersion:20041218,Generation:0,CreationTimestamp:2020-01-10 14:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 10 14:53:49.631: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3924,SelfLink:/api/v1/namespaces/watch-3924/configmaps/e2e-watch-test-watch-closed,UID:86c29965-2376-459e-b740-cc41a971c1e8,ResourceVersion:20041219,Generation:0,CreationTimestamp:2020-01-10 14:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 10 14:53:49.631: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3924,SelfLink:/api/v1/namespaces/watch-3924/configmaps/e2e-watch-test-watch-closed,UID:86c29965-2376-459e-b740-cc41a971c1e8,ResourceVersion:20041220,Generation:0,CreationTimestamp:2020-01-10 14:53:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:53:49.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3924" for this suite.
Jan 10 14:53:55.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:53:55.890: INFO: namespace watch-3924 deletion completed in 6.225612684s

• [SLOW TEST:6.571 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
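The fixture the watch test mutates is a labeled ConfigMap; all names, labels, and resourceVersions below are taken from the log output above (the `data` value shown is the state after the first mutation):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-watch-closed
  labels:
    watch-this-configmap: watch-closed-and-restarted
data:
  mutation: "1"
# Each event in the log carries a monotonically increasing resourceVersion
# (20041217..20041220). A new watch opened at the last observed version
# (20041218) replays everything after it -- the MODIFIED (mutation: 2) and
# DELETED events -- which is exactly what the test asserts.
```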
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:53:55.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0110 14:54:26.627013       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 10 14:54:26.627: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:54:26.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-533" for this suite.
Jan 10 14:54:34.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:54:34.917: INFO: namespace gc-533 deletion completed in 8.284702887s

• [SLOW TEST:39.026 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
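Orphaning is requested through the `propagationPolicy` field of the DeleteOptions body sent with the DELETE call. A sketch of that request body in YAML form (the owner, a Deployment, is deleted; its ReplicaSet survives because the GC is told not to cascade):

```yaml
# DeleteOptions sent with DELETE .../deployments/<name>
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan      # leave dependents (the ReplicaSet) in place
```

The kubectl-level equivalent in this release era was `kubectl delete --cascade=false`; newer kubectl spells it `--cascade=orphan`. The 30-second wait in the log is the test confirming the GC does not mistakenly delete the orphaned ReplicaSet.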
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:54:34.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 14:54:35.186: INFO: Creating deployment "nginx-deployment"
Jan 10 14:54:35.232: INFO: Waiting for observed generation 1
Jan 10 14:54:37.329: INFO: Waiting for all required pods to come up
Jan 10 14:54:37.492: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 10 14:55:05.722: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 10 14:55:05.737: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 10 14:55:05.748: INFO: Updating deployment nginx-deployment
Jan 10 14:55:05.748: INFO: Waiting for observed generation 2
Jan 10 14:55:08.568: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 10 14:55:08.991: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 10 14:55:09.003: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 10 14:55:09.028: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 10 14:55:09.028: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 10 14:55:09.032: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 10 14:55:09.039: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 10 14:55:09.039: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 10 14:55:09.051: INFO: Updating deployment nginx-deployment
Jan 10 14:55:09.051: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 10 14:55:10.055: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 10 14:55:10.403: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
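The replica counts verified above follow from the rolling-update parameters visible in the Deployment spec dumped below. A condensed view of the relevant fields, with the arithmetic as comments (all numbers are from this log):

```yaml
spec:
  replicas: 30                 # scaled up from 10 while a bad rollout (nginx:404) is stuck
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2        # old RS may shrink to 10 - 2 = 8
      maxSurge: 3              # total pods may reach replicas + 3
# At scale-up time the two ReplicaSets hold 8 (old) and 5 (new) desired
# replicas, 13 total; the surge cap allows 30 + 3 = 33, so 20 more replicas
# are distributed proportionally to ReplicaSet size (rounding leftovers to
# the larger fraction): old 8 + 12 = 20, new 5 + 8 = 13 -- the exact values
# the test checks above.
```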
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 10 14:55:10.856: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-5216,SelfLink:/apis/apps/v1/namespaces/deployment-5216/deployments/nginx-deployment,UID:0918dcb7-ac5d-4460-b50f-f258815fd1cc,ResourceVersion:20041567,Generation:3,CreationTimestamp:2020-01-10 14:54:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-10 14:55:06 +0000 UTC 2020-01-10 14:54:35 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-01-10 14:55:10 +0000 UTC 2020-01-10 14:55:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 10 14:55:11.773: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-5216,SelfLink:/apis/apps/v1/namespaces/deployment-5216/replicasets/nginx-deployment-55fb7cb77f,UID:51716b8c-ef5f-42d0-9e19-400c750c564b,ResourceVersion:20041610,Generation:3,CreationTimestamp:2020-01-10 14:55:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0918dcb7-ac5d-4460-b50f-f258815fd1cc 0xc0019c68c7 0xc0019c68c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 10 14:55:11.773: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 10 14:55:11.773: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-5216,SelfLink:/apis/apps/v1/namespaces/deployment-5216/replicasets/nginx-deployment-7b8c6f4498,UID:7f8e5131-696d-45e5-9164-cae7fe600c5c,ResourceVersion:20041608,Generation:3,CreationTimestamp:2020-01-10 14:54:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0918dcb7-ac5d-4460-b50f-f258815fd1cc 0xc0019c6997 0xc0019c6998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 10 14:55:14.489: INFO: Pod "nginx-deployment-55fb7cb77f-4x94s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4x94s,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-55fb7cb77f-4x94s,UID:c37799e6-dcdb-4dc8-b067-21c1f5ec5487,ResourceVersion:20041601,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 51716b8c-ef5f-42d0-9e19-400c750c564b 0xc002cc0577 0xc002cc0578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc0610} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc0630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.490: INFO: Pod "nginx-deployment-55fb7cb77f-5nn4x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5nn4x,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-55fb7cb77f-5nn4x,UID:e83d1c61-1bb9-43ff-82b9-35c920d654e6,ResourceVersion:20041527,Generation:0,CreationTimestamp:2020-01-10 14:55:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 51716b8c-ef5f-42d0-9e19-400c750c564b 0xc002cc06d7 0xc002cc06d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc0740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc0760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-10 14:55:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.490: INFO: Pod "nginx-deployment-55fb7cb77f-9pvzs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9pvzs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-55fb7cb77f-9pvzs,UID:9a0183f0-667b-47eb-891a-4e90ef353338,ResourceVersion:20041576,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 51716b8c-ef5f-42d0-9e19-400c750c564b 0xc002cc0837 0xc002cc0838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc09b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc09d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.490: INFO: Pod "nginx-deployment-55fb7cb77f-dpn6v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dpn6v,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-55fb7cb77f-dpn6v,UID:64b668fc-d7a3-4391-88c1-58a61233fddd,ResourceVersion:20041521,Generation:0,CreationTimestamp:2020-01-10 14:55:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 51716b8c-ef5f-42d0-9e19-400c750c564b 0xc002cc0a57 0xc002cc0a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002cc0ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc0af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-10 14:55:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.491: INFO: Pod "nginx-deployment-55fb7cb77f-gv6vr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gv6vr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-55fb7cb77f-gv6vr,UID:f5cd663d-7d86-4349-a172-e54d0cbf2495,ResourceVersion:20041609,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 51716b8c-ef5f-42d0-9e19-400c750c564b 0xc002cc0bd7 0xc002cc0bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc0c40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc0c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.491: INFO: Pod "nginx-deployment-55fb7cb77f-j6xbb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j6xbb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-55fb7cb77f-j6xbb,UID:5dc3706a-b8b3-4abd-a935-c1129c0dce47,ResourceVersion:20041549,Generation:0,CreationTimestamp:2020-01-10 14:55:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 51716b8c-ef5f-42d0-9e19-400c750c564b 0xc002cc0ce7 0xc002cc0ce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc0d50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc0d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-10 14:55:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.491: INFO: Pod "nginx-deployment-55fb7cb77f-l2z7w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l2z7w,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-55fb7cb77f-l2z7w,UID:23b696dc-c56a-431d-b668-e57151307f17,ResourceVersion:20041600,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 51716b8c-ef5f-42d0-9e19-400c750c564b 0xc002cc0e47 0xc002cc0e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002cc0ec0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc0ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.491: INFO: Pod "nginx-deployment-55fb7cb77f-lw8g4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lw8g4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-55fb7cb77f-lw8g4,UID:9bba5e7f-f843-4a06-b5a8-ce1a9c0ad516,ResourceVersion:20041594,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 51716b8c-ef5f-42d0-9e19-400c750c564b 0xc002cc0f67 0xc002cc0f68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002cc0fe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc1000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.492: INFO: Pod "nginx-deployment-55fb7cb77f-n8s5f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-n8s5f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-55fb7cb77f-n8s5f,UID:d582a458-b5ea-476b-bcba-fa2ce47d9b87,ResourceVersion:20041592,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 51716b8c-ef5f-42d0-9e19-400c750c564b 0xc002cc1087 0xc002cc1088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc10f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc1110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.492: INFO: Pod "nginx-deployment-55fb7cb77f-ncghp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ncghp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-55fb7cb77f-ncghp,UID:930850a2-7493-4ddf-aaba-bda2addba803,ResourceVersion:20041597,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 51716b8c-ef5f-42d0-9e19-400c750c564b 0xc002cc1197 0xc002cc1198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002cc1210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc1230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.493: INFO: Pod "nginx-deployment-55fb7cb77f-r79sq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r79sq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-55fb7cb77f-r79sq,UID:2e52fe35-e259-4f39-9ae8-d7b052dfda48,ResourceVersion:20041531,Generation:0,CreationTimestamp:2020-01-10 14:55:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 51716b8c-ef5f-42d0-9e19-400c750c564b 0xc002cc12b7 0xc002cc12b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002cc1330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc1350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-10 14:55:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.493: INFO: Pod "nginx-deployment-55fb7cb77f-sbcj6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sbcj6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-55fb7cb77f-sbcj6,UID:cd237fcc-aa71-4e61-81d6-11fb51c1aabc,ResourceVersion:20041599,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 51716b8c-ef5f-42d0-9e19-400c750c564b 0xc002cc1427 0xc002cc1428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc14a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc14c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.493: INFO: Pod "nginx-deployment-55fb7cb77f-sfnxg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sfnxg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-55fb7cb77f-sfnxg,UID:f9699b21-2ff6-4b36-9311-58ba316afa6e,ResourceVersion:20041555,Generation:0,CreationTimestamp:2020-01-10 14:55:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 51716b8c-ef5f-42d0-9e19-400c750c564b 0xc002cc1547 0xc002cc1548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc15c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc15e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-10 14:55:08 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.493: INFO: Pod "nginx-deployment-7b8c6f4498-2dv4p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2dv4p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-2dv4p,UID:f1ec2a6c-da54-457a-bd03-97fde43a83d9,ResourceVersion:20041606,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002cc16b7 0xc002cc16b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc1720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc1740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.494: INFO: Pod "nginx-deployment-7b8c6f4498-2ftqf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2ftqf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-2ftqf,UID:d3d1ee8a-4b18-4f40-9543-4f9904bfa574,ResourceVersion:20041593,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002cc17c7 0xc002cc17c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc1830} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc1850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.494: INFO: Pod "nginx-deployment-7b8c6f4498-2v2lz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2v2lz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-2v2lz,UID:bb8e7878-7f50-4b82-aea7-a31cd2bbad74,ResourceVersion:20041577,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002cc18d7 0xc002cc18d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc1950} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc1970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.495: INFO: Pod "nginx-deployment-7b8c6f4498-2w5n4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2w5n4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-2w5n4,UID:cfdce589-61f9-4b84-9dab-c8e82fbd9d16,ResourceVersion:20041468,Generation:0,CreationTimestamp:2020-01-10 14:54:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002cc19f7 0xc002cc19f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc1a60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc1a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-10 14:54:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 14:55:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d6ccf1bdf4792a18b4528c2de3ec1852d1c623952e055dfc84b519a0823240a5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.495: INFO: Pod "nginx-deployment-7b8c6f4498-4lb77" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4lb77,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-4lb77,UID:61c6082d-d5e1-418b-8a27-ade9daae1c00,ResourceVersion:20041487,Generation:0,CreationTimestamp:2020-01-10 14:54:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002cc1b57 0xc002cc1b58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc1bc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc1be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-10 14:54:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 14:55:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3081f536cf38ef7fc13b71cab6fd3516dce8002f1b7ee728fb610747cd107404}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.496: INFO: Pod "nginx-deployment-7b8c6f4498-6pmkw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6pmkw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-6pmkw,UID:07c04c66-be45-4171-b206-f91606fae6c0,ResourceVersion:20041476,Generation:0,CreationTimestamp:2020-01-10 14:54:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002cc1cb7 0xc002cc1cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc1d30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc1d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-10 14:54:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 14:55:00 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://232e03fce6d6dd65152cb80e335068eba61a123f55eb9b457d572802cdef1095}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.496: INFO: Pod "nginx-deployment-7b8c6f4498-858qq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-858qq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-858qq,UID:e64f2b45-7ad1-407a-9ee3-d8366ac3b261,ResourceVersion:20041604,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002cc1e27 0xc002cc1e28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc1ea0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc1ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.496: INFO: Pod "nginx-deployment-7b8c6f4498-8cjll" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8cjll,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-8cjll,UID:4bb0b112-0441-4947-a4c2-2bd68e92b253,ResourceVersion:20041590,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002cc1f47 0xc002cc1f48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cc1fc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cc1fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.496: INFO: Pod "nginx-deployment-7b8c6f4498-98wtd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-98wtd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-98wtd,UID:4af47130-217b-4db4-9ad4-b9a79a3d19d0,ResourceVersion:20041481,Generation:0,CreationTimestamp:2020-01-10 14:54:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002892087 0xc002892088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028920f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002892110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-10 14:54:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 14:54:59 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6ddb22827599c3f3c84e1f119ff9c8dd71051d4374578d62142a72e5167abc19}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.496: INFO: Pod "nginx-deployment-7b8c6f4498-bbc8v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bbc8v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-bbc8v,UID:e0e18dc3-47e2-4c09-a666-b443ce156a39,ResourceVersion:20041602,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc0028921e7 0xc0028921e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002892260} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002892280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.496: INFO: Pod "nginx-deployment-7b8c6f4498-bp55z" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bp55z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-bp55z,UID:9e949b7d-3d01-4b69-a7b9-0a37a3f69b31,ResourceVersion:20041469,Generation:0,CreationTimestamp:2020-01-10 14:54:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002892307 0xc002892308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002892380} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028923a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2020-01-10 14:54:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 14:55:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://acd3a4cf9d71b6de49f79b1c504863e75bad5c0f79de62404286e57264c91dc2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.497: INFO: Pod "nginx-deployment-7b8c6f4498-bwqmg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bwqmg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-bwqmg,UID:ef3dbdd6-34a5-41d0-8908-d3895b2ef8e7,ResourceVersion:20041473,Generation:0,CreationTimestamp:2020-01-10 14:54:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002892477 0xc002892478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028924f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002892510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-10 14:54:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 14:55:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://de6b3abdfeb9e1c1bb39e1ebc3c8b495164734eba9e5525668870c3422cfb3bb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.497: INFO: Pod "nginx-deployment-7b8c6f4498-f26mg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f26mg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-f26mg,UID:180d2f9a-c277-452b-9dc0-b1f6117d446e,ResourceVersion:20041613,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc0028925e7 0xc0028925e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002892650} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002892670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-10 14:55:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.497: INFO: Pod "nginx-deployment-7b8c6f4498-fttlv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fttlv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-fttlv,UID:8712e8d4-f43d-411a-b077-4e62f785db65,ResourceVersion:20041464,Generation:0,CreationTimestamp:2020-01-10 14:54:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002892747 0xc002892748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028927e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002892800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-10 14:54:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 14:55:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://40ab12f79f7fe10c844361a0a4253cf702c441a3a41215a1acdf551fc52b3af5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.498: INFO: Pod "nginx-deployment-7b8c6f4498-h2fzz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h2fzz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-h2fzz,UID:5209bca8-06d7-467d-b526-7b49241926f5,ResourceVersion:20041589,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc0028928d7 0xc0028928d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002892940} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002892960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.498: INFO: Pod "nginx-deployment-7b8c6f4498-hwz6t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hwz6t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-hwz6t,UID:b29ef53f-7bcf-4ec7-80c2-a1eeef6fa463,ResourceVersion:20041591,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc0028929e7 0xc0028929e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002892a60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002892a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.498: INFO: Pod "nginx-deployment-7b8c6f4498-mmrzx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mmrzx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-mmrzx,UID:cab45916-6654-4e69-ac82-561a0dd993d3,ResourceVersion:20041612,Generation:0,CreationTimestamp:2020-01-10 14:55:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002892b07 0xc002892b08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002892b80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002892ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-10 14:55:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.499: INFO: Pod "nginx-deployment-7b8c6f4498-p7rbd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p7rbd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-p7rbd,UID:1e519802-ce7e-401b-81da-d8c62fd2cd40,ResourceVersion:20041605,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002892c67 0xc002892c68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002892cd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002892cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.499: INFO: Pod "nginx-deployment-7b8c6f4498-scmmd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-scmmd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-scmmd,UID:d051a604-1e82-4ba1-a82c-1db76f8d96d2,ResourceVersion:20041598,Generation:0,CreationTimestamp:2020-01-10 14:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002892d77 0xc002892d78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002892df0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002892e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 14:55:14.500: INFO: Pod "nginx-deployment-7b8c6f4498-tgln6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tgln6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5216,SelfLink:/api/v1/namespaces/deployment-5216/pods/nginx-deployment-7b8c6f4498-tgln6,UID:f52916d1-eae6-4406-9766-a008665702ba,ResourceVersion:20041484,Generation:0,CreationTimestamp:2020-01-10 14:54:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7f8e5131-696d-45e5-9164-cae7fe600c5c 0xc002892e97 0xc002892e98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nz9jz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nz9jz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nz9jz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002892f00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002892f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:55:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 14:54:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-10 14:54:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 14:55:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c5bdcb61c3d15d2a3e39302bdbf0b5c2aa0e7df8186ce7949e36a15bf137f676}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:55:14.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5216" for this suite.
Jan 10 14:56:03.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:56:04.063: INFO: namespace deployment-5216 deletion completed in 47.097639606s

• [SLOW TEST:89.145 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:56:04.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 10 14:56:04.224: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89d2cac7-680e-4ef2-bc43-235e59af54a8" in namespace "downward-api-5306" to be "success or failure"
Jan 10 14:56:04.368: INFO: Pod "downwardapi-volume-89d2cac7-680e-4ef2-bc43-235e59af54a8": Phase="Pending", Reason="", readiness=false. Elapsed: 144.058698ms
Jan 10 14:56:06.382: INFO: Pod "downwardapi-volume-89d2cac7-680e-4ef2-bc43-235e59af54a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157451365s
Jan 10 14:56:08.390: INFO: Pod "downwardapi-volume-89d2cac7-680e-4ef2-bc43-235e59af54a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165762586s
Jan 10 14:56:10.409: INFO: Pod "downwardapi-volume-89d2cac7-680e-4ef2-bc43-235e59af54a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18490516s
Jan 10 14:56:12.420: INFO: Pod "downwardapi-volume-89d2cac7-680e-4ef2-bc43-235e59af54a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.195520406s
Jan 10 14:56:14.428: INFO: Pod "downwardapi-volume-89d2cac7-680e-4ef2-bc43-235e59af54a8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.203455318s
Jan 10 14:56:16.442: INFO: Pod "downwardapi-volume-89d2cac7-680e-4ef2-bc43-235e59af54a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.218007545s
STEP: Saw pod success
Jan 10 14:56:16.443: INFO: Pod "downwardapi-volume-89d2cac7-680e-4ef2-bc43-235e59af54a8" satisfied condition "success or failure"
Jan 10 14:56:16.448: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-89d2cac7-680e-4ef2-bc43-235e59af54a8 container client-container: 
STEP: delete the pod
Jan 10 14:56:16.509: INFO: Waiting for pod downwardapi-volume-89d2cac7-680e-4ef2-bc43-235e59af54a8 to disappear
Jan 10 14:56:16.553: INFO: Pod downwardapi-volume-89d2cac7-680e-4ef2-bc43-235e59af54a8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:56:16.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5306" for this suite.
Jan 10 14:56:22.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:56:22.813: INFO: namespace downward-api-5306 deletion completed in 6.252742171s

• [SLOW TEST:18.750 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:56:22.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 10 14:56:22.955: INFO: Waiting up to 5m0s for pod "pod-c4c35e95-1a7c-4bae-9ec3-f8e4ba3e8d11" in namespace "emptydir-8925" to be "success or failure"
Jan 10 14:56:22.986: INFO: Pod "pod-c4c35e95-1a7c-4bae-9ec3-f8e4ba3e8d11": Phase="Pending", Reason="", readiness=false. Elapsed: 31.062014ms
Jan 10 14:56:25.002: INFO: Pod "pod-c4c35e95-1a7c-4bae-9ec3-f8e4ba3e8d11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047309482s
Jan 10 14:56:27.015: INFO: Pod "pod-c4c35e95-1a7c-4bae-9ec3-f8e4ba3e8d11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060530017s
Jan 10 14:56:29.032: INFO: Pod "pod-c4c35e95-1a7c-4bae-9ec3-f8e4ba3e8d11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077009114s
Jan 10 14:56:31.045: INFO: Pod "pod-c4c35e95-1a7c-4bae-9ec3-f8e4ba3e8d11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090534608s
STEP: Saw pod success
Jan 10 14:56:31.046: INFO: Pod "pod-c4c35e95-1a7c-4bae-9ec3-f8e4ba3e8d11" satisfied condition "success or failure"
Jan 10 14:56:31.074: INFO: Trying to get logs from node iruya-node pod pod-c4c35e95-1a7c-4bae-9ec3-f8e4ba3e8d11 container test-container: 
STEP: delete the pod
Jan 10 14:56:31.145: INFO: Waiting for pod pod-c4c35e95-1a7c-4bae-9ec3-f8e4ba3e8d11 to disappear
Jan 10 14:56:31.153: INFO: Pod pod-c4c35e95-1a7c-4bae-9ec3-f8e4ba3e8d11 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:56:31.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8925" for this suite.
Jan 10 14:56:37.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:56:37.320: INFO: namespace emptydir-8925 deletion completed in 6.16048866s

• [SLOW TEST:14.506 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:56:37.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5506
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 10 14:56:37.514: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
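[Editor's note] The netserver pods created here answer `GET /hostName` on port 8080 (the port and path appear in the curl commands below); a host-network test pod then curls each pod IP from the node. A sketch of one netserver pod, with the image and args as assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netserver-0                                       # name taken from the log below
  namespace: pod-network-test-5506
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/netexec:1.1  # assumed test image
    args: ["--http-port=8080"]                            # assumed flag
    ports:
    - containerPort: 8080    # serves /hostName, queried by the curl checks below
```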
Jan 10 14:57:15.782: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5506 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:57:15.782: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:57:15.899240       8 log.go:172] (0xc0017aa4d0) (0xc0017668c0) Create stream
I0110 14:57:15.899362       8 log.go:172] (0xc0017aa4d0) (0xc0017668c0) Stream added, broadcasting: 1
I0110 14:57:15.908589       8 log.go:172] (0xc0017aa4d0) Reply frame received for 1
I0110 14:57:15.908656       8 log.go:172] (0xc0017aa4d0) (0xc001c35c20) Create stream
I0110 14:57:15.908670       8 log.go:172] (0xc0017aa4d0) (0xc001c35c20) Stream added, broadcasting: 3
I0110 14:57:15.909788       8 log.go:172] (0xc0017aa4d0) Reply frame received for 3
I0110 14:57:15.909836       8 log.go:172] (0xc0017aa4d0) (0xc00318a820) Create stream
I0110 14:57:15.909856       8 log.go:172] (0xc0017aa4d0) (0xc00318a820) Stream added, broadcasting: 5
I0110 14:57:15.911996       8 log.go:172] (0xc0017aa4d0) Reply frame received for 5
I0110 14:57:16.166233       8 log.go:172] (0xc0017aa4d0) Data frame received for 3
I0110 14:57:16.166290       8 log.go:172] (0xc001c35c20) (3) Data frame handling
I0110 14:57:16.166317       8 log.go:172] (0xc001c35c20) (3) Data frame sent
I0110 14:57:16.300977       8 log.go:172] (0xc0017aa4d0) Data frame received for 1
I0110 14:57:16.301086       8 log.go:172] (0xc0017aa4d0) (0xc00318a820) Stream removed, broadcasting: 5
I0110 14:57:16.301141       8 log.go:172] (0xc0017668c0) (1) Data frame handling
I0110 14:57:16.301178       8 log.go:172] (0xc0017668c0) (1) Data frame sent
I0110 14:57:16.301256       8 log.go:172] (0xc0017aa4d0) (0xc0017668c0) Stream removed, broadcasting: 1
I0110 14:57:16.301306       8 log.go:172] (0xc0017aa4d0) (0xc001c35c20) Stream removed, broadcasting: 3
I0110 14:57:16.301363       8 log.go:172] (0xc0017aa4d0) Go away received
I0110 14:57:16.301557       8 log.go:172] (0xc0017aa4d0) (0xc0017668c0) Stream removed, broadcasting: 1
I0110 14:57:16.301579       8 log.go:172] (0xc0017aa4d0) (0xc001c35c20) Stream removed, broadcasting: 3
I0110 14:57:16.301591       8 log.go:172] (0xc0017aa4d0) (0xc00318a820) Stream removed, broadcasting: 5
Jan 10 14:57:16.301: INFO: Found all expected endpoints: [netserver-0]
Jan 10 14:57:16.334: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5506 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 14:57:16.335: INFO: >>> kubeConfig: /root/.kube/config
I0110 14:57:16.390016       8 log.go:172] (0xc001840fd0) (0xc001c35f40) Create stream
I0110 14:57:16.390068       8 log.go:172] (0xc001840fd0) (0xc001c35f40) Stream added, broadcasting: 1
I0110 14:57:16.398616       8 log.go:172] (0xc001840fd0) Reply frame received for 1
I0110 14:57:16.398771       8 log.go:172] (0xc001840fd0) (0xc00318aaa0) Create stream
I0110 14:57:16.398792       8 log.go:172] (0xc001840fd0) (0xc00318aaa0) Stream added, broadcasting: 3
I0110 14:57:16.400164       8 log.go:172] (0xc001840fd0) Reply frame received for 3
I0110 14:57:16.400191       8 log.go:172] (0xc001840fd0) (0xc001a7c960) Create stream
I0110 14:57:16.400199       8 log.go:172] (0xc001840fd0) (0xc001a7c960) Stream added, broadcasting: 5
I0110 14:57:16.401439       8 log.go:172] (0xc001840fd0) Reply frame received for 5
I0110 14:57:16.570766       8 log.go:172] (0xc001840fd0) Data frame received for 3
I0110 14:57:16.570992       8 log.go:172] (0xc00318aaa0) (3) Data frame handling
I0110 14:57:16.571069       8 log.go:172] (0xc00318aaa0) (3) Data frame sent
I0110 14:57:16.835354       8 log.go:172] (0xc001840fd0) (0xc00318aaa0) Stream removed, broadcasting: 3
I0110 14:57:16.835667       8 log.go:172] (0xc001840fd0) (0xc001a7c960) Stream removed, broadcasting: 5
I0110 14:57:16.835756       8 log.go:172] (0xc001840fd0) Data frame received for 1
I0110 14:57:16.835763       8 log.go:172] (0xc001c35f40) (1) Data frame handling
I0110 14:57:16.835786       8 log.go:172] (0xc001c35f40) (1) Data frame sent
I0110 14:57:16.835795       8 log.go:172] (0xc001840fd0) (0xc001c35f40) Stream removed, broadcasting: 1
I0110 14:57:16.835910       8 log.go:172] (0xc001840fd0) Go away received
I0110 14:57:16.836374       8 log.go:172] (0xc001840fd0) (0xc001c35f40) Stream removed, broadcasting: 1
I0110 14:57:16.836393       8 log.go:172] (0xc001840fd0) (0xc00318aaa0) Stream removed, broadcasting: 3
I0110 14:57:16.836406       8 log.go:172] (0xc001840fd0) (0xc001a7c960) Stream removed, broadcasting: 5
Jan 10 14:57:16.836: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:57:16.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5506" for this suite.
Jan 10 14:57:40.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:57:41.112: INFO: namespace pod-network-test-5506 deletion completed in 24.259535985s

• [SLOW TEST:63.789 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:57:41.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 10 14:57:41.327: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4433,SelfLink:/api/v1/namespaces/watch-4433/configmaps/e2e-watch-test-resource-version,UID:17c917b4-53fa-4fa9-86c8-8e3ccd4687e1,ResourceVersion:20042107,Generation:0,CreationTimestamp:2020-01-10 14:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 10 14:57:41.327: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4433,SelfLink:/api/v1/namespaces/watch-4433/configmaps/e2e-watch-test-resource-version,UID:17c917b4-53fa-4fa9-86c8-8e3ccd4687e1,ResourceVersion:20042108,Generation:0,CreationTimestamp:2020-01-10 14:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
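[Editor's note] The MODIFIED and DELETED objects logged above correspond to a ConfigMap like the sketch below (fields taken from the logged object). Because the watch is started from the ResourceVersion returned by the first update, only the second modification (`mutation: 2`) and the deletion are delivered:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-resource-version
  namespace: watch-4433
  labels:
    watch-this-configmap: from-resource-version
data:
  mutation: "2"
```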
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:57:41.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4433" for this suite.
Jan 10 14:57:47.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:57:47.499: INFO: namespace watch-4433 deletion completed in 6.164163128s

• [SLOW TEST:6.387 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:57:47.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan 10 14:57:47.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7591 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 10 14:57:58.648: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0110 14:57:57.569360    3510 log.go:172] (0xc0007080b0) (0xc000752140) Create stream\nI0110 14:57:57.569422    3510 log.go:172] (0xc0007080b0) (0xc000752140) Stream added, broadcasting: 1\nI0110 14:57:57.591169    3510 log.go:172] (0xc0007080b0) Reply frame received for 1\nI0110 14:57:57.591643    3510 log.go:172] (0xc0007080b0) (0xc0002c59a0) Create stream\nI0110 14:57:57.591697    3510 log.go:172] (0xc0007080b0) (0xc0002c59a0) Stream added, broadcasting: 3\nI0110 14:57:57.595798    3510 log.go:172] (0xc0007080b0) Reply frame received for 3\nI0110 14:57:57.595840    3510 log.go:172] (0xc0007080b0) (0xc0006a8140) Create stream\nI0110 14:57:57.595853    3510 log.go:172] (0xc0007080b0) (0xc0006a8140) Stream added, broadcasting: 5\nI0110 14:57:57.597395    3510 log.go:172] (0xc0007080b0) Reply frame received for 5\nI0110 14:57:57.597453    3510 log.go:172] (0xc0007080b0) (0xc000752000) Create stream\nI0110 14:57:57.597474    3510 log.go:172] (0xc0007080b0) (0xc000752000) Stream added, broadcasting: 7\nI0110 14:57:57.603010    3510 log.go:172] (0xc0007080b0) Reply frame received for 7\nI0110 14:57:57.603565    3510 log.go:172] (0xc0002c59a0) (3) Writing data frame\nI0110 14:57:57.604224    3510 log.go:172] (0xc0002c59a0) (3) Writing data frame\nI0110 14:57:57.612324    3510 log.go:172] (0xc0007080b0) Data frame received for 5\nI0110 14:57:57.612382    3510 log.go:172] (0xc0006a8140) (5) Data frame handling\nI0110 14:57:57.612423    3510 log.go:172] (0xc0006a8140) (5) Data frame sent\nI0110 14:57:57.619301    3510 log.go:172] (0xc0007080b0) Data frame received for 5\nI0110 14:57:57.619337    3510 log.go:172] (0xc0006a8140) (5) Data frame handling\nI0110 14:57:57.619382    3510 log.go:172] (0xc0006a8140) (5) Data frame sent\nI0110 14:57:58.602217    3510 log.go:172] (0xc0007080b0) Data frame received for 1\nI0110 14:57:58.602985    3510 log.go:172] (0xc0007080b0) (0xc0002c59a0) Stream removed, broadcasting: 3\nI0110 14:57:58.603152    3510 log.go:172] (0xc000752140) (1) Data frame handling\nI0110 14:57:58.603244    3510 log.go:172] (0xc0007080b0) (0xc0006a8140) Stream removed, broadcasting: 5\nI0110 14:57:58.603344    3510 log.go:172] (0xc000752140) (1) Data frame sent\nI0110 14:57:58.603367    3510 log.go:172] (0xc0007080b0) (0xc000752000) Stream removed, broadcasting: 7\nI0110 14:57:58.603407    3510 log.go:172] (0xc0007080b0) (0xc000752140) Stream removed, broadcasting: 1\nI0110 14:57:58.603431    3510 log.go:172] (0xc0007080b0) Go away received\nI0110 14:57:58.603645    3510 log.go:172] (0xc0007080b0) (0xc000752140) Stream removed, broadcasting: 1\nI0110 14:57:58.603661    3510 log.go:172] (0xc0007080b0) (0xc0002c59a0) Stream removed, broadcasting: 3\nI0110 14:57:58.603677    3510 log.go:172] (0xc0007080b0) (0xc0006a8140) Stream removed, broadcasting: 5\nI0110 14:57:58.603693    3510 log.go:172] (0xc0007080b0) (0xc000752000) Stream removed, broadcasting: 7\n"

Jan 10 14:57:58.649: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:58:00.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7591" for this suite.
Jan 10 14:58:06.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:58:06.886: INFO: namespace kubectl-7591 deletion completed in 6.188695529s

• [SLOW TEST:19.386 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:58:06.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-c8e131bd-f318-4459-be6c-6d7bc727c713
STEP: Creating a pod to test consume configMaps
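[Editor's note] A sketch of the pod under test, reconstructed from the logged ConfigMap, pod, and container names; the image and mount path are assumptions. The ConfigMap is exposed through a `projected` volume source rather than a plain `configMap` volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-5988d3a7-9f9c-4c2a-8923-8b202992bba1
  namespace: projected-5875
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test                  # name taken from the log below
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0 # assumed test image
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume           # assumed mount path
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-c8e131bd-f318-4459-be6c-6d7bc727c713
```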
Jan 10 14:58:07.027: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5988d3a7-9f9c-4c2a-8923-8b202992bba1" in namespace "projected-5875" to be "success or failure"
Jan 10 14:58:07.044: INFO: Pod "pod-projected-configmaps-5988d3a7-9f9c-4c2a-8923-8b202992bba1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.803638ms
Jan 10 14:58:09.053: INFO: Pod "pod-projected-configmaps-5988d3a7-9f9c-4c2a-8923-8b202992bba1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025836249s
Jan 10 14:58:11.082: INFO: Pod "pod-projected-configmaps-5988d3a7-9f9c-4c2a-8923-8b202992bba1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054572s
Jan 10 14:58:13.134: INFO: Pod "pod-projected-configmaps-5988d3a7-9f9c-4c2a-8923-8b202992bba1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106537391s
Jan 10 14:58:15.149: INFO: Pod "pod-projected-configmaps-5988d3a7-9f9c-4c2a-8923-8b202992bba1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122045634s
STEP: Saw pod success
Jan 10 14:58:15.150: INFO: Pod "pod-projected-configmaps-5988d3a7-9f9c-4c2a-8923-8b202992bba1" satisfied condition "success or failure"
Jan 10 14:58:15.155: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-5988d3a7-9f9c-4c2a-8923-8b202992bba1 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 14:58:15.324: INFO: Waiting for pod pod-projected-configmaps-5988d3a7-9f9c-4c2a-8923-8b202992bba1 to disappear
Jan 10 14:58:15.333: INFO: Pod pod-projected-configmaps-5988d3a7-9f9c-4c2a-8923-8b202992bba1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:58:15.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5875" for this suite.
Jan 10 14:58:21.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:58:21.480: INFO: namespace projected-5875 deletion completed in 6.138735208s

• [SLOW TEST:14.594 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:58:21.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-74e244fd-b065-4fd8-ae3a-cf9a12f0f22b
STEP: Creating a pod to test consume secrets
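[Editor's note] A sketch of the pod under test, reconstructed from the logged Secret, pod, and container names; the image, mount path, and the exact mode value are assumptions. `defaultMode` sets the permission bits of the files projected from the Secret:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-612f3988-a3ce-4ac1-b29d-53ccf76f2e2a
  namespace: secrets-3940
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test                               # name taken from the log below
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0 # assumed test image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume                        # assumed mount path
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-74e244fd-b065-4fd8-ae3a-cf9a12f0f22b
      defaultMode: 0400    # assumed mode under test
```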
Jan 10 14:58:21.632: INFO: Waiting up to 5m0s for pod "pod-secrets-612f3988-a3ce-4ac1-b29d-53ccf76f2e2a" in namespace "secrets-3940" to be "success or failure"
Jan 10 14:58:21.638: INFO: Pod "pod-secrets-612f3988-a3ce-4ac1-b29d-53ccf76f2e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.249122ms
Jan 10 14:58:23.649: INFO: Pod "pod-secrets-612f3988-a3ce-4ac1-b29d-53ccf76f2e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016319364s
Jan 10 14:58:25.662: INFO: Pod "pod-secrets-612f3988-a3ce-4ac1-b29d-53ccf76f2e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029654345s
Jan 10 14:58:27.674: INFO: Pod "pod-secrets-612f3988-a3ce-4ac1-b29d-53ccf76f2e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041860573s
Jan 10 14:58:29.731: INFO: Pod "pod-secrets-612f3988-a3ce-4ac1-b29d-53ccf76f2e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098492854s
Jan 10 14:58:31.748: INFO: Pod "pod-secrets-612f3988-a3ce-4ac1-b29d-53ccf76f2e2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.115477571s
STEP: Saw pod success
Jan 10 14:58:31.748: INFO: Pod "pod-secrets-612f3988-a3ce-4ac1-b29d-53ccf76f2e2a" satisfied condition "success or failure"
Jan 10 14:58:31.764: INFO: Trying to get logs from node iruya-node pod pod-secrets-612f3988-a3ce-4ac1-b29d-53ccf76f2e2a container secret-volume-test: 
STEP: delete the pod
Jan 10 14:58:31.853: INFO: Waiting for pod pod-secrets-612f3988-a3ce-4ac1-b29d-53ccf76f2e2a to disappear
Jan 10 14:58:31.865: INFO: Pod pod-secrets-612f3988-a3ce-4ac1-b29d-53ccf76f2e2a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:58:31.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3940" for this suite.
Jan 10 14:58:37.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:58:38.083: INFO: namespace secrets-3940 deletion completed in 6.200614762s

• [SLOW TEST:16.602 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:58:38.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 10 14:58:38.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 10 14:58:38.342: INFO: stderr: ""
Jan 10 14:58:38.342: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:58:38.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4861" for this suite.
Jan 10 14:58:44.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:58:44.501: INFO: namespace kubectl-4861 deletion completed in 6.149950299s

• [SLOW TEST:6.418 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:58:44.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan 10 14:58:44.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3620'
Jan 10 14:58:44.987: INFO: stderr: ""
Jan 10 14:58:44.987: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 10 14:58:44.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3620'
Jan 10 14:58:45.099: INFO: stderr: ""
Jan 10 14:58:45.100: INFO: stdout: "update-demo-nautilus-6vk4w update-demo-nautilus-gsmdn "
Jan 10 14:58:45.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vk4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3620'
Jan 10 14:58:45.240: INFO: stderr: ""
Jan 10 14:58:45.240: INFO: stdout: ""
Jan 10 14:58:45.240: INFO: update-demo-nautilus-6vk4w is created but not running
Jan 10 14:58:50.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3620'
Jan 10 14:58:51.711: INFO: stderr: ""
Jan 10 14:58:51.711: INFO: stdout: "update-demo-nautilus-6vk4w update-demo-nautilus-gsmdn "
Jan 10 14:58:51.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vk4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3620'
Jan 10 14:58:52.578: INFO: stderr: ""
Jan 10 14:58:52.578: INFO: stdout: ""
Jan 10 14:58:52.578: INFO: update-demo-nautilus-6vk4w is created but not running
Jan 10 14:58:57.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3620'
Jan 10 14:58:57.743: INFO: stderr: ""
Jan 10 14:58:57.743: INFO: stdout: "update-demo-nautilus-6vk4w update-demo-nautilus-gsmdn "
Jan 10 14:58:57.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vk4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3620'
Jan 10 14:58:57.962: INFO: stderr: ""
Jan 10 14:58:57.963: INFO: stdout: "true"
Jan 10 14:58:57.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vk4w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3620'
Jan 10 14:58:58.097: INFO: stderr: ""
Jan 10 14:58:58.097: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 14:58:58.097: INFO: validating pod update-demo-nautilus-6vk4w
Jan 10 14:58:58.108: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 14:58:58.108: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 14:58:58.108: INFO: update-demo-nautilus-6vk4w is verified up and running
Jan 10 14:58:58.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gsmdn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3620'
Jan 10 14:58:58.209: INFO: stderr: ""
Jan 10 14:58:58.209: INFO: stdout: "true"
Jan 10 14:58:58.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gsmdn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3620'
Jan 10 14:58:58.306: INFO: stderr: ""
Jan 10 14:58:58.307: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 14:58:58.307: INFO: validating pod update-demo-nautilus-gsmdn
Jan 10 14:58:58.350: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 14:58:58.350: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 14:58:58.350: INFO: update-demo-nautilus-gsmdn is verified up and running
STEP: rolling-update to new replication controller
Jan 10 14:58:58.352: INFO: scanned /root for discovery docs: 
Jan 10 14:58:58.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3620'
Jan 10 14:59:31.311: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 10 14:59:31.312: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 10 14:59:31.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3620'
Jan 10 14:59:31.490: INFO: stderr: ""
Jan 10 14:59:31.490: INFO: stdout: "update-demo-kitten-25qh2 update-demo-kitten-qswj5 "
Jan 10 14:59:31.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-25qh2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3620'
Jan 10 14:59:31.601: INFO: stderr: ""
Jan 10 14:59:31.601: INFO: stdout: "true"
Jan 10 14:59:31.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-25qh2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3620'
Jan 10 14:59:31.723: INFO: stderr: ""
Jan 10 14:59:31.723: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 10 14:59:31.723: INFO: validating pod update-demo-kitten-25qh2
Jan 10 14:59:31.753: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 10 14:59:31.753: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 10 14:59:31.753: INFO: update-demo-kitten-25qh2 is verified up and running
Jan 10 14:59:31.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qswj5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3620'
Jan 10 14:59:31.905: INFO: stderr: ""
Jan 10 14:59:31.905: INFO: stdout: "true"
Jan 10 14:59:31.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qswj5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3620'
Jan 10 14:59:31.996: INFO: stderr: ""
Jan 10 14:59:31.996: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 10 14:59:31.996: INFO: validating pod update-demo-kitten-qswj5
Jan 10 14:59:32.036: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 10 14:59:32.036: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 10 14:59:32.036: INFO: update-demo-kitten-qswj5 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:59:32.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3620" for this suite.
Jan 10 14:59:56.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 14:59:56.241: INFO: namespace kubectl-3620 deletion completed in 24.195550843s

• [SLOW TEST:71.739 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
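The "waiting for all containers ... to come up" phase in the transcript re-probes each pod every 5 seconds until the go-template query over `.status.containerStatuses` prints "true". A self-contained sketch of that polling loop; `pod_running` is a stub standing in for the real `kubectl get pods ... -o template` probe, and here it reports not-running twice before succeeding:

```shell
ATTEMPT=0
pod_running() {
  # Stub for the kubectl go-template probe used in the log: Pending on the
  # first two calls, Running (exit 0) from the third call onward.
  ATTEMPT=$((ATTEMPT + 1))
  [ "$ATTEMPT" -ge 3 ]
}

pod="update-demo-nautilus-6vk4w"
until pod_running; do
  echo "$pod is created but not running"
  # the e2e framework sleeps 5s between probes; omitted here
done
echo "$pod is verified up and running"
```

The real loop also re-lists the pods by label on each iteration, since a rolling update can replace pod names between probes.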
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 14:59:56.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 10 14:59:56.317: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 10 14:59:56.330: INFO: Waiting for terminating namespaces to be deleted...
Jan 10 14:59:56.333: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 10 14:59:56.344: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 10 14:59:56.344: INFO: 	Container weave ready: true, restart count 0
Jan 10 14:59:56.344: INFO: 	Container weave-npc ready: true, restart count 0
Jan 10 14:59:56.344: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 10 14:59:56.344: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 10 14:59:56.344: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 10 14:59:56.353: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 10 14:59:56.353: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 10 14:59:56.353: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 10 14:59:56.353: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 10 14:59:56.353: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 10 14:59:56.353: INFO: 	Container coredns ready: true, restart count 0
Jan 10 14:59:56.353: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 10 14:59:56.353: INFO: 	Container etcd ready: true, restart count 0
Jan 10 14:59:56.353: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 10 14:59:56.353: INFO: 	Container weave ready: true, restart count 0
Jan 10 14:59:56.353: INFO: 	Container weave-npc ready: true, restart count 0
Jan 10 14:59:56.353: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 10 14:59:56.353: INFO: 	Container coredns ready: true, restart count 0
Jan 10 14:59:56.353: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 10 14:59:56.353: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 10 14:59:56.353: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 10 14:59:56.353: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e88e8adb3d2e6f], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 14:59:57.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4308" for this suite.
Jan 10 15:00:03.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:00:03.668: INFO: namespace sched-pred-4308 deletion completed in 6.214815904s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.427 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
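The FailedScheduling event above ("2 node(s) didn't match node selector") is exactly what a nonempty, unmatched `nodeSelector` produces. A sketch of a manifest that would reproduce it; the label `env: nonexistent` and the pod name are illustrative choices, assumed to match no node in the cluster:

```shell
# Write a pod spec whose nodeSelector matches no node. Applying it
# (kubectl apply -f restricted-pod.yaml) would leave the pod Pending with a
# FailedScheduling event like the one in the log above.
cat > restricted-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    env: nonexistent
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
echo "manifest written"
```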
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:00:03.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-47f8cf84-be1f-4a7c-ab76-1f53fe8fbc88
STEP: Creating a pod to test consume configMaps
Jan 10 15:00:03.833: INFO: Waiting up to 5m0s for pod "pod-configmaps-75739e5a-cc6b-4138-929d-86f650c63b74" in namespace "configmap-6214" to be "success or failure"
Jan 10 15:00:03.866: INFO: Pod "pod-configmaps-75739e5a-cc6b-4138-929d-86f650c63b74": Phase="Pending", Reason="", readiness=false. Elapsed: 32.257493ms
Jan 10 15:00:05.882: INFO: Pod "pod-configmaps-75739e5a-cc6b-4138-929d-86f650c63b74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048662512s
Jan 10 15:00:07.895: INFO: Pod "pod-configmaps-75739e5a-cc6b-4138-929d-86f650c63b74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061745956s
Jan 10 15:00:09.906: INFO: Pod "pod-configmaps-75739e5a-cc6b-4138-929d-86f650c63b74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072279051s
Jan 10 15:00:11.916: INFO: Pod "pod-configmaps-75739e5a-cc6b-4138-929d-86f650c63b74": Phase="Running", Reason="", readiness=true. Elapsed: 8.082331994s
Jan 10 15:00:13.932: INFO: Pod "pod-configmaps-75739e5a-cc6b-4138-929d-86f650c63b74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098401113s
STEP: Saw pod success
Jan 10 15:00:13.932: INFO: Pod "pod-configmaps-75739e5a-cc6b-4138-929d-86f650c63b74" satisfied condition "success or failure"
Jan 10 15:00:13.937: INFO: Trying to get logs from node iruya-node pod pod-configmaps-75739e5a-cc6b-4138-929d-86f650c63b74 container configmap-volume-test: 
STEP: delete the pod
Jan 10 15:00:14.309: INFO: Waiting for pod pod-configmaps-75739e5a-cc6b-4138-929d-86f650c63b74 to disappear
Jan 10 15:00:14.406: INFO: Pod pod-configmaps-75739e5a-cc6b-4138-929d-86f650c63b74 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:00:14.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6214" for this suite.
Jan 10 15:00:20.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:00:20.613: INFO: namespace configmap-6214 deletion completed in 6.195398452s

• [SLOW TEST:16.944 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
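The ConfigMap-volume test above creates a ConfigMap, mounts it into a pod that runs as a non-root UID, and waits for the pod to read the key and exit 0 (the "success or failure" condition). A sketch of the pair of manifests involved; the names, key, and UID below are illustrative, not taken from the log:

```shell
cat > configmap-pod.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # non-root, as the [LinuxOnly] variant requires
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
EOF
echo "manifests written"
```

With `restartPolicy: Never`, the pod reaches `Succeeded` once `cat` exits 0, which is the phase the log polls for.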
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:00:20.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jan 10 15:00:20.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1770'
Jan 10 15:00:21.132: INFO: stderr: ""
Jan 10 15:00:21.132: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jan 10 15:00:22.142: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 15:00:22.142: INFO: Found 0 / 1
Jan 10 15:00:23.146: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 15:00:23.146: INFO: Found 0 / 1
Jan 10 15:00:24.144: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 15:00:24.144: INFO: Found 0 / 1
Jan 10 15:00:25.139: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 15:00:25.140: INFO: Found 0 / 1
Jan 10 15:00:26.147: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 15:00:26.147: INFO: Found 0 / 1
Jan 10 15:00:27.144: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 15:00:27.144: INFO: Found 0 / 1
Jan 10 15:00:28.146: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 15:00:28.147: INFO: Found 0 / 1
Jan 10 15:00:29.142: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 15:00:29.143: INFO: Found 1 / 1
Jan 10 15:00:29.143: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 10 15:00:29.149: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 15:00:29.149: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan 10 15:00:29.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t6qz5 redis-master --namespace=kubectl-1770'
Jan 10 15:00:29.312: INFO: stderr: ""
Jan 10 15:00:29.312: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Jan 15:00:27.793 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Jan 15:00:27.793 # Server started, Redis version 3.2.12\n1:M 10 Jan 15:00:27.793 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Jan 15:00:27.793 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 10 15:00:29.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t6qz5 redis-master --namespace=kubectl-1770 --tail=1'
Jan 10 15:00:29.463: INFO: stderr: ""
Jan 10 15:00:29.464: INFO: stdout: "1:M 10 Jan 15:00:27.793 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 10 15:00:29.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t6qz5 redis-master --namespace=kubectl-1770 --limit-bytes=1'
Jan 10 15:00:29.580: INFO: stderr: ""
Jan 10 15:00:29.580: INFO: stdout: " "
STEP: exposing timestamps
Jan 10 15:00:29.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t6qz5 redis-master --namespace=kubectl-1770 --tail=1 --timestamps'
Jan 10 15:00:29.710: INFO: stderr: ""
Jan 10 15:00:29.710: INFO: stdout: "2020-01-10T15:00:27.794021372Z 1:M 10 Jan 15:00:27.793 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 10 15:00:32.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t6qz5 redis-master --namespace=kubectl-1770 --since=1s'
Jan 10 15:00:32.389: INFO: stderr: ""
Jan 10 15:00:32.389: INFO: stdout: ""
Jan 10 15:00:32.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t6qz5 redis-master --namespace=kubectl-1770 --since=24h'
Jan 10 15:00:32.577: INFO: stderr: ""
Jan 10 15:00:32.578: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Jan 15:00:27.793 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Jan 15:00:27.793 # Server started, Redis version 3.2.12\n1:M 10 Jan 15:00:27.793 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Jan 15:00:27.793 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan 10 15:00:32.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1770'
Jan 10 15:00:32.691: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 15:00:32.691: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 10 15:00:32.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1770'
Jan 10 15:00:32.777: INFO: stderr: "No resources found.\n"
Jan 10 15:00:32.777: INFO: stdout: ""
Jan 10 15:00:32.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1770 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 10 15:00:32.895: INFO: stderr: ""
Jan 10 15:00:32.895: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:00:32.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1770" for this suite.
Jan 10 15:00:55.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:00:55.111: INFO: namespace kubectl-1770 deletion completed in 22.129040294s

• [SLOW TEST:34.498 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
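The log-filtering spec exercises four `kubectl logs` flags: `--tail`, `--limit-bytes`, `--timestamps`, and `--since`. The first two have exact local analogues (`tail -n` and `head -c`), so their behavior can be checked against a stub log file without a cluster; the sketch below does that with a three-line stand-in for the redis-master log:

```shell
# A three-line stand-in for the redis-master container log seen above.
cat > pod.log <<'EOF'
1:M 10 Jan 15:00:27.793 # Server started, Redis version 3.2.12
1:M 10 Jan 15:00:27.793 # WARNING you have Transparent Huge Pages (THP) support enabled
1:M 10 Jan 15:00:27.793 * The server is now ready to accept connections on port 6379
EOF

# kubectl logs --tail=1        ~  last line only
last="$(tail -n 1 pod.log)"
# kubectl logs --limit-bytes=1 ~  first byte only
first_byte="$(head -c 1 pod.log)"

echo "tail=1      -> $last"
echo "limit-bytes -> $first_byte"
```

`--timestamps` prepends the RFC 3339 write time to each line, and `--since=1s` returning nothing (as in the log) just means no line was written in the last second.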
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:00:55.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 10 15:00:55.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7429'
Jan 10 15:00:55.370: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 10 15:00:55.370: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan 10 15:00:55.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7429'
Jan 10 15:00:55.767: INFO: stderr: ""
Jan 10 15:00:55.767: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:00:55.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7429" for this suite.
Jan 10 15:01:01.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:01:01.977: INFO: namespace kubectl-7429 deletion completed in 6.184527324s

• [SLOW TEST:6.865 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
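The stderr captured above warns that `kubectl run --generator=deployment/apps.v1` is deprecated. A sketch contrasting the warned-about form with the replacements the warning itself names; the commands are only echoed here, since no cluster is assumed:

```shell
# Deprecated form (implicitly creates a Deployment via a generator):
old='kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine'
# Replacements suggested by the warning: a bare pod, or an explicit Deployment.
new_pod='kubectl run --generator=run-pod/v1 e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine'
new_deploy='kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine'
printf '%s\n' "$old" "$new_pod" "$new_deploy"
```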
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:01:01.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-06b44783-b744-432e-a02d-505a61cbfd47
STEP: Creating a pod to test consume configMaps
Jan 10 15:01:02.167: INFO: Waiting up to 5m0s for pod "pod-configmaps-0fbacd9f-4efe-49bc-aa3a-478711941cb5" in namespace "configmap-3485" to be "success or failure"
Jan 10 15:01:02.176: INFO: Pod "pod-configmaps-0fbacd9f-4efe-49bc-aa3a-478711941cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.430383ms
Jan 10 15:01:04.187: INFO: Pod "pod-configmaps-0fbacd9f-4efe-49bc-aa3a-478711941cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020029114s
Jan 10 15:01:06.198: INFO: Pod "pod-configmaps-0fbacd9f-4efe-49bc-aa3a-478711941cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030768303s
Jan 10 15:01:08.249: INFO: Pod "pod-configmaps-0fbacd9f-4efe-49bc-aa3a-478711941cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082012892s
Jan 10 15:01:10.259: INFO: Pod "pod-configmaps-0fbacd9f-4efe-49bc-aa3a-478711941cb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092237803s
STEP: Saw pod success
Jan 10 15:01:10.259: INFO: Pod "pod-configmaps-0fbacd9f-4efe-49bc-aa3a-478711941cb5" satisfied condition "success or failure"
Jan 10 15:01:10.265: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0fbacd9f-4efe-49bc-aa3a-478711941cb5 container configmap-volume-test: 
STEP: delete the pod
Jan 10 15:01:10.340: INFO: Waiting for pod pod-configmaps-0fbacd9f-4efe-49bc-aa3a-478711941cb5 to disappear
Jan 10 15:01:10.370: INFO: Pod pod-configmaps-0fbacd9f-4efe-49bc-aa3a-478711941cb5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:01:10.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3485" for this suite.
Jan 10 15:01:16.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:01:16.578: INFO: namespace configmap-3485 deletion completed in 6.194819717s

• [SLOW TEST:14.599 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:01:16.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 10 15:01:25.409: INFO: Successfully updated pod "annotationupdate67b423d6-6785-479d-ace5-8a9d2f29e51a"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:01:27.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3616" for this suite.
Jan 10 15:01:49.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:01:49.866: INFO: namespace downward-api-3616 deletion completed in 22.373018319s

• [SLOW TEST:33.286 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:01:49.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:02:00.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8945" for this suite.
Jan 10 15:02:52.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:02:52.279: INFO: namespace kubelet-test-8945 deletion completed in 52.208025623s

• [SLOW TEST:62.410 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:02:52.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan 10 15:02:52.406: INFO: Waiting up to 5m0s for pod "client-containers-fddd2f5f-9354-4feb-be1d-52080c085f7d" in namespace "containers-5876" to be "success or failure"
Jan 10 15:02:52.413: INFO: Pod "client-containers-fddd2f5f-9354-4feb-be1d-52080c085f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02393ms
Jan 10 15:02:54.420: INFO: Pod "client-containers-fddd2f5f-9354-4feb-be1d-52080c085f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013378477s
Jan 10 15:02:56.432: INFO: Pod "client-containers-fddd2f5f-9354-4feb-be1d-52080c085f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025019081s
Jan 10 15:02:58.446: INFO: Pod "client-containers-fddd2f5f-9354-4feb-be1d-52080c085f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039730975s
Jan 10 15:03:00.496: INFO: Pod "client-containers-fddd2f5f-9354-4feb-be1d-52080c085f7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089007078s
STEP: Saw pod success
Jan 10 15:03:00.496: INFO: Pod "client-containers-fddd2f5f-9354-4feb-be1d-52080c085f7d" satisfied condition "success or failure"
Jan 10 15:03:00.509: INFO: Trying to get logs from node iruya-node pod client-containers-fddd2f5f-9354-4feb-be1d-52080c085f7d container test-container: 
STEP: delete the pod
Jan 10 15:03:00.573: INFO: Waiting for pod client-containers-fddd2f5f-9354-4feb-be1d-52080c085f7d to disappear
Jan 10 15:03:00.637: INFO: Pod client-containers-fddd2f5f-9354-4feb-be1d-52080c085f7d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:03:00.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5876" for this suite.
Jan 10 15:03:06.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:03:06.783: INFO: namespace containers-5876 deletion completed in 6.134849254s

• [SLOW TEST:14.502 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:03:06.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:03:14.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5937" for this suite.
Jan 10 15:03:57.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:03:57.199: INFO: namespace kubelet-test-5937 deletion completed in 42.193719098s

• [SLOW TEST:50.416 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:03:57.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-14952363-f978-46f7-9780-976674014c8c in namespace container-probe-8162
Jan 10 15:04:05.355: INFO: Started pod liveness-14952363-f978-46f7-9780-976674014c8c in namespace container-probe-8162
STEP: checking the pod's current state and verifying that restartCount is present
Jan 10 15:04:05.360: INFO: Initial restart count of pod liveness-14952363-f978-46f7-9780-976674014c8c is 0
Jan 10 15:04:25.765: INFO: Restart count of pod container-probe-8162/liveness-14952363-f978-46f7-9780-976674014c8c is now 1 (20.4048577s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:04:25.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8162" for this suite.
Jan 10 15:04:31.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:04:32.104: INFO: namespace container-probe-8162 deletion completed in 6.205775013s

• [SLOW TEST:34.904 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:04:32.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:04:40.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2337" for this suite.
Jan 10 15:04:46.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:04:46.527: INFO: namespace kubelet-test-2337 deletion completed in 6.165623952s

• [SLOW TEST:14.423 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:04:46.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 10 15:04:46.679: INFO: Waiting up to 5m0s for pod "pod-1f76299a-16ae-4234-9db2-2e48d8a2baf9" in namespace "emptydir-8529" to be "success or failure"
Jan 10 15:04:46.697: INFO: Pod "pod-1f76299a-16ae-4234-9db2-2e48d8a2baf9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.615061ms
Jan 10 15:04:48.707: INFO: Pod "pod-1f76299a-16ae-4234-9db2-2e48d8a2baf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027534295s
Jan 10 15:04:50.724: INFO: Pod "pod-1f76299a-16ae-4234-9db2-2e48d8a2baf9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044268086s
Jan 10 15:04:52.733: INFO: Pod "pod-1f76299a-16ae-4234-9db2-2e48d8a2baf9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053553891s
Jan 10 15:04:54.742: INFO: Pod "pod-1f76299a-16ae-4234-9db2-2e48d8a2baf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061941836s
STEP: Saw pod success
Jan 10 15:04:54.742: INFO: Pod "pod-1f76299a-16ae-4234-9db2-2e48d8a2baf9" satisfied condition "success or failure"
Jan 10 15:04:54.745: INFO: Trying to get logs from node iruya-node pod pod-1f76299a-16ae-4234-9db2-2e48d8a2baf9 container test-container: 
STEP: delete the pod
Jan 10 15:04:54.782: INFO: Waiting for pod pod-1f76299a-16ae-4234-9db2-2e48d8a2baf9 to disappear
Jan 10 15:04:54.797: INFO: Pod pod-1f76299a-16ae-4234-9db2-2e48d8a2baf9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:04:54.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8529" for this suite.
Jan 10 15:05:00.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:05:00.967: INFO: namespace emptydir-8529 deletion completed in 6.163651226s

• [SLOW TEST:14.439 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:05:00.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 10 15:05:10.091: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3505 pod-service-account-560d16f1-6db9-42ab-93b8-7a15263cd55a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 10 15:05:10.633: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3505 pod-service-account-560d16f1-6db9-42ab-93b8-7a15263cd55a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 10 15:05:11.159: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3505 pod-service-account-560d16f1-6db9-42ab-93b8-7a15263cd55a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:05:11.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3505" for this suite.
Jan 10 15:05:17.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:05:17.971: INFO: namespace svcaccounts-3505 deletion completed in 6.321041269s

• [SLOW TEST:17.003 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:05:17.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 10 15:05:38.283: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 10 15:05:38.313: INFO: Pod pod-with-prestop-http-hook still exists
Jan 10 15:05:40.314: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 10 15:05:40.336: INFO: Pod pod-with-prestop-http-hook still exists
Jan 10 15:05:42.314: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 10 15:05:42.323: INFO: Pod pod-with-prestop-http-hook still exists
Jan 10 15:05:44.314: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 10 15:05:44.324: INFO: Pod pod-with-prestop-http-hook still exists
Jan 10 15:05:46.314: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 10 15:05:46.335: INFO: Pod pod-with-prestop-http-hook still exists
Jan 10 15:05:48.314: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 10 15:05:48.326: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:05:48.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3019" for this suite.
Jan 10 15:06:10.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:06:10.612: INFO: namespace container-lifecycle-hook-3019 deletion completed in 22.244600789s

• [SLOW TEST:52.640 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 10 15:06:10.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 10 15:06:10.878: INFO: Number of nodes with available pods: 0
Jan 10 15:06:10.878: INFO: Node iruya-node is running more than one daemon pod
Jan 10 15:06:11.916: INFO: Number of nodes with available pods: 0
Jan 10 15:06:11.916: INFO: Node iruya-node is running more than one daemon pod
Jan 10 15:06:13.189: INFO: Number of nodes with available pods: 0
Jan 10 15:06:13.189: INFO: Node iruya-node is running more than one daemon pod
Jan 10 15:06:13.912: INFO: Number of nodes with available pods: 0
Jan 10 15:06:13.913: INFO: Node iruya-node is running more than one daemon pod
Jan 10 15:06:14.900: INFO: Number of nodes with available pods: 0
Jan 10 15:06:14.901: INFO: Node iruya-node is running more than one daemon pod
Jan 10 15:06:15.918: INFO: Number of nodes with available pods: 0
Jan 10 15:06:15.918: INFO: Node iruya-node is running more than one daemon pod
Jan 10 15:06:19.085: INFO: Number of nodes with available pods: 0
Jan 10 15:06:19.085: INFO: Node iruya-node is running more than one daemon pod
Jan 10 15:06:20.075: INFO: Number of nodes with available pods: 0
Jan 10 15:06:20.075: INFO: Node iruya-node is running more than one daemon pod
Jan 10 15:06:20.894: INFO: Number of nodes with available pods: 0
Jan 10 15:06:20.894: INFO: Node iruya-node is running more than one daemon pod
Jan 10 15:06:21.905: INFO: Number of nodes with available pods: 1
Jan 10 15:06:21.905: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 10 15:06:22.917: INFO: Number of nodes with available pods: 2
Jan 10 15:06:22.918: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 10 15:06:23.038: INFO: Number of nodes with available pods: 2
Jan 10 15:06:23.038: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4761, will wait for the garbage collector to delete the pods
Jan 10 15:06:24.131: INFO: Deleting DaemonSet.extensions daemon-set took: 11.907666ms
Jan 10 15:06:24.432: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.094082ms
Jan 10 15:06:37.950: INFO: Number of nodes with available pods: 0
Jan 10 15:06:37.950: INFO: Number of running nodes: 0, number of available pods: 0
Jan 10 15:06:38.021: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4761/daemonsets","resourceVersion":"20043458"},"items":null}

Jan 10 15:06:38.032: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4761/pods","resourceVersion":"20043458"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 10 15:06:38.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4761" for this suite.
Jan 10 15:06:44.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 15:06:44.274: INFO: namespace daemonsets-4761 deletion completed in 6.202003036s

• [SLOW TEST:33.662 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jan 10 15:06:44.276: INFO: Running AfterSuite actions on all nodes
Jan 10 15:06:44.276: INFO: Running AfterSuite actions on node 1
Jan 10 15:06:44.276: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 7835.287 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS