I1230 12:56:11.726563 8 e2e.go:243] Starting e2e run "aa9ca122-57d3-4ee7-98bf-213cd2f210ae" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577710570 - Will randomize all specs
Will run 215 of 4412 specs

Dec 30 12:56:12.088: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 12:56:12.092: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 30 12:56:12.120: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 30 12:56:12.171: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 30 12:56:12.171: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 30 12:56:12.171: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 30 12:56:12.191: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 30 12:56:12.191: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 30 12:56:12.191: INFO: e2e test version: v1.15.7
Dec 30 12:56:12.200: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 12:56:12.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
Dec 30 12:56:12.334: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Dec 30 12:56:12.354: INFO: Waiting up to 5m0s for pod "client-containers-14410595-c164-4587-abb6-c34237531ed1" in namespace "containers-6493" to be "success or failure"
Dec 30 12:56:12.376: INFO: Pod "client-containers-14410595-c164-4587-abb6-c34237531ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.326103ms
Dec 30 12:56:14.384: INFO: Pod "client-containers-14410595-c164-4587-abb6-c34237531ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02988318s
Dec 30 12:56:16.397: INFO: Pod "client-containers-14410595-c164-4587-abb6-c34237531ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042926535s
Dec 30 12:56:18.409: INFO: Pod "client-containers-14410595-c164-4587-abb6-c34237531ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054063276s
Dec 30 12:56:20.418: INFO: Pod "client-containers-14410595-c164-4587-abb6-c34237531ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063688723s
Dec 30 12:56:22.430: INFO: Pod "client-containers-14410595-c164-4587-abb6-c34237531ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075448136s
Dec 30 12:56:24.438: INFO: Pod "client-containers-14410595-c164-4587-abb6-c34237531ed1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.084009655s
STEP: Saw pod success
Dec 30 12:56:24.439: INFO: Pod "client-containers-14410595-c164-4587-abb6-c34237531ed1" satisfied condition "success or failure"
Dec 30 12:56:24.443: INFO: Trying to get logs from node iruya-node pod client-containers-14410595-c164-4587-abb6-c34237531ed1 container test-container:
STEP: delete the pod
Dec 30 12:56:24.565: INFO: Waiting for pod client-containers-14410595-c164-4587-abb6-c34237531ed1 to disappear
Dec 30 12:56:24.615: INFO: Pod client-containers-14410595-c164-4587-abb6-c34237531ed1 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 12:56:24.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6493" for this suite.
Dec 30 12:56:30.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 12:56:30.751: INFO: namespace containers-6493 deletion completed in 6.124883862s
• [SLOW TEST:18.550 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
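The pod this test builds relies on spec.containers[].args replacing the image's Docker CMD while leaving the ENTRYPOINT intact. A minimal sketch of such a pod, built with the k8s.io/api/core/v1 types; the pod name, image, and argument values are illustrative, not the suite's actual ones:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"}, // illustrative name
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29", // illustrative image
				// Args replaces the image's default CMD; the image's
				// ENTRYPOINT is kept because Command is left unset.
				Args: []string{"echo", "overridden", "args"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```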
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 12:56:30.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 30 12:56:31.677: INFO: Pod name wrapped-volume-race-64de4560-d10e-42e0-b7e6-4021c9f91e67: Found 0 pods out of 5
Dec 30 12:56:36.688: INFO: Pod name wrapped-volume-race-64de4560-d10e-42e0-b7e6-4021c9f91e67: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-64de4560-d10e-42e0-b7e6-4021c9f91e67 in namespace emptydir-wrapper-3853, will wait for the garbage collector to delete the pods
Dec 30 12:57:12.944: INFO: Deleting ReplicationController wrapped-volume-race-64de4560-d10e-42e0-b7e6-4021c9f91e67 took: 26.964929ms
Dec 30 12:57:13.344: INFO: Terminating ReplicationController wrapped-volume-race-64de4560-d10e-42e0-b7e6-4021c9f91e67 pods took: 400.506184ms
STEP: Creating RC which spawns configmap-volume pods
Dec 30 12:58:07.353: INFO: Pod name wrapped-volume-race-7708d381-0ef1-4776-8eac-f4cba847e967: Found 0 pods out of 5
Dec 30 12:58:12.381: INFO: Pod name wrapped-volume-race-7708d381-0ef1-4776-8eac-f4cba847e967: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7708d381-0ef1-4776-8eac-f4cba847e967 in namespace emptydir-wrapper-3853, will wait for the garbage collector to delete the pods
Dec 30 12:58:50.716: INFO: Deleting ReplicationController wrapped-volume-race-7708d381-0ef1-4776-8eac-f4cba847e967 took: 16.323379ms
Dec 30 12:58:51.117: INFO: Terminating ReplicationController wrapped-volume-race-7708d381-0ef1-4776-8eac-f4cba847e967 pods took: 400.675263ms
STEP: Creating RC which spawns configmap-volume pods
Dec 30 12:59:36.980: INFO: Pod name wrapped-volume-race-fcef69a1-58fe-4b66-a959-f2af4bbea614: Found 0 pods out of 5
Dec 30 12:59:42.149: INFO: Pod name wrapped-volume-race-fcef69a1-58fe-4b66-a959-f2af4bbea614: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-fcef69a1-58fe-4b66-a959-f2af4bbea614 in namespace emptydir-wrapper-3853, will wait for the garbage collector to delete the pods
Dec 30 13:00:20.274: INFO: Deleting ReplicationController wrapped-volume-race-fcef69a1-58fe-4b66-a959-f2af4bbea614 took: 19.153965ms
Dec 30 13:00:20.675: INFO: Terminating ReplicationController wrapped-volume-race-fcef69a1-58fe-4b66-a959-f2af4bbea614 pods took: 400.87177ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:01:04.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3853" for this suite.
Dec 30 13:01:14.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:01:14.266: INFO: namespace emptydir-wrapper-3853 deletion completed in 10.127245132s
• [SLOW TEST:283.514 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
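The pods spawned by these ReplicationControllers mount many ConfigMap-backed volumes at once, which is the shape that historically raced in the kubelet's volume handling. A sketch of how such a pod spec can be assembled with the k8s.io/api/core/v1 types; the names, image, and command are illustrative:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapVolumePod mounts n ConfigMap-backed volumes into one container,
// mirroring the shape of the wrapped-volume-race pods in this test.
func configMapVolumePod(n int) *v1.Pod {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "wrapped-volume-race-"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"sleep", "10000"},
			}},
		},
	}
	for i := 0; i < n; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i)
		pod.Spec.Volumes = append(pod.Spec.Volumes, v1.Volume{
			Name: name,
			VolumeSource: v1.VolumeSource{
				ConfigMap: &v1.ConfigMapVolumeSource{
					LocalObjectReference: v1.LocalObjectReference{Name: name},
				},
			},
		})
		pod.Spec.Containers[0].VolumeMounts = append(pod.Spec.Containers[0].VolumeMounts,
			v1.VolumeMount{Name: name, MountPath: "/etc/" + name})
	}
	return pod
}

func main() {
	fmt.Println(len(configMapVolumePod(50).Spec.Volumes), "configmap volumes in pod template")
}
```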
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:01:14.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-4c852210-5813-4c23-9b44-b781ab774ddd
STEP: Creating a pod to test consume secrets
Dec 30 13:01:14.386: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2" in namespace "projected-5589" to be "success or failure"
Dec 30 13:01:14.394: INFO: Pod "pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.231733ms
Dec 30 13:01:16.403: INFO: Pod "pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016891708s
Dec 30 13:01:18.413: INFO: Pod "pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027100649s
Dec 30 13:01:20.426: INFO: Pod "pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040203946s
Dec 30 13:01:22.436: INFO: Pod "pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049876245s
Dec 30 13:01:24.444: INFO: Pod "pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.058769471s
Dec 30 13:01:26.455: INFO: Pod "pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.069181839s
Dec 30 13:01:28.469: INFO: Pod "pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.083483283s
Dec 30 13:01:30.486: INFO: Pod "pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.100149913s
STEP: Saw pod success
Dec 30 13:01:30.486: INFO: Pod "pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2" satisfied condition "success or failure"
Dec 30 13:01:30.496: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2 container projected-secret-volume-test:
STEP: delete the pod
Dec 30 13:01:30.693: INFO: Waiting for pod pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2 to disappear
Dec 30 13:01:30.705: INFO: Pod pod-projected-secrets-8a18016e-d26a-4491-bc56-138cd45267f2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:01:30.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5589" for this suite.
Dec 30 13:01:36.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:01:36.922: INFO: namespace projected-5589 deletion completed in 6.21094878s
• [SLOW TEST:22.656 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
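The "mappings and Item Mode" in the spec name refer to a projected volume that maps a secret key to a custom path with an explicit file mode. A minimal sketch of that volume source, using the k8s.io/api/core/v1 types; the key, path, and mode below are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // the "Item Mode" the test sets on the mapped key
	vol := v1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					Secret: &v1.SecretProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: "projected-secret-test-map"},
						// Map the secret key to a custom path with an explicit mode.
						Items: []v1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```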
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:01:36.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-1955601b-0a3f-4a53-8674-e60cc7eb5171
STEP: Creating a pod to test consume secrets
Dec 30 13:01:37.059: INFO: Waiting up to 5m0s for pod "pod-secrets-15933785-f294-4d72-9eb0-da4ee6f169ba" in namespace "secrets-8609" to be "success or failure"
Dec 30 13:01:37.109: INFO: Pod "pod-secrets-15933785-f294-4d72-9eb0-da4ee6f169ba": Phase="Pending", Reason="", readiness=false. Elapsed: 49.598926ms
Dec 30 13:01:39.115: INFO: Pod "pod-secrets-15933785-f294-4d72-9eb0-da4ee6f169ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055918369s
Dec 30 13:01:41.130: INFO: Pod "pod-secrets-15933785-f294-4d72-9eb0-da4ee6f169ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07148709s
Dec 30 13:01:43.137: INFO: Pod "pod-secrets-15933785-f294-4d72-9eb0-da4ee6f169ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077633581s
Dec 30 13:01:45.146: INFO: Pod "pod-secrets-15933785-f294-4d72-9eb0-da4ee6f169ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08712362s
Dec 30 13:01:47.168: INFO: Pod "pod-secrets-15933785-f294-4d72-9eb0-da4ee6f169ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108910393s
STEP: Saw pod success
Dec 30 13:01:47.168: INFO: Pod "pod-secrets-15933785-f294-4d72-9eb0-da4ee6f169ba" satisfied condition "success or failure"
Dec 30 13:01:47.176: INFO: Trying to get logs from node iruya-node pod pod-secrets-15933785-f294-4d72-9eb0-da4ee6f169ba container secret-volume-test:
STEP: delete the pod
Dec 30 13:01:47.290: INFO: Waiting for pod pod-secrets-15933785-f294-4d72-9eb0-da4ee6f169ba to disappear
Dec 30 13:01:47.296: INFO: Pod pod-secrets-15933785-f294-4d72-9eb0-da4ee6f169ba no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:01:47.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8609" for this suite.
Dec 30 13:01:53.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:01:53.481: INFO: namespace secrets-8609 deletion completed in 6.145696404s
• [SLOW TEST:16.559 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
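Here defaultMode sets the permission bits applied to every file projected from the secret. A minimal sketch of the volume source under test, using the k8s.io/api/core/v1 types; the secret name and mode value are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	defaultMode := int32(0400) // applied to every file projected from the secret
	vol := v1.Volume{
		Name: "secret-volume",
		VolumeSource: v1.VolumeSource{
			Secret: &v1.SecretVolumeSource{
				SecretName:  "secret-test-example", // illustrative name
				DefaultMode: &defaultMode,
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```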
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:01:53.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 30 13:01:53.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ee93e5b-0017-437b-8a9c-3da496b341e3" in namespace "projected-2848" to be "success or failure"
Dec 30 13:01:53.656: INFO: Pod "downwardapi-volume-0ee93e5b-0017-437b-8a9c-3da496b341e3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.630221ms
Dec 30 13:01:55.675: INFO: Pod "downwardapi-volume-0ee93e5b-0017-437b-8a9c-3da496b341e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031477296s
Dec 30 13:01:57.694: INFO: Pod "downwardapi-volume-0ee93e5b-0017-437b-8a9c-3da496b341e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050153432s
Dec 30 13:01:59.703: INFO: Pod "downwardapi-volume-0ee93e5b-0017-437b-8a9c-3da496b341e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059634208s
Dec 30 13:02:01.713: INFO: Pod "downwardapi-volume-0ee93e5b-0017-437b-8a9c-3da496b341e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069556233s
Dec 30 13:02:03.719: INFO: Pod "downwardapi-volume-0ee93e5b-0017-437b-8a9c-3da496b341e3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075033295s
Dec 30 13:02:05.727: INFO: Pod "downwardapi-volume-0ee93e5b-0017-437b-8a9c-3da496b341e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.083793348s
STEP: Saw pod success
Dec 30 13:02:05.728: INFO: Pod "downwardapi-volume-0ee93e5b-0017-437b-8a9c-3da496b341e3" satisfied condition "success or failure"
Dec 30 13:02:05.732: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0ee93e5b-0017-437b-8a9c-3da496b341e3 container client-container:
STEP: delete the pod
Dec 30 13:02:05.826: INFO: Waiting for pod downwardapi-volume-0ee93e5b-0017-437b-8a9c-3da496b341e3 to disappear
Dec 30 13:02:05.892: INFO: Pod downwardapi-volume-0ee93e5b-0017-437b-8a9c-3da496b341e3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:02:05.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2848" for this suite.
Dec 30 13:02:11.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:02:12.032: INFO: namespace projected-2848 deletion completed in 6.134196336s
• [SLOW TEST:18.550 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
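The memory limit is exposed to the container through a downward API volume file backed by a resourceFieldRef. A minimal sketch of that volume source with the k8s.io/api/core/v1 types; the volume name, file path, and container name are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			DownwardAPI: &v1.DownwardAPIVolumeSource{
				Items: []v1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					// resourceFieldRef exposes the container's memory limit
					// as a file inside the volume.
					ResourceFieldRef: &v1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```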
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:02:12.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-d82169c7-83d0-4310-86e6-086069954790
STEP: Creating a pod to test consume secrets
Dec 30 13:02:12.399: INFO: Waiting up to 5m0s for pod "pod-secrets-ac470f5e-d8d2-49b1-9c84-5e0c8188d542" in namespace "secrets-3287" to be "success or failure"
Dec 30 13:02:12.406: INFO: Pod "pod-secrets-ac470f5e-d8d2-49b1-9c84-5e0c8188d542": Phase="Pending", Reason="", readiness=false. Elapsed: 6.763839ms
Dec 30 13:02:14.455: INFO: Pod "pod-secrets-ac470f5e-d8d2-49b1-9c84-5e0c8188d542": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056190647s
Dec 30 13:02:16.466: INFO: Pod "pod-secrets-ac470f5e-d8d2-49b1-9c84-5e0c8188d542": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066787373s
Dec 30 13:02:18.477: INFO: Pod "pod-secrets-ac470f5e-d8d2-49b1-9c84-5e0c8188d542": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07770534s
Dec 30 13:02:20.513: INFO: Pod "pod-secrets-ac470f5e-d8d2-49b1-9c84-5e0c8188d542": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113699901s
Dec 30 13:02:22.537: INFO: Pod "pod-secrets-ac470f5e-d8d2-49b1-9c84-5e0c8188d542": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.137831559s
STEP: Saw pod success
Dec 30 13:02:22.537: INFO: Pod "pod-secrets-ac470f5e-d8d2-49b1-9c84-5e0c8188d542" satisfied condition "success or failure"
Dec 30 13:02:22.552: INFO: Trying to get logs from node iruya-node pod pod-secrets-ac470f5e-d8d2-49b1-9c84-5e0c8188d542 container secret-volume-test:
STEP: delete the pod
Dec 30 13:02:22.764: INFO: Waiting for pod pod-secrets-ac470f5e-d8d2-49b1-9c84-5e0c8188d542 to disappear
Dec 30 13:02:22.773: INFO: Pod pod-secrets-ac470f5e-d8d2-49b1-9c84-5e0c8188d542 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:02:22.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3287" for this suite.
Dec 30 13:02:28.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:02:28.944: INFO: namespace secrets-3287 deletion completed in 6.154223722s
STEP: Destroying namespace "secret-namespace-1551" for this suite.
Dec 30 13:02:34.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:02:35.101: INFO: namespace secret-namespace-1551 deletion completed in 6.157067107s
• [SLOW TEST:23.068 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
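The optional flag on a secret volume is what lets this next pod keep running while one of its referenced secrets is deleted and another is created mid-test. A minimal sketch of the volume source, using the k8s.io/api/core/v1 types; the secret name is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	optional := true // pod stays healthy even if the secret is absent or deleted
	vol := v1.Volume{
		Name: "secret-volume",
		VolumeSource: v1.VolumeSource{
			Secret: &v1.SecretVolumeSource{
				SecretName: "s-test-opt-del", // illustrative name
				Optional:   &optional,
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```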
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:02:35.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-6d74a564-98fb-43f8-92fe-1f31409085ea
STEP: Creating secret with name s-test-opt-upd-6961ce7b-9936-4dbe-9a39-3a401fc357cf
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-6d74a564-98fb-43f8-92fe-1f31409085ea
STEP: Updating secret s-test-opt-upd-6961ce7b-9936-4dbe-9a39-3a401fc357cf
STEP: Creating secret with name s-test-opt-create-e81058a2-aeef-4d7f-bf8f-27c02110cd19
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:02:51.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4545" for this suite.
Dec 30 13:03:13.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:03:13.709: INFO: namespace projected-4545 deletion completed in 22.193406598s
• [SLOW TEST:38.608 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
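The label-update test that follows works because a downward API volume file backed by a fieldRef is kept in sync by the kubelet after the pod's metadata changes. A minimal sketch of such a volume source with the k8s.io/api/core/v1 types; the volume name and file path are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			DownwardAPI: &v1.DownwardAPIVolumeSource{
				Items: []v1.DownwardAPIVolumeFile{{
					Path: "labels",
					// fieldRef files are refreshed by the kubelet, so editing
					// the pod's labels shows up in the mounted file.
					FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```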
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:03:13.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 30 13:03:22.441: INFO: Successfully updated pod "labelsupdate9ce9724b-1960-41b5-ad0e-9a68814058a3"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:03:26.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9732" for this suite.
Dec 30 13:03:48.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:03:48.760: INFO: namespace downward-api-9732 deletion completed in 22.125125245s
• [SLOW TEST:35.050 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:03:48.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-843495b9-c716-45cb-a544-59a2bda794bb
STEP: Creating a pod to test consume configMaps
Dec 30 13:03:49.470: INFO: Waiting up to 5m0s for pod "pod-configmaps-9cfa96ad-6213-431b-a953-e4eaf5f7c50d" in namespace "configmap-2154" to be "success or failure"
Dec 30 13:03:49.479: INFO: Pod "pod-configmaps-9cfa96ad-6213-431b-a953-e4eaf5f7c50d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.615559ms
Dec 30 13:03:51.488: INFO: Pod "pod-configmaps-9cfa96ad-6213-431b-a953-e4eaf5f7c50d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017733634s
Dec 30 13:03:53.493: INFO: Pod "pod-configmaps-9cfa96ad-6213-431b-a953-e4eaf5f7c50d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023224949s
Dec 30 13:03:55.499: INFO: Pod "pod-configmaps-9cfa96ad-6213-431b-a953-e4eaf5f7c50d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028848778s
Dec 30 13:03:57.512: INFO: Pod "pod-configmaps-9cfa96ad-6213-431b-a953-e4eaf5f7c50d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041775822s
STEP: Saw pod success
Dec 30 13:03:57.512: INFO: Pod "pod-configmaps-9cfa96ad-6213-431b-a953-e4eaf5f7c50d" satisfied condition "success or failure"
Dec 30 13:03:57.524: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9cfa96ad-6213-431b-a953-e4eaf5f7c50d container configmap-volume-test:
STEP: delete the pod
Dec 30 13:03:57.732: INFO: Waiting for pod pod-configmaps-9cfa96ad-6213-431b-a953-e4eaf5f7c50d to disappear
Dec 30 13:03:57.740: INFO: Pod pod-configmaps-9cfa96ad-6213-431b-a953-e4eaf5f7c50d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:03:57.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2154" for this suite.
Dec 30 13:04:03.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:04:03.931: INFO: namespace configmap-2154 deletion completed in 6.181255014s
• [SLOW TEST:15.171 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job
  should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:04:03.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Dec 30 13:04:04.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4477 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 30 13:04:18.397: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 30 13:04:18.398: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:04:20.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4477" for this suite.
Dec 30 13:04:26.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:04:26.652: INFO: namespace kubectl-4477 deletion completed in 6.237357538s
• [SLOW TEST:22.720 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
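The hostPath test that follows mounts a directory from the node's filesystem and checks the mode bits the container observes on it. A minimal sketch of the hostPath volume source, using the k8s.io/api/core/v1 types; the volume name and path are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	vol := v1.Volume{
		Name: "test-volume",
		VolumeSource: v1.VolumeSource{
			// Mounts a directory straight from the node running the pod.
			HostPath: &v1.HostPathVolumeSource{Path: "/tmp/host-path-test"}, // illustrative path
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```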
SSSS
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:04:26.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Dec 30 13:04:26.813: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6364" to be "success or failure"
Dec 30 13:04:26.900: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 86.307333ms
Dec 30 13:04:28.910: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096239186s
Dec 30 13:04:30.923: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109359456s
Dec 30 13:04:32.930: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116448612s
Dec 30 13:04:34.942: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128529559s
Dec 30 13:04:36.951: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.137165626s
Dec 30 13:04:38.960: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.145938511s
STEP: Saw pod success
Dec 30 13:04:38.960: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 30 13:04:38.964: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1:
STEP: delete the pod
Dec 30 13:04:39.077: INFO: Waiting for pod pod-host-path-test to disappear
Dec 30 13:04:39.143: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:04:39.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6364" for this suite.
Dec 30 13:04:45.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:04:45.321: INFO: namespace hostpath-6364 deletion completed in 6.167576678s
• [SLOW TEST:18.668 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
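Variable expansion, exercised by the next spec, is performed by the kubelet: $(VAR) references in a container's command and args are substituted from the container's environment before the process starts, with no shell involved. A minimal sketch of such a container spec; the names and values are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	c := v1.Container{
		Name:  "dapi-container",
		Image: "busybox:1.29", // illustrative image
		Env:   []v1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
		// The kubelet substitutes $(TEST_VAR) in command/args before the
		// container starts; no shell is involved.
		Command: []string{"echo", "$(TEST_VAR)"},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
```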
SSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:04:45.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Dec 30 13:04:45.436: INFO: Waiting up to 5m0s for pod "var-expansion-774c0053-8b7b-49d3-9c67-0a18043c6297" in namespace "var-expansion-899" to be "success or failure"
Dec 30 13:04:45.442: INFO: Pod "var-expansion-774c0053-8b7b-49d3-9c67-0a18043c6297": Phase="Pending", Reason="", readiness=false. Elapsed: 6.606117ms
Dec 30 13:04:47.463: INFO: Pod "var-expansion-774c0053-8b7b-49d3-9c67-0a18043c6297": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027416765s
Dec 30 13:04:49.472: INFO: Pod "var-expansion-774c0053-8b7b-49d3-9c67-0a18043c6297": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035859116s
Dec 30 13:04:51.483: INFO: Pod "var-expansion-774c0053-8b7b-49d3-9c67-0a18043c6297": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046906466s
Dec 30 13:04:53.492: INFO: Pod "var-expansion-774c0053-8b7b-49d3-9c67-0a18043c6297": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056066726s
STEP: Saw pod success
Dec 30 13:04:53.492: INFO: Pod "var-expansion-774c0053-8b7b-49d3-9c67-0a18043c6297" satisfied condition "success or failure"
Dec 30 13:04:53.497: INFO: Trying to get logs from node iruya-node pod var-expansion-774c0053-8b7b-49d3-9c67-0a18043c6297 container dapi-container:
STEP: delete the pod
Dec 30 13:04:53.537: INFO: Waiting for pod var-expansion-774c0053-8b7b-49d3-9c67-0a18043c6297 to disappear
Dec 30 13:04:53.543: INFO: Pod var-expansion-774c0053-8b7b-49d3-9c67-0a18043c6297 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:04:53.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-899" for this suite.
Dec 30 13:04:59.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:04:59.676: INFO: namespace var-expansion-899 deletion completed in 6.125417354s
• [SLOW TEST:14.355 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
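The quota test below creates a ResourceQuota capping the namespace at two pods, then an RC asking for more, and expects a failure condition on the RC until it is scaled back within quota. A minimal sketch of such a quota object; the name mirrors the log but is otherwise illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A hard cap of two pods in the namespace; an RC requesting more
	// replicas then surfaces a failure condition until scaled down.
	quota := &v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: v1.ResourceQuotaSpec{
			Hard: v1.ResourceList{v1.ResourcePods: resource.MustParse("2")},
		},
	}
	out, _ := json.MarshalIndent(quota, "", "  ")
	fmt.Println(string(out))
}
```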
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:04:59.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 13:04:59.812: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Dec 30 13:05:03.098: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:05:03.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3799" for this suite.
Dec 30 13:05:17.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:05:17.725: INFO: namespace replication-controller-3799 deletion completed in 14.132306164s
• [SLOW TEST:18.048 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:05:17.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-2a353c04-d1fc-4366-ab98-13f3c6d167f0
STEP: Creating configMap with name cm-test-opt-upd-11fcb6b6-49ca-466d-99a2-7cc257124175
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-2a353c04-d1fc-4366-ab98-13f3c6d167f0
STEP: Updating configmap cm-test-opt-upd-11fcb6b6-49ca-466d-99a2-7cc257124175
STEP: Creating configMap with name cm-test-opt-create-4c22a036-ba9b-4bb6-b625-c4cb1a4fa667
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:06:49.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8354" for this suite.
Dec 30 13:07:13.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:07:13.776: INFO: namespace projected-8354 deletion completed in 24.136775974s
• [SLOW TEST:116.051 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:07:13.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2030
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 30 13:07:13.871: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 30 13:07:56.117: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-2030 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 13:07:56.117: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 13:07:56.707: INFO: Waiting for endpoints: map[]
Dec 30 13:07:56.766: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-2030 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 13:07:56.766: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 13:07:57.149: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:07:57.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2030" for this suite.
Dec 30 13:08:21.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:08:21.334: INFO: namespace pod-network-test-2030 deletion completed in 24.173713406s
• [SLOW TEST:67.555 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:08:21.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:08:21.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8148" for this suite.
Dec 30 13:08:27.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:08:27.854: INFO: namespace kubelet-test-8148 deletion completed in 6.168138211s
• [SLOW TEST:6.520 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:08:27.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Dec 30 13:08:27.994: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Dec 30 13:08:28.557: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created
Dec 30 13:08:30.728: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 13:08:32.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 13:08:34.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 13:08:37.508: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 13:08:38.736: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 13:08:40.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 13:08:42.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308108, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 13:08:50.843: INFO: Waited 6.028389696s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:08:51.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4273" for this suite.
Dec 30 13:09:00.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:09:00.368: INFO: namespace aggregator-4273 deletion completed in 8.63512063s
• [SLOW TEST:32.513 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:08:51.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4273" for this suite. Dec 30 13:09:00.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:09:00.368: INFO: namespace aggregator-4273 deletion completed in 8.63512063s • [SLOW TEST:32.513 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:09:00.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 30 13:09:00.521: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21c00b0f-f269-4cc5-88a0-72c02c370ac2" in namespace "projected-2235" to be "success or failure" Dec 30 13:09:00.626: INFO: Pod "downwardapi-volume-21c00b0f-f269-4cc5-88a0-72c02c370ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 104.847702ms Dec 30 13:09:02.634: INFO: Pod "downwardapi-volume-21c00b0f-f269-4cc5-88a0-72c02c370ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11294434s Dec 30 13:09:04.648: INFO: Pod "downwardapi-volume-21c00b0f-f269-4cc5-88a0-72c02c370ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1274109s Dec 30 13:09:06.658: INFO: Pod "downwardapi-volume-21c00b0f-f269-4cc5-88a0-72c02c370ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136683305s Dec 30 13:09:08.682: INFO: Pod "downwardapi-volume-21c00b0f-f269-4cc5-88a0-72c02c370ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160696785s Dec 30 13:09:10.689: INFO: Pod "downwardapi-volume-21c00b0f-f269-4cc5-88a0-72c02c370ac2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.168503433s STEP: Saw pod success Dec 30 13:09:10.690: INFO: Pod "downwardapi-volume-21c00b0f-f269-4cc5-88a0-72c02c370ac2" satisfied condition "success or failure" Dec 30 13:09:10.693: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-21c00b0f-f269-4cc5-88a0-72c02c370ac2 container client-container: STEP: delete the pod Dec 30 13:09:10.813: INFO: Waiting for pod downwardapi-volume-21c00b0f-f269-4cc5-88a0-72c02c370ac2 to disappear Dec 30 13:09:10.819: INFO: Pod downwardapi-volume-21c00b0f-f269-4cc5-88a0-72c02c370ac2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:09:10.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2235" for this suite. Dec 30 13:09:16.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:09:16.975: INFO: namespace projected-2235 deletion completed in 6.148836986s • [SLOW TEST:16.607 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:09:16.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-f251c94b-e49e-4191-a485-f8bccc17c512 STEP: Creating a pod to test consume secrets Dec 30 13:09:17.169: INFO: Waiting up to 5m0s for pod "pod-secrets-b3b5e8e8-7c14-47b7-8eb3-08ebb3ab8372" in namespace "secrets-9847" to be "success or failure" Dec 30 13:09:17.230: INFO: Pod "pod-secrets-b3b5e8e8-7c14-47b7-8eb3-08ebb3ab8372": Phase="Pending", Reason="", readiness=false. Elapsed: 60.541196ms Dec 30 13:09:19.273: INFO: Pod "pod-secrets-b3b5e8e8-7c14-47b7-8eb3-08ebb3ab8372": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10405717s Dec 30 13:09:21.288: INFO: Pod "pod-secrets-b3b5e8e8-7c14-47b7-8eb3-08ebb3ab8372": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118720393s Dec 30 13:09:23.295: INFO: Pod "pod-secrets-b3b5e8e8-7c14-47b7-8eb3-08ebb3ab8372": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126096046s Dec 30 13:09:25.309: INFO: Pod "pod-secrets-b3b5e8e8-7c14-47b7-8eb3-08ebb3ab8372": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139584626s Dec 30 13:09:27.317: INFO: Pod "pod-secrets-b3b5e8e8-7c14-47b7-8eb3-08ebb3ab8372": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.147563661s Dec 30 13:09:29.348: INFO: Pod "pod-secrets-b3b5e8e8-7c14-47b7-8eb3-08ebb3ab8372": Phase="Running", Reason="", readiness=true. Elapsed: 12.178448803s Dec 30 13:09:31.358: INFO: Pod "pod-secrets-b3b5e8e8-7c14-47b7-8eb3-08ebb3ab8372": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.189301029s STEP: Saw pod success Dec 30 13:09:31.358: INFO: Pod "pod-secrets-b3b5e8e8-7c14-47b7-8eb3-08ebb3ab8372" satisfied condition "success or failure" Dec 30 13:09:31.362: INFO: Trying to get logs from node iruya-node pod pod-secrets-b3b5e8e8-7c14-47b7-8eb3-08ebb3ab8372 container secret-volume-test: STEP: delete the pod Dec 30 13:09:31.531: INFO: Waiting for pod pod-secrets-b3b5e8e8-7c14-47b7-8eb3-08ebb3ab8372 to disappear Dec 30 13:09:31.621: INFO: Pod pod-secrets-b3b5e8e8-7c14-47b7-8eb3-08ebb3ab8372 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:09:31.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9847" for this suite. Dec 30 13:09:37.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:09:37.973: INFO: namespace secrets-9847 deletion completed in 6.341452826s • [SLOW TEST:20.997 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:09:37.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-4d1e401f-3dbc-4a42-8941-c2d4a2b2bdde STEP: Creating a pod to test consume configMaps Dec 30 13:09:38.277: INFO: Waiting up to 5m0s for pod "pod-configmaps-db2b58dd-194a-4207-8c01-5181e1f092a0" in namespace "configmap-5301" to be "success or failure" Dec 30 13:09:38.281: INFO: Pod "pod-configmaps-db2b58dd-194a-4207-8c01-5181e1f092a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.401637ms Dec 30 13:09:40.291: INFO: Pod "pod-configmaps-db2b58dd-194a-4207-8c01-5181e1f092a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01396713s Dec 30 13:09:42.307: INFO: Pod "pod-configmaps-db2b58dd-194a-4207-8c01-5181e1f092a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029704474s Dec 30 13:09:44.314: INFO: Pod "pod-configmaps-db2b58dd-194a-4207-8c01-5181e1f092a0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.03699382s Dec 30 13:09:46.325: INFO: Pod "pod-configmaps-db2b58dd-194a-4207-8c01-5181e1f092a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047774572s Dec 30 13:09:48.390: INFO: Pod "pod-configmaps-db2b58dd-194a-4207-8c01-5181e1f092a0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.113507489s Dec 30 13:09:50.404: INFO: Pod "pod-configmaps-db2b58dd-194a-4207-8c01-5181e1f092a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.127327164s STEP: Saw pod success Dec 30 13:09:50.405: INFO: Pod "pod-configmaps-db2b58dd-194a-4207-8c01-5181e1f092a0" satisfied condition "success or failure" Dec 30 13:09:50.411: INFO: Trying to get logs from node iruya-node pod pod-configmaps-db2b58dd-194a-4207-8c01-5181e1f092a0 container configmap-volume-test: STEP: delete the pod Dec 30 13:09:50.542: INFO: Waiting for pod pod-configmaps-db2b58dd-194a-4207-8c01-5181e1f092a0 to disappear Dec 30 13:09:50.559: INFO: Pod pod-configmaps-db2b58dd-194a-4207-8c01-5181e1f092a0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:09:50.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5301" for this suite. Dec 30 13:09:56.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:09:56.793: INFO: namespace configmap-5301 deletion completed in 6.228804445s • [SLOW TEST:18.818 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:09:56.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 30 13:09:56.950: INFO: Waiting up to 5m0s for pod "downward-api-f5338ed8-a568-4f61-bb74-57791ef24d50" in namespace "downward-api-2125" to be "success or failure" Dec 30 13:09:57.028: INFO: Pod "downward-api-f5338ed8-a568-4f61-bb74-57791ef24d50": Phase="Pending", Reason="", readiness=false. Elapsed: 77.199523ms Dec 30 13:09:59.035: INFO: Pod "downward-api-f5338ed8-a568-4f61-bb74-57791ef24d50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084791892s Dec 30 13:10:01.047: INFO: Pod "downward-api-f5338ed8-a568-4f61-bb74-57791ef24d50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096585598s Dec 30 13:10:03.056: INFO: Pod "downward-api-f5338ed8-a568-4f61-bb74-57791ef24d50": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.105272451s Dec 30 13:10:05.071: INFO: Pod "downward-api-f5338ed8-a568-4f61-bb74-57791ef24d50": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120748781s Dec 30 13:10:07.091: INFO: Pod "downward-api-f5338ed8-a568-4f61-bb74-57791ef24d50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.140740639s STEP: Saw pod success Dec 30 13:10:07.091: INFO: Pod "downward-api-f5338ed8-a568-4f61-bb74-57791ef24d50" satisfied condition "success or failure" Dec 30 13:10:07.106: INFO: Trying to get logs from node iruya-node pod downward-api-f5338ed8-a568-4f61-bb74-57791ef24d50 container dapi-container: STEP: delete the pod Dec 30 13:10:07.431: INFO: Waiting for pod downward-api-f5338ed8-a568-4f61-bb74-57791ef24d50 to disappear Dec 30 13:10:07.444: INFO: Pod downward-api-f5338ed8-a568-4f61-bb74-57791ef24d50 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:10:07.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2125" for this suite. Dec 30 13:10:13.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:10:13.739: INFO: namespace downward-api-2125 deletion completed in 6.280881313s • [SLOW TEST:16.945 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:10:13.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 30 13:10:13.819: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1345cc1b-2110-4c9e-9ef5-8bbaf310d514" in namespace "downward-api-7605" to be "success or failure" Dec 30 13:10:13.891: INFO: Pod "downwardapi-volume-1345cc1b-2110-4c9e-9ef5-8bbaf310d514": Phase="Pending", Reason="", readiness=false. Elapsed: 72.289403ms Dec 30 13:10:15.897: INFO: Pod "downwardapi-volume-1345cc1b-2110-4c9e-9ef5-8bbaf310d514": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078570094s Dec 30 13:10:17.931: INFO: Pod "downwardapi-volume-1345cc1b-2110-4c9e-9ef5-8bbaf310d514": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.112377147s Dec 30 13:10:19.937: INFO: Pod "downwardapi-volume-1345cc1b-2110-4c9e-9ef5-8bbaf310d514": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118467654s Dec 30 13:10:21.949: INFO: Pod "downwardapi-volume-1345cc1b-2110-4c9e-9ef5-8bbaf310d514": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130338866s Dec 30 13:10:23.961: INFO: Pod "downwardapi-volume-1345cc1b-2110-4c9e-9ef5-8bbaf310d514": Phase="Pending", Reason="", readiness=false. Elapsed: 10.142610412s Dec 30 13:10:25.969: INFO: Pod "downwardapi-volume-1345cc1b-2110-4c9e-9ef5-8bbaf310d514": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.150499654s STEP: Saw pod success Dec 30 13:10:25.969: INFO: Pod "downwardapi-volume-1345cc1b-2110-4c9e-9ef5-8bbaf310d514" satisfied condition "success or failure" Dec 30 13:10:25.973: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1345cc1b-2110-4c9e-9ef5-8bbaf310d514 container client-container: STEP: delete the pod Dec 30 13:10:26.152: INFO: Waiting for pod downwardapi-volume-1345cc1b-2110-4c9e-9ef5-8bbaf310d514 to disappear Dec 30 13:10:26.185: INFO: Pod downwardapi-volume-1345cc1b-2110-4c9e-9ef5-8bbaf310d514 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:10:26.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7605" for this suite. Dec 30 13:10:32.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:10:32.324: INFO: namespace downward-api-7605 deletion completed in 6.133463868s • [SLOW TEST:18.585 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:10:32.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 30 13:10:32.411: INFO: Waiting up to 5m0s for pod "pod-0399cce6-8952-4782-8265-428fcb0b4fbd" in namespace "emptydir-7156" to be "success or failure" Dec 30 13:10:32.422: INFO: Pod "pod-0399cce6-8952-4782-8265-428fcb0b4fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120513ms Dec 30 13:10:34.427: INFO: Pod "pod-0399cce6-8952-4782-8265-428fcb0b4fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015470362s Dec 30 13:10:36.442: INFO: Pod "pod-0399cce6-8952-4782-8265-428fcb0b4fbd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.030527644s Dec 30 13:10:38.462: INFO: Pod "pod-0399cce6-8952-4782-8265-428fcb0b4fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050505487s Dec 30 13:10:40.485: INFO: Pod "pod-0399cce6-8952-4782-8265-428fcb0b4fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073762279s Dec 30 13:10:42.494: INFO: Pod "pod-0399cce6-8952-4782-8265-428fcb0b4fbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082124838s STEP: Saw pod success Dec 30 13:10:42.494: INFO: Pod "pod-0399cce6-8952-4782-8265-428fcb0b4fbd" satisfied condition "success or failure" Dec 30 13:10:42.498: INFO: Trying to get logs from node iruya-node pod pod-0399cce6-8952-4782-8265-428fcb0b4fbd container test-container: STEP: delete the pod Dec 30 13:10:42.666: INFO: Waiting for pod pod-0399cce6-8952-4782-8265-428fcb0b4fbd to disappear Dec 30 13:10:42.676: INFO: Pod pod-0399cce6-8952-4782-8265-428fcb0b4fbd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:10:42.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7156" for this suite. Dec 30 13:10:48.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:10:48.924: INFO: namespace emptydir-7156 deletion completed in 6.234647668s • [SLOW TEST:16.599 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:10:48.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Dec 30 13:10:49.002: INFO: namespace kubectl-4680 Dec 30 13:10:49.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4680' Dec 30 13:10:49.374: INFO: stderr: "" Dec 30 13:10:49.375: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
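The wait that follows polls pods matching the RC's selector until one is running. A minimal sketch of that pattern, assuming the clientset cs is built as in the earlier sketch; kubectl-4680 is from the log, the app=redis selector is inferred from the "Selector matched" lines below, and the real framework additionally checks container readiness:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// countRunning lists pods matching the RC's selector and counts how many
// are Running, which is roughly what the "Found n / 1" lines report.
func countRunning(cs kubernetes.Interface) (int, int, error) {
	pods, err := cs.CoreV1().Pods("kubectl-4680").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "app=redis"})
	if err != nil {
		return 0, 0, err
	}
	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
	}
	fmt.Printf("Found %d / %d\n", running, len(pods.Items))
	return running, len(pods.Items), nil
}
```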
Dec 30 13:10:50.383: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:10:50.383: INFO: Found 0 / 1 Dec 30 13:10:51.385: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:10:51.385: INFO: Found 0 / 1 Dec 30 13:10:52.397: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:10:52.397: INFO: Found 0 / 1 Dec 30 13:10:53.390: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:10:53.390: INFO: Found 0 / 1 Dec 30 13:10:54.383: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:10:54.383: INFO: Found 0 / 1 Dec 30 13:10:55.383: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:10:55.384: INFO: Found 0 / 1 Dec 30 13:10:56.385: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:10:56.385: INFO: Found 0 / 1 Dec 30 13:10:57.385: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:10:57.385: INFO: Found 1 / 1 Dec 30 13:10:57.385: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 30 13:10:57.390: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:10:57.390: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Dec 30 13:10:57.390: INFO: wait on redis-master startup in kubectl-4680 Dec 30 13:10:57.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-86jhj redis-master --namespace=kubectl-4680' Dec 30 13:10:57.621: INFO: stderr: "" Dec 30 13:10:57.621: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 30 Dec 13:10:55.899 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Dec 13:10:55.899 # Server started, Redis version 3.2.12\n1:M 30 Dec 13:10:55.899 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Dec 13:10:55.899 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Dec 30 13:10:57.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4680' Dec 30 13:10:57.835: INFO: stderr: "" Dec 30 13:10:57.835: INFO: stdout: "service/rm2 exposed\n" Dec 30 13:10:57.845: INFO: Service rm2 in namespace kubectl-4680 found. STEP: exposing service Dec 30 13:10:59.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4680' Dec 30 13:11:00.192: INFO: stderr: "" Dec 30 13:11:00.192: INFO: stdout: "service/rm3 exposed\n" Dec 30 13:11:00.212: INFO: Service rm3 in namespace kubectl-4680 found. 
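The "exposing RC" step amounts to creating a Service that selects the RC's pods. A rough client-go equivalent of the rm2 step, assuming a recent client-go: the service name, port 1234, and target port 6379 are from the kubectl command in the log, the app=redis selector is inferred from the wait loop above, and everything else is an assumption:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// exposeRedisMaster creates a Service in front of the redis-master pods,
// which is what `kubectl expose rc redis-master --name=rm2 ...` does.
func exposeRedisMaster(cs kubernetes.Interface) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: "kubectl-4680"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "redis"},
			Ports: []corev1.ServicePort{{
				Port:       1234,                 // service port, from the log
				TargetPort: intstr.FromInt(6379), // redis container port
			}},
		},
	}
	_, err := cs.CoreV1().Services("kubectl-4680").Create(
		context.TODO(), svc, metav1.CreateOptions{})
	return err
}
```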
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:11:02.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4680" for this suite. Dec 30 13:11:26.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:11:26.395: INFO: namespace kubectl-4680 deletion completed in 24.165512779s • [SLOW TEST:37.470 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:11:26.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 30 13:11:37.006: INFO: Successfully updated pod "pod-update-81d45db4-4af7-403c-b861-38469959683c" STEP: verifying the updated pod is in kubernetes Dec 30 13:11:37.015: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:11:37.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1160" for this suite. 
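The "updating the pod" step above follows the usual read-modify-write shape: fetch the live object, mutate it, and send it back with Update. A minimal sketch, assuming a recent client-go; the pod name and namespace are from the log, while the specific field mutated (a label here) is an assumption:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updatePodLabel reads the pod, mutates a label, and writes it back.
func updatePodLabel(cs kubernetes.Interface) error {
	ctx := context.TODO()
	pods := cs.CoreV1().Pods("pods-1160")
	pod, err := pods.Get(ctx, "pod-update-81d45db4-4af7-403c-b861-38469959683c",
		metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated" // the exact label/value is an assumption
	_, err = pods.Update(ctx, pod, metav1.UpdateOptions{})
	return err
}
```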
Dec 30 13:11:59.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:11:59.130: INFO: namespace pods-1160 deletion completed in 22.112211273s • [SLOW TEST:32.736 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:11:59.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-wgksp in namespace proxy-5227 I1230 13:11:59.457207 8 runners.go:180] Created replication controller with name: proxy-service-wgksp, namespace: proxy-5227, replica count: 1 I1230 13:12:00.508884 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1230 13:12:01.509238 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1230 13:12:02.509578 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1230 13:12:03.509918 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1230 13:12:04.510325 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1230 13:12:05.511004 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1230 13:12:06.511417 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1230 13:12:07.511914 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1230 13:12:08.512381 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1230 13:12:09.512791 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1230 13:12:10.513291 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1230 13:12:11.513595 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 
created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1230 13:12:12.513949 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1230 13:12:13.514303 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1230 13:12:14.514719 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1230 13:12:15.515087 8 runners.go:180] proxy-service-wgksp Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 30 13:12:15.527: INFO: setup took 16.233516868s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Dec 30 13:12:15.552: INFO: (0) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 25.098603ms) Dec 30 13:12:15.553: INFO: (0) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 25.252596ms) Dec 30 13:12:15.553: INFO: (0) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:1080/proxy/: ... (200; 25.663704ms) Dec 30 13:12:15.561: INFO: (0) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 33.953901ms) Dec 30 13:12:15.562: INFO: (0) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 34.137732ms) Dec 30 13:12:15.565: INFO: (0) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname1/proxy/: foo (200; 37.307133ms) Dec 30 13:12:15.565: INFO: (0) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 37.469121ms) Dec 30 13:12:15.569: INFO: (0) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 41.078717ms) Dec 30 13:12:15.570: INFO: (0) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... (200; 42.310908ms) Dec 30 13:12:15.571: INFO: (0) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname2/proxy/: bar (200; 44.194386ms) Dec 30 13:12:15.571: INFO: (0) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 43.930368ms) Dec 30 13:12:15.571: INFO: (0) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 44.269155ms) Dec 30 13:12:15.573: INFO: (0) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 45.885979ms) Dec 30 13:12:15.578: INFO: (0) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: ... 
(200; 13.592358ms) Dec 30 13:12:15.597: INFO: (1) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 13.939543ms) Dec 30 13:12:15.597: INFO: (1) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 13.902982ms) Dec 30 13:12:15.597: INFO: (1) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 14.575202ms) Dec 30 13:12:15.598: INFO: (1) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 14.638403ms) Dec 30 13:12:15.598: INFO: (1) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 14.758861ms) Dec 30 13:12:15.598: INFO: (1) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... (200; 14.608665ms) Dec 30 13:12:15.600: INFO: (1) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 17.03345ms) Dec 30 13:12:15.600: INFO: (1) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname2/proxy/: bar (200; 17.509253ms) Dec 30 13:12:15.601: INFO: (1) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 17.752906ms) Dec 30 13:12:15.603: INFO: (1) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname1/proxy/: foo (200; 20.175155ms) Dec 30 13:12:15.604: INFO: (1) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 20.984139ms) Dec 30 13:12:15.606: INFO: (1) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname1/proxy/: tls baz (200; 23.438927ms) Dec 30 13:12:15.608: INFO: (1) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 25.272875ms) Dec 30 13:12:15.624: INFO: (2) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:1080/proxy/: ... (200; 14.958335ms) Dec 30 13:12:15.624: INFO: (2) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 14.135342ms) Dec 30 13:12:15.628: INFO: (2) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 18.255333ms) Dec 30 13:12:15.628: INFO: (2) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 18.647501ms) Dec 30 13:12:15.628: INFO: (2) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... (200; 18.838764ms) Dec 30 13:12:15.628: INFO: (2) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: ... (200; 11.51649ms) Dec 30 13:12:15.653: INFO: (3) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: test (200; 11.710745ms) Dec 30 13:12:15.653: INFO: (3) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 12.14242ms) Dec 30 13:12:15.654: INFO: (3) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... 
(200; 12.858335ms) Dec 30 13:12:15.655: INFO: (3) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 13.596796ms) Dec 30 13:12:15.655: INFO: (3) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname1/proxy/: foo (200; 13.901998ms) Dec 30 13:12:15.658: INFO: (3) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 16.258317ms) Dec 30 13:12:15.658: INFO: (3) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname2/proxy/: bar (200; 16.873851ms) Dec 30 13:12:15.658: INFO: (3) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 17.037263ms) Dec 30 13:12:15.658: INFO: (3) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 16.965269ms) Dec 30 13:12:15.659: INFO: (3) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname1/proxy/: tls baz (200; 17.431973ms) Dec 30 13:12:15.664: INFO: (4) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 4.83833ms) Dec 30 13:12:15.670: INFO: (4) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 11.258644ms) Dec 30 13:12:15.671: INFO: (4) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 12.346419ms) Dec 30 13:12:15.672: INFO: (4) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: ... (200; 15.026748ms) Dec 30 13:12:15.674: INFO: (4) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 15.121115ms) Dec 30 13:12:15.674: INFO: (4) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 15.141264ms) Dec 30 13:12:15.674: INFO: (4) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 15.642833ms) Dec 30 13:12:15.676: INFO: (4) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 16.98089ms) Dec 30 13:12:15.676: INFO: (4) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... (200; 17.120979ms) Dec 30 13:12:15.676: INFO: (4) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 16.942964ms) Dec 30 13:12:15.676: INFO: (4) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 16.937854ms) Dec 30 13:12:15.677: INFO: (4) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname1/proxy/: tls baz (200; 17.71577ms) Dec 30 13:12:15.677: INFO: (4) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 17.870522ms) Dec 30 13:12:15.678: INFO: (4) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname1/proxy/: foo (200; 19.767085ms) Dec 30 13:12:15.688: INFO: (5) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 9.394003ms) Dec 30 13:12:15.688: INFO: (5) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 9.190315ms) Dec 30 13:12:15.688: INFO: (5) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 9.642099ms) Dec 30 13:12:15.689: INFO: (5) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 9.855166ms) Dec 30 13:12:15.688: INFO: (5) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... 
(200; 9.630099ms) Dec 30 13:12:15.692: INFO: (5) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 12.91497ms) Dec 30 13:12:15.692: INFO: (5) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 13.061638ms) Dec 30 13:12:15.692: INFO: (5) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 13.066548ms) Dec 30 13:12:15.692: INFO: (5) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:1080/proxy/: ... (200; 13.220807ms) Dec 30 13:12:15.694: INFO: (5) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 15.38113ms) Dec 30 13:12:15.694: INFO: (5) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: ... (200; 91.889607ms) Dec 30 13:12:15.790: INFO: (6) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 91.939121ms) Dec 30 13:12:15.790: INFO: (6) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 91.928139ms) Dec 30 13:12:15.791: INFO: (6) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 92.206596ms) Dec 30 13:12:15.791: INFO: (6) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... (200; 92.284114ms) Dec 30 13:12:15.791: INFO: (6) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 93.086863ms) Dec 30 13:12:15.793: INFO: (6) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: test<... (200; 17.014501ms) Dec 30 13:12:15.815: INFO: (7) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 16.891431ms) Dec 30 13:12:15.815: INFO: (7) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:1080/proxy/: ... (200; 17.073954ms) Dec 30 13:12:15.815: INFO: (7) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 17.309695ms) Dec 30 13:12:15.815: INFO: (7) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 17.313163ms) Dec 30 13:12:15.815: INFO: (7) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 17.588481ms) Dec 30 13:12:15.816: INFO: (7) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 17.847558ms) Dec 30 13:12:15.816: INFO: (7) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 18.277906ms) Dec 30 13:12:15.817: INFO: (7) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: test (200; 13.36138ms) Dec 30 13:12:15.838: INFO: (8) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 13.933946ms) Dec 30 13:12:15.839: INFO: (8) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 15.258915ms) Dec 30 13:12:15.839: INFO: (8) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 15.195414ms) Dec 30 13:12:15.839: INFO: (8) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 15.206627ms) Dec 30 13:12:15.839: INFO: (8) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... 
(200; 15.709007ms) Dec 30 13:12:15.839: INFO: (8) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 15.774875ms) Dec 30 13:12:15.840: INFO: (8) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:1080/proxy/: ... (200; 15.971211ms) Dec 30 13:12:15.841: INFO: (8) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: test<... (200; 15.929498ms) Dec 30 13:12:15.865: INFO: (9) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname1/proxy/: foo (200; 21.546974ms) Dec 30 13:12:15.866: INFO: (9) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 21.809742ms) Dec 30 13:12:15.866: INFO: (9) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: ... (200; 22.36821ms) Dec 30 13:12:15.868: INFO: (9) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 24.682431ms) Dec 30 13:12:15.870: INFO: (9) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 26.419059ms) Dec 30 13:12:15.870: INFO: (9) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname1/proxy/: tls baz (200; 26.459498ms) Dec 30 13:12:15.870: INFO: (9) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 26.530316ms) Dec 30 13:12:15.870: INFO: (9) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 26.61847ms) Dec 30 13:12:15.870: INFO: (9) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 26.630076ms) Dec 30 13:12:15.870: INFO: (9) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 26.640076ms) Dec 30 13:12:15.870: INFO: (9) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 26.81873ms) Dec 30 13:12:15.870: INFO: (9) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 26.692602ms) Dec 30 13:12:15.870: INFO: (9) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 26.758337ms) Dec 30 13:12:15.876: INFO: (10) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: test<... (200; 8.016073ms) Dec 30 13:12:15.884: INFO: (10) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 13.169766ms) Dec 30 13:12:15.889: INFO: (10) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 18.194956ms) Dec 30 13:12:15.891: INFO: (10) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 20.130647ms) Dec 30 13:12:15.891: INFO: (10) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 20.247776ms) Dec 30 13:12:15.891: INFO: (10) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:1080/proxy/: ... 
(200; 20.338101ms) Dec 30 13:12:15.891: INFO: (10) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 20.217301ms) Dec 30 13:12:15.891: INFO: (10) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname1/proxy/: tls baz (200; 20.160695ms) Dec 30 13:12:15.891: INFO: (10) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 20.307038ms) Dec 30 13:12:15.891: INFO: (10) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname1/proxy/: foo (200; 20.273807ms) Dec 30 13:12:15.891: INFO: (10) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 20.320158ms) Dec 30 13:12:15.891: INFO: (10) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname2/proxy/: bar (200; 20.395883ms) Dec 30 13:12:15.891: INFO: (10) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 20.443249ms) Dec 30 13:12:15.891: INFO: (10) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 20.309729ms) Dec 30 13:12:15.891: INFO: (10) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 20.34538ms) Dec 30 13:12:15.900: INFO: (11) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... (200; 9.449209ms) Dec 30 13:12:15.903: INFO: (11) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: test (200; 12.660108ms) Dec 30 13:12:15.904: INFO: (11) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:1080/proxy/: ... (200; 12.643606ms) Dec 30 13:12:15.904: INFO: (11) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 12.674915ms) Dec 30 13:12:15.904: INFO: (11) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname1/proxy/: foo (200; 12.869914ms) Dec 30 13:12:15.905: INFO: (11) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 13.330347ms) Dec 30 13:12:15.905: INFO: (11) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname2/proxy/: bar (200; 13.901263ms) Dec 30 13:12:15.908: INFO: (11) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 16.700214ms) Dec 30 13:12:15.908: INFO: (11) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 16.822296ms) Dec 30 13:12:15.908: INFO: (11) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 16.450749ms) Dec 30 13:12:15.908: INFO: (11) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 16.647595ms) Dec 30 13:12:15.908: INFO: (11) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 17.345816ms) Dec 30 13:12:15.916: INFO: (12) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 7.709362ms) Dec 30 13:12:15.920: INFO: (12) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 10.835628ms) Dec 30 13:12:15.920: INFO: (12) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 11.288023ms) Dec 30 13:12:15.921: INFO: (12) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:1080/proxy/: ... 
(200; 11.753743ms) Dec 30 13:12:15.921: INFO: (12) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 12.401522ms) Dec 30 13:12:15.921: INFO: (12) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... (200; 12.496182ms) Dec 30 13:12:15.921: INFO: (12) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 12.692603ms) Dec 30 13:12:15.923: INFO: (12) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 14.392158ms) Dec 30 13:12:15.923: INFO: (12) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 14.827133ms) Dec 30 13:12:15.923: INFO: (12) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname1/proxy/: tls baz (200; 14.743982ms) Dec 30 13:12:15.923: INFO: (12) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 14.909356ms) Dec 30 13:12:15.923: INFO: (12) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 14.716524ms) Dec 30 13:12:15.923: INFO: (12) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: ... (200; 14.760838ms) Dec 30 13:12:15.941: INFO: (13) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 16.41815ms) Dec 30 13:12:15.941: INFO: (13) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname1/proxy/: tls baz (200; 16.364249ms) Dec 30 13:12:15.941: INFO: (13) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 16.38664ms) Dec 30 13:12:15.941: INFO: (13) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... (200; 16.551362ms) Dec 30 13:12:15.941: INFO: (13) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 16.299882ms) Dec 30 13:12:15.941: INFO: (13) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 16.46475ms) Dec 30 13:12:15.942: INFO: (13) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 17.221946ms) Dec 30 13:12:15.942: INFO: (13) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 17.357233ms) Dec 30 13:12:15.942: INFO: (13) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname2/proxy/: bar (200; 17.164708ms) Dec 30 13:12:15.942: INFO: (13) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 17.337589ms) Dec 30 13:12:15.942: INFO: (13) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 17.603967ms) Dec 30 13:12:15.943: INFO: (13) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 18.578328ms) Dec 30 13:12:15.943: INFO: (13) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 18.67019ms) Dec 30 13:12:15.957: INFO: (14) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... (200; 13.720736ms) Dec 30 13:12:15.957: INFO: (14) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 13.852251ms) Dec 30 13:12:15.957: INFO: (14) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 13.969659ms) Dec 30 13:12:15.958: INFO: (14) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:1080/proxy/: ... 
(200; 14.044504ms) Dec 30 13:12:15.958: INFO: (14) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 14.270179ms) Dec 30 13:12:15.958: INFO: (14) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 14.87178ms) Dec 30 13:12:15.959: INFO: (14) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname1/proxy/: tls baz (200; 15.081725ms) Dec 30 13:12:15.959: INFO: (14) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 14.992843ms) Dec 30 13:12:15.959: INFO: (14) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: test (200; 16.236963ms) Dec 30 13:12:15.960: INFO: (14) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 16.192479ms) Dec 30 13:12:15.972: INFO: (15) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname1/proxy/: foo (200; 12.137369ms) Dec 30 13:12:15.972: INFO: (15) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 12.1818ms) Dec 30 13:12:15.972: INFO: (15) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 12.532509ms) Dec 30 13:12:15.973: INFO: (15) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 12.743632ms) Dec 30 13:12:15.973: INFO: (15) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 12.995644ms) Dec 30 13:12:15.973: INFO: (15) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 12.795699ms) Dec 30 13:12:15.973: INFO: (15) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 12.969252ms) Dec 30 13:12:15.973: INFO: (15) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 13.190166ms) Dec 30 13:12:15.973: INFO: (15) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 13.432411ms) Dec 30 13:12:15.974: INFO: (15) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... (200; 13.752642ms) Dec 30 13:12:15.974: INFO: (15) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname1/proxy/: tls baz (200; 13.842289ms) Dec 30 13:12:15.974: INFO: (15) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 14.259244ms) Dec 30 13:12:15.974: INFO: (15) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: ... (200; 14.710774ms) Dec 30 13:12:15.975: INFO: (15) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname2/proxy/: bar (200; 15.511069ms) Dec 30 13:12:15.976: INFO: (15) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 16.076048ms) Dec 30 13:12:15.982: INFO: (16) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 5.698449ms) Dec 30 13:12:15.983: INFO: (16) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 7.171141ms) Dec 30 13:12:15.983: INFO: (16) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 7.187079ms) Dec 30 13:12:15.985: INFO: (16) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: test<... 
(200; 9.666241ms) Dec 30 13:12:15.986: INFO: (16) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 9.769051ms) Dec 30 13:12:15.986: INFO: (16) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 9.955788ms) Dec 30 13:12:15.987: INFO: (16) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:1080/proxy/: ... (200; 11.125184ms) Dec 30 13:12:15.987: INFO: (16) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 11.26209ms) Dec 30 13:12:15.989: INFO: (16) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname1/proxy/: foo (200; 12.559376ms) Dec 30 13:12:15.989: INFO: (16) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname2/proxy/: bar (200; 12.563985ms) Dec 30 13:12:15.989: INFO: (16) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 12.554629ms) Dec 30 13:12:15.989: INFO: (16) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 12.773311ms) Dec 30 13:12:15.989: INFO: (16) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname1/proxy/: tls baz (200; 12.796282ms) Dec 30 13:12:15.993: INFO: (17) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 4.140532ms) Dec 30 13:12:15.997: INFO: (17) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: ... (200; 9.303943ms) Dec 30 13:12:15.999: INFO: (17) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 9.651202ms) Dec 30 13:12:15.999: INFO: (17) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname2/proxy/: bar (200; 9.796501ms) Dec 30 13:12:16.002: INFO: (17) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 13.358086ms) Dec 30 13:12:16.003: INFO: (17) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname1/proxy/: tls baz (200; 13.82623ms) Dec 30 13:12:16.003: INFO: (17) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... 
(200; 13.644908ms) Dec 30 13:12:16.003: INFO: (17) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 13.932522ms) Dec 30 13:12:16.003: INFO: (17) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 13.819122ms) Dec 30 13:12:16.004: INFO: (17) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname1/proxy/: foo (200; 14.278229ms) Dec 30 13:12:16.004: INFO: (17) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 14.454995ms) Dec 30 13:12:16.004: INFO: (17) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 14.309784ms) Dec 30 13:12:16.004: INFO: (17) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 14.534501ms) Dec 30 13:12:16.004: INFO: (17) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 15.225176ms) Dec 30 13:12:16.015: INFO: (18) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 10.595676ms) Dec 30 13:12:16.015: INFO: (18) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h/proxy/: test (200; 10.628244ms) Dec 30 13:12:16.015: INFO: (18) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 10.687303ms) Dec 30 13:12:16.015: INFO: (18) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:1080/proxy/: ... (200; 10.789114ms) Dec 30 13:12:16.016: INFO: (18) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 11.389325ms) Dec 30 13:12:16.018: INFO: (18) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 13.682372ms) Dec 30 13:12:16.018: INFO: (18) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 13.780929ms) Dec 30 13:12:16.019: INFO: (18) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 14.129039ms) Dec 30 13:12:16.019: INFO: (18) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: test<... 
(200; 14.9399ms) Dec 30 13:12:16.020: INFO: (18) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 15.060474ms) Dec 30 13:12:16.020: INFO: (18) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 15.033985ms) Dec 30 13:12:16.020: INFO: (18) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 15.513463ms) Dec 30 13:12:16.020: INFO: (18) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname1/proxy/: tls baz (200; 15.573851ms) Dec 30 13:12:16.028: INFO: (19) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:462/proxy/: tls qux (200; 8.02644ms) Dec 30 13:12:16.028: INFO: (19) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 7.799118ms) Dec 30 13:12:16.029: INFO: (19) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:443/proxy/: test (200; 9.61404ms) Dec 30 13:12:16.033: INFO: (19) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname1/proxy/: foo (200; 12.392293ms) Dec 30 13:12:16.033: INFO: (19) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname2/proxy/: tls qux (200; 12.552913ms) Dec 30 13:12:16.033: INFO: (19) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:162/proxy/: bar (200; 12.595893ms) Dec 30 13:12:16.033: INFO: (19) /api/v1/namespaces/proxy-5227/services/https:proxy-service-wgksp:tlsportname1/proxy/: tls baz (200; 12.623566ms) Dec 30 13:12:16.033: INFO: (19) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname2/proxy/: bar (200; 12.694392ms) Dec 30 13:12:16.033: INFO: (19) /api/v1/namespaces/proxy-5227/services/http:proxy-service-wgksp:portname1/proxy/: foo (200; 12.692667ms) Dec 30 13:12:16.034: INFO: (19) /api/v1/namespaces/proxy-5227/services/proxy-service-wgksp:portname2/proxy/: bar (200; 13.99267ms) Dec 30 13:12:16.035: INFO: (19) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:1080/proxy/: test<... (200; 14.258471ms) Dec 30 13:12:16.035: INFO: (19) /api/v1/namespaces/proxy-5227/pods/http:proxy-service-wgksp-9mz2h:1080/proxy/: ... (200; 14.506836ms) Dec 30 13:12:16.035: INFO: (19) /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/: foo (200; 14.353357ms) Dec 30 13:12:16.035: INFO: (19) /api/v1/namespaces/proxy-5227/pods/https:proxy-service-wgksp-9mz2h:460/proxy/: tls baz (200; 14.875978ms) STEP: deleting ReplicationController proxy-service-wgksp in namespace proxy-5227, will wait for the garbage collector to delete the pods Dec 30 13:12:16.100: INFO: Deleting ReplicationController proxy-service-wgksp took: 11.423448ms Dec 30 13:12:16.401: INFO: Terminating ReplicationController proxy-service-wgksp pods took: 300.704071ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:12:23.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5227" for this suite. 
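
The proxy round-trips above all hit the apiserver proxy subresource, in the forms pods/<name>:<port>/proxy and services/<name>:<portname>/proxy. A minimal client-go sketch of one such GET follows; it is illustrative rather than the framework's own helper, the pod name and port are taken from the log, and the argument-less DoRaw() assumes a client-go release contemporary with this v1.15 run (newer releases take a context).

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig this run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/namespaces/proxy-5227/pods/proxy-service-wgksp-9mz2h:160/proxy/
	// i.e. the pod:port proxy form exercised repeatedly above.
	body, err := clientset.CoreV1().RESTClient().Get().
		Namespace("proxy-5227").
		Resource("pods").
		Name("proxy-service-wgksp-9mz2h:160").
		SubResource("proxy").
		DoRaw() // v1.15-era signature; newer client-go wants DoRaw(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Printf("proxied response: %q\n", string(body)) // expect "foo" per the log
}
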
Dec 30 13:12:29.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:12:29.628: INFO: namespace proxy-5227 deletion completed in 6.147144145s • [SLOW TEST:30.497 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:12:29.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 30 13:12:29.756: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:12:30.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2347" for this suite. 
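
For reference, the create/delete cycle this spec asserts can be sketched with the apiextensions clientset. The group, kind, and plural below are invented for illustration, and the v1beta1 single-Version spec plus context-free Create/Delete signatures match the v1.15-era API rather than current releases.

package sketch

import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// createAndDeleteCRD registers an illustrative CRD, then removes it again,
// mirroring the "creating/deleting custom resource definition objects works"
// assertion above. client is an apiextensions clientset built from the same
// rest.Config as the core clientset.
func createAndDeleteCRD(client apiextclient.Interface) error {
	crd := &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // must be <plural>.<group>
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural:   "foos",
				Singular: "foo",
				Kind:     "Foo",
				ListKind: "FooList",
			},
		},
	}
	created, err := client.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd)
	if err != nil {
		return err
	}
	return client.ApiextensionsV1beta1().CustomResourceDefinitions().Delete(created.Name, &metav1.DeleteOptions{})
}
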
Dec 30 13:12:36.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:12:37.057: INFO: namespace custom-resource-definition-2347 deletion completed in 6.139573733s • [SLOW TEST:7.429 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:12:37.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-4035/configmap-test-2ac42e92-9d30-445a-9aee-21379d2bc3a0 STEP: Creating a pod to test consume configMaps Dec 30 13:12:37.205: INFO: Waiting up to 5m0s for pod "pod-configmaps-496a8f48-1f56-4960-9d88-34ec69c5d15c" in namespace "configmap-4035" to be "success or failure" Dec 30 13:12:37.209: INFO: Pod "pod-configmaps-496a8f48-1f56-4960-9d88-34ec69c5d15c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.544792ms Dec 30 13:12:39.217: INFO: Pod "pod-configmaps-496a8f48-1f56-4960-9d88-34ec69c5d15c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012307316s Dec 30 13:12:41.238: INFO: Pod "pod-configmaps-496a8f48-1f56-4960-9d88-34ec69c5d15c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032861846s Dec 30 13:12:43.250: INFO: Pod "pod-configmaps-496a8f48-1f56-4960-9d88-34ec69c5d15c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044994482s Dec 30 13:12:45.263: INFO: Pod "pod-configmaps-496a8f48-1f56-4960-9d88-34ec69c5d15c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057716919s Dec 30 13:12:47.271: INFO: Pod "pod-configmaps-496a8f48-1f56-4960-9d88-34ec69c5d15c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065922415s STEP: Saw pod success Dec 30 13:12:47.271: INFO: Pod "pod-configmaps-496a8f48-1f56-4960-9d88-34ec69c5d15c" satisfied condition "success or failure" Dec 30 13:12:47.274: INFO: Trying to get logs from node iruya-node pod pod-configmaps-496a8f48-1f56-4960-9d88-34ec69c5d15c container env-test: STEP: delete the pod Dec 30 13:12:47.987: INFO: Waiting for pod pod-configmaps-496a8f48-1f56-4960-9d88-34ec69c5d15c to disappear Dec 30 13:12:48.001: INFO: Pod pod-configmaps-496a8f48-1f56-4960-9d88-34ec69c5d15c no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:12:48.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4035" for this suite. 
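
The "consumable via the environment" pattern above amounts to a ConfigMap key wired into a container env var, which the pod then prints and the test reads back from logs. A hedged sketch, assuming a clientset built as in the proxy example; the ConfigMap name, key, image, and pod details are illustrative, not the test's actual fixtures.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// consumeConfigMapEnv creates a ConfigMap and a pod whose container env is
// populated from one of its keys via configMapKeyRef.
func consumeConfigMapEnv(clientset kubernetes.Interface, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "example-config"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := clientset.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-test-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"}, // prints CONFIG_DATA_1=value-1
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "example-config"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	_, err := clientset.CoreV1().Pods(ns).Create(pod)
	return err
}
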
Dec 30 13:12:54.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:12:54.196: INFO: namespace configmap-4035 deletion completed in 6.184103343s • [SLOW TEST:17.139 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:12:54.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-6693/secret-test-4edddd4b-eae6-4956-bfac-533d868cc3cf STEP: Creating a pod to test consume secrets Dec 30 13:12:54.405: INFO: Waiting up to 5m0s for pod "pod-configmaps-577e8efa-3ea6-4caa-accf-e9a366b38060" in namespace "secrets-6693" to be "success or failure" Dec 30 13:12:54.429: INFO: Pod "pod-configmaps-577e8efa-3ea6-4caa-accf-e9a366b38060": Phase="Pending", Reason="", readiness=false. Elapsed: 24.297724ms Dec 30 13:12:56.438: INFO: Pod "pod-configmaps-577e8efa-3ea6-4caa-accf-e9a366b38060": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033413396s Dec 30 13:12:58.448: INFO: Pod "pod-configmaps-577e8efa-3ea6-4caa-accf-e9a366b38060": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042571761s Dec 30 13:13:00.453: INFO: Pod "pod-configmaps-577e8efa-3ea6-4caa-accf-e9a366b38060": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048412721s Dec 30 13:13:02.460: INFO: Pod "pod-configmaps-577e8efa-3ea6-4caa-accf-e9a366b38060": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055037975s Dec 30 13:13:04.468: INFO: Pod "pod-configmaps-577e8efa-3ea6-4caa-accf-e9a366b38060": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062888538s STEP: Saw pod success Dec 30 13:13:04.468: INFO: Pod "pod-configmaps-577e8efa-3ea6-4caa-accf-e9a366b38060" satisfied condition "success or failure" Dec 30 13:13:04.471: INFO: Trying to get logs from node iruya-node pod pod-configmaps-577e8efa-3ea6-4caa-accf-e9a366b38060 container env-test: STEP: delete the pod Dec 30 13:13:04.566: INFO: Waiting for pod pod-configmaps-577e8efa-3ea6-4caa-accf-e9a366b38060 to disappear Dec 30 13:13:04.576: INFO: Pod pod-configmaps-577e8efa-3ea6-4caa-accf-e9a366b38060 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:13:04.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6693" for this suite. 
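
The Secrets variant just logged differs from the ConfigMap case only in the env source, a secretKeyRef instead of a configMapKeyRef. A sketch under the same assumptions (illustrative names, v1.15-era context-free Create):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// consumeSecretEnv mirrors the ConfigMap sketch with a Secret as the source.
func consumeSecretEnv(clientset kubernetes.Interface, ns string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "example-secret"},
		StringData: map[string]string{"data-1": "value-1"}, // encoded server-side
	}
	if _, err := clientset.CoreV1().Secrets(ns).Create(secret); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-test-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "example-secret"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	_, err := clientset.CoreV1().Pods(ns).Create(pod)
	return err
}
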
Dec 30 13:13:10.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:13:10.838: INFO: namespace secrets-6693 deletion completed in 6.248692209s • [SLOW TEST:16.642 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:13:10.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Dec 30 13:13:22.032: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:13:23.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7916" for this suite. 
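
Adoption and release in the spec above are driven purely by label matching: a bare pod whose labels match the ReplicaSet selector gains an ownerReference, and relabeling it makes the controller orphan it again. One way to perform the release step, sketched with an invented replacement label value and the v1.15-era Patch signature:

package sketch

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// releaseFromReplicaSet changes the pod's matched label so the ReplicaSet
// controller drops its ownerReference, the "released" half of the spec.
// The replacement value is arbitrary; it only has to stop matching the
// ReplicaSet selector.
func releaseFromReplicaSet(clientset kubernetes.Interface, ns, podName string) error {
	patch := []byte(`{"metadata":{"labels":{"name":"pod-adoption-release-released"}}}`)
	_, err := clientset.CoreV1().Pods(ns).Patch(podName, types.StrategicMergePatchType, patch)
	return err
}
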
Dec 30 13:15:59.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:15:59.202: INFO: namespace replicaset-7916 deletion completed in 2m36.128634133s • [SLOW TEST:168.363 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:15:59.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 30 13:15:59.280: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4307fa57-7927-4b5e-93da-ac3af783b2db" in namespace "downward-api-9676" to be "success or failure" Dec 30 13:15:59.289: INFO: Pod "downwardapi-volume-4307fa57-7927-4b5e-93da-ac3af783b2db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.434574ms Dec 30 13:16:01.322: INFO: Pod "downwardapi-volume-4307fa57-7927-4b5e-93da-ac3af783b2db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041396341s Dec 30 13:16:03.331: INFO: Pod "downwardapi-volume-4307fa57-7927-4b5e-93da-ac3af783b2db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050932171s Dec 30 13:16:05.340: INFO: Pod "downwardapi-volume-4307fa57-7927-4b5e-93da-ac3af783b2db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059161296s Dec 30 13:16:07.348: INFO: Pod "downwardapi-volume-4307fa57-7927-4b5e-93da-ac3af783b2db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067273129s Dec 30 13:16:09.354: INFO: Pod "downwardapi-volume-4307fa57-7927-4b5e-93da-ac3af783b2db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.073439893s STEP: Saw pod success Dec 30 13:16:09.354: INFO: Pod "downwardapi-volume-4307fa57-7927-4b5e-93da-ac3af783b2db" satisfied condition "success or failure" Dec 30 13:16:09.388: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4307fa57-7927-4b5e-93da-ac3af783b2db container client-container: STEP: delete the pod Dec 30 13:16:09.456: INFO: Waiting for pod downwardapi-volume-4307fa57-7927-4b5e-93da-ac3af783b2db to disappear Dec 30 13:16:09.481: INFO: Pod downwardapi-volume-4307fa57-7927-4b5e-93da-ac3af783b2db no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:16:09.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9676" for this suite. Dec 30 13:16:15.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:16:15.668: INFO: namespace downward-api-9676 deletion completed in 6.178218537s • [SLOW TEST:16.465 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:16:15.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Dec 30 13:16:35.892: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-176 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 30 13:16:35.892: INFO: >>> kubeConfig: /root/.kube/config Dec 30 13:16:36.307: INFO: Exec stderr: "" Dec 30 13:16:36.307: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-176 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 30 13:16:36.307: INFO: >>> kubeConfig: /root/.kube/config Dec 30 13:16:36.661: INFO: Exec stderr: "" Dec 30 13:16:36.661: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-176 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 30 13:16:36.661: INFO: >>> kubeConfig: /root/.kube/config Dec 30 13:16:37.036: INFO: Exec stderr: "" Dec 30 13:16:37.036: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] 
Namespace:e2e-kubelet-etc-hosts-176 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 30 13:16:37.036: INFO: >>> kubeConfig: /root/.kube/config Dec 30 13:16:37.299: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Dec 30 13:16:37.299: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-176 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 30 13:16:37.299: INFO: >>> kubeConfig: /root/.kube/config Dec 30 13:16:37.531: INFO: Exec stderr: "" Dec 30 13:16:37.531: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-176 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 30 13:16:37.532: INFO: >>> kubeConfig: /root/.kube/config Dec 30 13:16:37.864: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Dec 30 13:16:37.864: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-176 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 30 13:16:37.864: INFO: >>> kubeConfig: /root/.kube/config Dec 30 13:16:38.300: INFO: Exec stderr: "" Dec 30 13:16:38.300: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-176 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 30 13:16:38.300: INFO: >>> kubeConfig: /root/.kube/config Dec 30 13:16:38.644: INFO: Exec stderr: "" Dec 30 13:16:38.644: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-176 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 30 13:16:38.644: INFO: >>> kubeConfig: /root/.kube/config Dec 30 13:16:38.933: INFO: Exec stderr: "" Dec 30 13:16:38.933: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-176 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 30 13:16:38.933: INFO: >>> kubeConfig: /root/.kube/config Dec 30 13:16:39.167: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:16:39.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-176" for this suite. 
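
Each ExecWithOptions entry above corresponds to an exec subresource call streamed over SPDY. A sketch of the equivalent raw client-go call; the pod, container, and command mirror the log, while the helper name and the context-free Stream() are assumptions tied to this era's client-go (newer releases prefer StreamWithContext).

package sketch

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// catEtcHosts streams `cat /etc/hosts` from one container of the test pod,
// the same call the ExecWithOptions log lines record. config is the
// rest.Config the clientset was built from.
func catEtcHosts(clientset kubernetes.Interface, config *rest.Config, ns, pod, container string) (string, error) {
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		return "", err
	}
	return stdout.String(), nil
}
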
Dec 30 13:17:31.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:17:31.307: INFO: namespace e2e-kubelet-etc-hosts-176 deletion completed in 52.130623556s • [SLOW TEST:75.639 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:17:31.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 30 13:17:31.461: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d269d924-2f6d-4158-8fa9-0facf1e1c674" in namespace "downward-api-6953" to be "success or failure" Dec 30 13:17:31.469: INFO: Pod "downwardapi-volume-d269d924-2f6d-4158-8fa9-0facf1e1c674": Phase="Pending", Reason="", readiness=false. Elapsed: 7.075234ms Dec 30 13:17:33.478: INFO: Pod "downwardapi-volume-d269d924-2f6d-4158-8fa9-0facf1e1c674": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01668502s Dec 30 13:17:35.484: INFO: Pod "downwardapi-volume-d269d924-2f6d-4158-8fa9-0facf1e1c674": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022460348s Dec 30 13:17:37.490: INFO: Pod "downwardapi-volume-d269d924-2f6d-4158-8fa9-0facf1e1c674": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028283843s Dec 30 13:17:39.519: INFO: Pod "downwardapi-volume-d269d924-2f6d-4158-8fa9-0facf1e1c674": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056973098s Dec 30 13:17:41.527: INFO: Pod "downwardapi-volume-d269d924-2f6d-4158-8fa9-0facf1e1c674": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065797777s STEP: Saw pod success Dec 30 13:17:41.527: INFO: Pod "downwardapi-volume-d269d924-2f6d-4158-8fa9-0facf1e1c674" satisfied condition "success or failure" Dec 30 13:17:41.532: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d269d924-2f6d-4158-8fa9-0facf1e1c674 container client-container: STEP: delete the pod Dec 30 13:17:41.620: INFO: Waiting for pod downwardapi-volume-d269d924-2f6d-4158-8fa9-0facf1e1c674 to disappear Dec 30 13:17:41.627: INFO: Pod downwardapi-volume-d269d924-2f6d-4158-8fa9-0facf1e1c674 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:17:41.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6953" for this suite. 
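
The podname case just completed mounts a downwardAPI volume whose single item maps metadata.name to a file the test container then reads (e.g. /etc/podinfo/podname). A minimal sketch of that volume wiring; the volume and mount names are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// podnameDownwardVolume is the volume behind "should provide podname only":
// one downwardAPI item exposing the pod's own name as a file.
var podnameDownwardVolume = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		DownwardAPI: &corev1.DownwardAPIVolumeSource{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path:     "podname",
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
			}},
		},
	},
}

// Mounted into the container so the file lands at /etc/podinfo/podname.
var podnameMount = corev1.VolumeMount{Name: "podinfo", MountPath: "/etc/podinfo"}
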
Dec 30 13:17:47.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:17:47.873: INFO: namespace downward-api-6953 deletion completed in 6.197083152s • [SLOW TEST:16.566 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:17:47.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-ede456cf-490c-45bf-8f71-8123197ccba0 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:17:47.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2449" for this suite. Dec 30 13:17:54.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:17:54.165: INFO: namespace configmap-2449 deletion completed in 6.180907312s • [SLOW TEST:6.291 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:17:54.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Dec 30 13:17:54.430: INFO: Number of nodes with available pods: 0 Dec 30 13:17:54.431: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:17:55.780: INFO: Number of nodes with available pods: 0 Dec 30 13:17:55.781: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:17:57.005: INFO: Number of nodes with available pods: 0 Dec 30 13:17:57.005: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:17:57.464: INFO: Number of nodes with available pods: 0 Dec 30 13:17:57.464: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:17:58.492: INFO: Number of nodes with available pods: 0 Dec 30 13:17:58.492: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:00.320: INFO: Number of nodes with available pods: 0 Dec 30 13:18:00.320: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:01.014: INFO: Number of nodes with available pods: 0 Dec 30 13:18:01.014: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:01.622: INFO: Number of nodes with available pods: 0 Dec 30 13:18:01.622: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:02.455: INFO: Number of nodes with available pods: 0 Dec 30 13:18:02.455: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:03.505: INFO: Number of nodes with available pods: 0 Dec 30 13:18:03.505: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:04.447: INFO: Number of nodes with available pods: 0 Dec 30 13:18:04.447: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:05.454: INFO: Number of nodes with available pods: 2 Dec 30 13:18:05.454: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Dec 30 13:18:05.586: INFO: Number of nodes with available pods: 1 Dec 30 13:18:05.586: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:06.602: INFO: Number of nodes with available pods: 1 Dec 30 13:18:06.602: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:07.613: INFO: Number of nodes with available pods: 1 Dec 30 13:18:07.613: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:08.600: INFO: Number of nodes with available pods: 1 Dec 30 13:18:08.601: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:09.604: INFO: Number of nodes with available pods: 1 Dec 30 13:18:09.604: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:10.631: INFO: Number of nodes with available pods: 1 Dec 30 13:18:10.631: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:11.599: INFO: Number of nodes with available pods: 1 Dec 30 13:18:11.599: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:12.630: INFO: Number of nodes with available pods: 1 Dec 30 13:18:12.630: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:13.607: INFO: Number of nodes with available pods: 1 Dec 30 13:18:13.607: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:14.602: INFO: Number of nodes with available pods: 1 Dec 30 13:18:14.602: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:15.607: INFO: Number of nodes with available pods: 1 Dec 30 13:18:15.608: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:18:16.615: INFO: Number of nodes with available pods: 2 Dec 30 13:18:16.615: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5760, will wait for the garbage collector to delete the pods Dec 30 13:18:16.703: INFO: Deleting DaemonSet.extensions daemon-set took: 13.696965ms Dec 30 13:18:17.004: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.514997ms Dec 30 13:18:27.917: INFO: Number of nodes with available pods: 0 Dec 30 13:18:27.917: INFO: Number of running nodes: 0, number of available pods: 0 Dec 30 13:18:27.924: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5760/daemonsets","resourceVersion":"18643679"},"items":null} Dec 30 13:18:27.929: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5760/pods","resourceVersion":"18643679"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:18:27.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5760" for this suite. 
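
The "revived" step above works because the DaemonSet controller treats a Failed daemon pod as missing and schedules a replacement. One way to inject that failure, sketched with the v1.15-era UpdateStatus signature; the pod name is whichever daemon pod the test picks:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markDaemonPodFailed forces one daemon pod's phase to Failed; the DaemonSet
// controller then counts it as unavailable and creates a replacement, which
// is the revival the spec waits for.
func markDaemonPodFailed(clientset kubernetes.Interface, ns, podName string) error {
	pod, err := clientset.CoreV1().Pods(ns).Get(podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Status.Phase = corev1.PodFailed
	_, err = clientset.CoreV1().Pods(ns).UpdateStatus(pod)
	return err
}
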
Dec 30 13:18:33.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:18:34.086: INFO: namespace daemonsets-5760 deletion completed in 6.138785582s • [SLOW TEST:39.920 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:18:34.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 30 13:18:42.916: INFO: Successfully updated pod "annotationupdatedf667e01-b9ec-4ea5-bf33-d9a022d79671" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:18:44.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4297" for this suite. 
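
The annotation-update spec relies on the kubelet refreshing downwardAPI volume files (here one backed by fieldRef metadata.annotations) when pod metadata changes, so the test only has to update the pod and re-read the mounted file. A sketch of the update half, with an invented annotation key and value:

package sketch

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// bumpAnnotation patches an annotation on the running pod; with a downwardAPI
// volume file backed by metadata.annotations, the kubelet rewrites the
// mounted file shortly afterwards, which is what the spec polls for.
func bumpAnnotation(clientset kubernetes.Interface, ns, podName string) error {
	patch := []byte(`{"metadata":{"annotations":{"builder":"updated-value"}}}`)
	_, err := clientset.CoreV1().Pods(ns).Patch(podName, types.StrategicMergePatchType, patch)
	return err
}
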
Dec 30 13:19:07.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:19:07.095: INFO: namespace downward-api-4297 deletion completed in 22.092709975s • [SLOW TEST:33.008 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:19:07.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 30 13:19:07.201: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Dec 30 13:19:07.248: INFO: Pod name sample-pod: Found 0 pods out of 1 Dec 30 13:19:12.255: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 30 13:19:16.265: INFO: Creating deployment "test-rolling-update-deployment" Dec 30 13:19:16.276: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Dec 30 13:19:16.317: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Dec 30 13:19:18.329: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Dec 30 13:19:18.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308756, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308756, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308756, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308756, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 30 13:19:20.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308756, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308756, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308756, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308756, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 30 13:19:22.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308756, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308756, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308756, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713308756, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 30 13:19:24.339: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 30 13:19:24.349: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-5253,SelfLink:/apis/apps/v1/namespaces/deployment-5253/deployments/test-rolling-update-deployment,UID:3a00073e-ac0c-470f-afc1-3d884e0afcd1,ResourceVersion:18643849,Generation:1,CreationTimestamp:2019-12-30 13:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-30 13:19:16 +0000 UTC 2019-12-30 13:19:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-30 13:19:23 +0000 UTC 2019-12-30 13:19:16 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 30 13:19:24.353: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-5253,SelfLink:/apis/apps/v1/namespaces/deployment-5253/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:26940c5e-b8c4-4847-a3f9-1ffd1f58d5b9,ResourceVersion:18643838,Generation:1,CreationTimestamp:2019-12-30 13:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 3a00073e-ac0c-470f-afc1-3d884e0afcd1 0xc0026d4587 0xc0026d4588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 30 13:19:24.353: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Dec 30 13:19:24.353: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-5253,SelfLink:/apis/apps/v1/namespaces/deployment-5253/replicasets/test-rolling-update-controller,UID:b1aacb17-82df-4906-b5c2-79e7c4c2cf99,ResourceVersion:18643848,Generation:2,CreationTimestamp:2019-12-30 13:19:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 3a00073e-ac0c-470f-afc1-3d884e0afcd1 0xc0026d44b7 0xc0026d44b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 30 13:19:24.363: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-ghrcg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-ghrcg,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-5253,SelfLink:/api/v1/namespaces/deployment-5253/pods/test-rolling-update-deployment-79f6b9d75c-ghrcg,UID:4b0282ca-ce1e-4dec-9182-d9d1b06e3fe2,ResourceVersion:18643837,Generation:0,CreationTimestamp:2019-12-30 13:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 26940c5e-b8c4-4847-a3f9-1ffd1f58d5b9 0xc0026d4e77 0xc0026d4e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kk5n6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kk5n6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-kk5n6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026d4ef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026d4f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 13:19:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 13:19:23 +0000 UTC } 
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 13:19:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 13:19:16 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-30 13:19:16 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-30 13:19:23 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://9cf4bbe9d5b882678120355a39bb3520d1b145864dc648390fbc093946b36a95}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:19:24.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5253" for this suite. Dec 30 13:19:30.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:19:30.506: INFO: namespace deployment-5253 deletion completed in 6.134298153s • [SLOW TEST:23.410 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:19:30.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:19:30.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8079" for this suite. 
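For context on the "Pods Set QOS Class" spec above: the QOS class the test verifies is derived entirely from the pod's resource requests and limits. A minimal sketch of a pod that gets a deterministic class (names and values here are illustrative, not taken from this run):

apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                 # illustrative name
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:                    # requests == limits for every container,
        cpu: 100m                # so the server sets status.qosClass: Guaranteed
        memory: 100Mi

Omitting resources entirely yields BestEffort, and requests below limits yield Burstable; the spec submits a pod along these lines and asserts that the server-populated status.qosClass field is set as expected.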
Dec 30 13:19:52.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:19:52.954: INFO: namespace pods-8079 deletion completed in 22.193192928s • [SLOW TEST:22.447 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:19:52.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-d0ec5525-1f83-4350-b0b7-7103e10d36bb STEP: Creating a pod to test consume secrets Dec 30 13:19:53.079: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b6fa7f1e-c2fb-4ced-a95f-516ff831c1a2" in namespace "projected-9670" to be "success or failure" Dec 30 13:19:53.091: INFO: Pod "pod-projected-secrets-b6fa7f1e-c2fb-4ced-a95f-516ff831c1a2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.276465ms Dec 30 13:19:55.101: INFO: Pod "pod-projected-secrets-b6fa7f1e-c2fb-4ced-a95f-516ff831c1a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021053149s Dec 30 13:19:57.106: INFO: Pod "pod-projected-secrets-b6fa7f1e-c2fb-4ced-a95f-516ff831c1a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026822494s Dec 30 13:20:00.001: INFO: Pod "pod-projected-secrets-b6fa7f1e-c2fb-4ced-a95f-516ff831c1a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.921467511s Dec 30 13:20:02.009: INFO: Pod "pod-projected-secrets-b6fa7f1e-c2fb-4ced-a95f-516ff831c1a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.929239558s STEP: Saw pod success Dec 30 13:20:02.009: INFO: Pod "pod-projected-secrets-b6fa7f1e-c2fb-4ced-a95f-516ff831c1a2" satisfied condition "success or failure" Dec 30 13:20:02.011: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-b6fa7f1e-c2fb-4ced-a95f-516ff831c1a2 container projected-secret-volume-test: STEP: delete the pod Dec 30 13:20:02.119: INFO: Waiting for pod pod-projected-secrets-b6fa7f1e-c2fb-4ced-a95f-516ff831c1a2 to disappear Dec 30 13:20:02.143: INFO: Pod pod-projected-secrets-b6fa7f1e-c2fb-4ced-a95f-516ff831c1a2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:20:02.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9670" for this suite. 
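The "Projected secret ... with mappings" spec above exercises key-to-path remapping inside a projected volume. A minimal sketch of that shape, with every name invented for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo      # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: demo-secret            # assumed to already exist in the namespace
          items:
          - key: data-1                # remap the key "data-1" ...
            path: new-path/data-1      # ... to a custom relative path
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-volume/new-path/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected-volume
      readOnly: true

The "success or failure" condition in the log means the framework waits for the pod to reach either terminal phase; it then asserts Succeeded and reads the container log for the expected secret content.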
Dec 30 13:20:08.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:20:08.315: INFO: namespace projected-9670 deletion completed in 6.165287364s • [SLOW TEST:15.361 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:20:08.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Dec 30 13:20:08.528: INFO: Waiting up to 5m0s for pod "pod-5a852e4f-45eb-40b6-b8f5-b29f4cbe10db" in namespace "emptydir-5054" to be "success or failure" Dec 30 13:20:08.542: INFO: Pod "pod-5a852e4f-45eb-40b6-b8f5-b29f4cbe10db": Phase="Pending", Reason="", readiness=false. Elapsed: 13.409872ms Dec 30 13:20:10.552: INFO: Pod "pod-5a852e4f-45eb-40b6-b8f5-b29f4cbe10db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023382661s Dec 30 13:20:12.562: INFO: Pod "pod-5a852e4f-45eb-40b6-b8f5-b29f4cbe10db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033132774s Dec 30 13:20:14.573: INFO: Pod "pod-5a852e4f-45eb-40b6-b8f5-b29f4cbe10db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044392475s Dec 30 13:20:16.588: INFO: Pod "pod-5a852e4f-45eb-40b6-b8f5-b29f4cbe10db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059147601s Dec 30 13:20:18.608: INFO: Pod "pod-5a852e4f-45eb-40b6-b8f5-b29f4cbe10db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079728523s STEP: Saw pod success Dec 30 13:20:18.608: INFO: Pod "pod-5a852e4f-45eb-40b6-b8f5-b29f4cbe10db" satisfied condition "success or failure" Dec 30 13:20:18.622: INFO: Trying to get logs from node iruya-node pod pod-5a852e4f-45eb-40b6-b8f5-b29f4cbe10db container test-container: STEP: delete the pod Dec 30 13:20:18.784: INFO: Waiting for pod pod-5a852e4f-45eb-40b6-b8f5-b29f4cbe10db to disappear Dec 30 13:20:18.788: INFO: Pod pod-5a852e4f-45eb-40b6-b8f5-b29f4cbe10db no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:20:18.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5054" for this suite. 
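The EmptyDir variant name "(non-root,0666,tmpfs)" encodes three knobs: run as a non-root UID, exercise file mode 0666, and back the volume with memory. Roughly, under the assumption that busybox stands in for the actual e2e test image:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo      # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # the "non-root" part of the variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # "tmpfs": node RAM instead of node disk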
Dec 30 13:20:24.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:20:24.987: INFO: namespace emptydir-5054 deletion completed in 6.195986819s • [SLOW TEST:16.671 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:20:24.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-2493e6de-6009-477e-ab5c-0733e83101f2 in namespace container-probe-882 Dec 30 13:20:35.139: INFO: Started pod busybox-2493e6de-6009-477e-ab5c-0733e83101f2 in namespace container-probe-882 STEP: checking the pod's current state and verifying that restartCount is present Dec 30 13:20:35.143: INFO: Initial restart count of pod busybox-2493e6de-6009-477e-ab5c-0733e83101f2 is 0 Dec 30 13:21:33.437: INFO: Restart count of pod container-probe-882/busybox-2493e6de-6009-477e-ab5c-0733e83101f2 is now 1 (58.29419122s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:21:33.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-882" for this suite. 
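The probing spec above watches restartCount climb from 0 to 1 in roughly 58 seconds. That behavior falls out of a pod shaped like the classic exec-liveness example (the values below are the textbook ones, not read out of the test):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo       # illustrative
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5

Once /tmp/health is removed, cat exits non-zero, the probe fails its threshold, and the kubelet restarts the container; that restart is exactly the counter increment the spec asserts.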
Dec 30 13:21:39.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:21:39.709: INFO: namespace container-probe-882 deletion completed in 6.201788655s • [SLOW TEST:74.720 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:21:39.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 30 13:21:47.937: INFO: Waiting up to 5m0s for pod "client-envvars-91207ec9-15fd-4934-9fa6-a54a1a0629c5" in namespace "pods-5061" to be "success or failure" Dec 30 13:21:47.954: INFO: Pod "client-envvars-91207ec9-15fd-4934-9fa6-a54a1a0629c5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.368971ms Dec 30 13:21:49.962: INFO: Pod "client-envvars-91207ec9-15fd-4934-9fa6-a54a1a0629c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024516339s Dec 30 13:21:51.969: INFO: Pod "client-envvars-91207ec9-15fd-4934-9fa6-a54a1a0629c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031587544s Dec 30 13:21:53.979: INFO: Pod "client-envvars-91207ec9-15fd-4934-9fa6-a54a1a0629c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04195531s Dec 30 13:21:55.990: INFO: Pod "client-envvars-91207ec9-15fd-4934-9fa6-a54a1a0629c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052684336s STEP: Saw pod success Dec 30 13:21:55.990: INFO: Pod "client-envvars-91207ec9-15fd-4934-9fa6-a54a1a0629c5" satisfied condition "success or failure" Dec 30 13:21:55.994: INFO: Trying to get logs from node iruya-node pod client-envvars-91207ec9-15fd-4934-9fa6-a54a1a0629c5 container env3cont: STEP: delete the pod Dec 30 13:21:56.051: INFO: Waiting for pod client-envvars-91207ec9-15fd-4934-9fa6-a54a1a0629c5 to disappear Dec 30 13:21:56.059: INFO: Pod client-envvars-91207ec9-15fd-4934-9fa6-a54a1a0629c5 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:21:56.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5061" for this suite. 
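The "environment variables for services" spec relies on the kubelet's docker-links-style env injection. A sketch of the kind of Service involved (name, selector, and ports are invented):

apiVersion: v1
kind: Service
metadata:
  name: fooservice               # illustrative
spec:
  selector:
    name: server
  ports:
  - protocol: TCP
    port: 8765
    targetPort: 8080

Any container started after this Service exists in the same namespace receives variables such as FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT, and FOOSERVICE_PORT_8765_TCP_ADDR; the env3cont container above prints its environment so the framework can assert those names are present.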
Dec 30 13:22:48.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:22:48.329: INFO: namespace pods-5061 deletion completed in 52.261590601s • [SLOW TEST:68.620 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:22:48.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 30 13:22:48.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5167' Dec 30 13:22:50.433: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 30 13:22:50.433: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Dec 30 13:22:50.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5167' Dec 30 13:22:50.622: INFO: stderr: "" Dec 30 13:22:50.622: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:22:50.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5167" for this suite. 
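The deprecation warning in the stderr above is the notable part of this spec: '--generator=job/v1' was already on its way out in v1.15. What that invocation produced is roughly this Job, reconstructed from the command line rather than dumped from the cluster:

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure   # failed pods are restarted in place
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine

On clusters where the run generators have since been removed, 'kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine' is the closest equivalent.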
Dec 30 13:23:12.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:23:12.787: INFO: namespace kubectl-5167 deletion completed in 22.150077868s • [SLOW TEST:24.456 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:23:12.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Dec 30 13:23:12.931: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 30 13:23:12.964: INFO: Waiting for terminating namespaces to be deleted... Dec 30 13:23:12.968: INFO: Logging pods the kubelet thinks is on node iruya-node before test Dec 30 13:23:12.979: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Dec 30 13:23:12.979: INFO: Container kube-proxy ready: true, restart count 0 Dec 30 13:23:12.979: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Dec 30 13:23:12.979: INFO: Container weave ready: true, restart count 0 Dec 30 13:23:12.979: INFO: Container weave-npc ready: true, restart count 0 Dec 30 13:23:12.979: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Dec 30 13:23:12.989: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Dec 30 13:23:12.989: INFO: Container kube-scheduler ready: true, restart count 10 Dec 30 13:23:12.989: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Dec 30 13:23:12.989: INFO: Container coredns ready: true, restart count 0 Dec 30 13:23:12.989: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Dec 30 13:23:12.989: INFO: Container etcd ready: true, restart count 0 Dec 30 13:23:12.989: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Dec 30 13:23:12.989: INFO: Container weave ready: true, restart count 0 Dec 30 13:23:12.989: INFO: Container weave-npc ready: true, restart count 0 Dec 30 13:23:12.989: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Dec 30 13:23:12.989: INFO: Container coredns ready: true, restart count 0 Dec 30 13:23:12.989: INFO: 
kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Dec 30 13:23:12.989: INFO: Container kube-controller-manager ready: true, restart count 14 Dec 30 13:23:12.989: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Dec 30 13:23:12.989: INFO: Container kube-proxy ready: true, restart count 0 Dec 30 13:23:12.989: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Dec 30 13:23:12.989: INFO: Container kube-apiserver ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-fd690832-d67e-4924-a45a-56ce5c163d77 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-fd690832-d67e-4924-a45a-56ce5c163d77 off the node iruya-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-fd690832-d67e-4924-a45a-56ce5c163d77 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:23:31.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9132" for this suite. Dec 30 13:23:51.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:23:51.494: INFO: namespace sched-pred-9132 deletion completed in 20.202733005s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:38.707 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:23:51.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 30 13:23:51.616: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:24:04.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8669" for this suite. Dec 30 13:24:12.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:24:13.078: INFO: namespace init-container-8669 deletion completed in 8.133326534s • [SLOW TEST:21.584 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:24:13.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-bntr STEP: Creating a pod to test atomic-volume-subpath Dec 30 13:24:13.230: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bntr" in namespace "subpath-7063" to be "success or failure" Dec 30 13:24:13.261: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Pending", Reason="", readiness=false. Elapsed: 31.109741ms Dec 30 13:24:15.276: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046130974s Dec 30 13:24:17.284: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053986343s Dec 30 13:24:19.295: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064389864s Dec 30 13:24:21.301: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071108136s Dec 30 13:24:23.312: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Running", Reason="", readiness=true. Elapsed: 10.081310644s Dec 30 13:24:25.318: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Running", Reason="", readiness=true. Elapsed: 12.087586333s Dec 30 13:24:27.326: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Running", Reason="", readiness=true. Elapsed: 14.09538883s Dec 30 13:24:29.334: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Running", Reason="", readiness=true. Elapsed: 16.104099178s Dec 30 13:24:31.343: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Running", Reason="", readiness=true. Elapsed: 18.113185608s Dec 30 13:24:33.352: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.122121775s Dec 30 13:24:35.374: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Running", Reason="", readiness=true. Elapsed: 22.143256236s Dec 30 13:24:37.383: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Running", Reason="", readiness=true. Elapsed: 24.153029654s Dec 30 13:24:39.393: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Running", Reason="", readiness=true. Elapsed: 26.162471127s Dec 30 13:24:41.401: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Running", Reason="", readiness=true. Elapsed: 28.170680234s Dec 30 13:24:43.409: INFO: Pod "pod-subpath-test-configmap-bntr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.179049057s STEP: Saw pod success Dec 30 13:24:43.409: INFO: Pod "pod-subpath-test-configmap-bntr" satisfied condition "success or failure" Dec 30 13:24:43.414: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-bntr container test-container-subpath-configmap-bntr: STEP: delete the pod Dec 30 13:24:43.484: INFO: Waiting for pod pod-subpath-test-configmap-bntr to disappear Dec 30 13:24:43.488: INFO: Pod pod-subpath-test-configmap-bntr no longer exists STEP: Deleting pod pod-subpath-test-configmap-bntr Dec 30 13:24:43.488: INFO: Deleting pod "pod-subpath-test-configmap-bntr" in namespace "subpath-7063" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:24:43.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7063" for this suite. Dec 30 13:24:49.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:24:49.725: INFO: namespace subpath-7063 deletion completed in 6.223436348s • [SLOW TEST:36.646 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:24:49.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Dec 30 13:24:49.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6685' Dec 30 13:24:50.155: INFO: stderr: "" Dec 30 13:24:50.156: INFO: stdout: "pod/pause created\n" Dec 30 13:24:50.156: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Dec 30 13:24:50.156: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6685" to be "running and ready" Dec 30 13:24:50.197: INFO: 
Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 41.571131ms Dec 30 13:24:52.214: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058215782s Dec 30 13:24:54.222: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065818087s Dec 30 13:24:56.229: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073637444s Dec 30 13:24:58.240: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084177566s Dec 30 13:25:00.250: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.094441028s Dec 30 13:25:00.250: INFO: Pod "pause" satisfied condition "running and ready" Dec 30 13:25:00.250: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Dec 30 13:25:00.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6685' Dec 30 13:25:00.454: INFO: stderr: "" Dec 30 13:25:00.454: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Dec 30 13:25:00.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6685' Dec 30 13:25:00.642: INFO: stderr: "" Dec 30 13:25:00.642: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s testing-label-value\n" STEP: removing the label testing-label of a pod Dec 30 13:25:00.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6685' Dec 30 13:25:00.739: INFO: stderr: "" Dec 30 13:25:00.739: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Dec 30 13:25:00.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6685' Dec 30 13:25:00.830: INFO: stderr: "" Dec 30 13:25:00.830: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Dec 30 13:25:00.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6685' Dec 30 13:25:00.945: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 30 13:25:00.945: INFO: stdout: "pod \"pause\" force deleted\n" Dec 30 13:25:00.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6685' Dec 30 13:25:01.150: INFO: stderr: "No resources found.\n" Dec 30 13:25:01.150: INFO: stdout: "" Dec 30 13:25:01.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6685 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 30 13:25:01.249: INFO: stderr: "" Dec 30 13:25:01.249: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:25:01.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6685" for this suite. Dec 30 13:25:07.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:25:07.528: INFO: namespace kubectl-6685 deletion completed in 6.26894966s • [SLOW TEST:17.803 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:25:07.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 30 13:25:07.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Dec 30 13:25:07.832: INFO: stderr: "" Dec 30 13:25:07.832: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:25:07.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6471" for this suite. 
Dec 30 13:25:13.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:25:14.000: INFO: namespace kubectl-6471 deletion completed in 6.152948914s • [SLOW TEST:6.472 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:25:14.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 30 13:25:24.218: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:25:24.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3057" for this suite. 
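The FallbackToLogsOnError spec above expects the termination message "DONE" without the container ever writing to /dev/termination-log. A minimal reproduction of that policy (the image and command are assumptions; only the policy name comes from the test):

apiVersion: v1
kind: Pod
metadata:
  name: termination-demo         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox
    command: ["sh", "-c", "echo -n DONE; exit 1"]   # log output, then failure
    terminationMessagePolicy: FallbackToLogsOnError

Because the container fails and leaves /dev/termination-log empty, the kubelet copies the tail of the container log into status.containerStatuses[].state.terminated.message, which is where the framework reads "DONE" in the log above.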
Dec 30 13:25:30.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:25:30.554: INFO: namespace container-runtime-3057 deletion completed in 6.302162143s • [SLOW TEST:16.554 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:25:30.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Dec 30 13:25:30.682: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 30 13:25:30.711: INFO: Waiting for terminating namespaces to be deleted... 
Dec 30 13:25:30.714: INFO: Logging pods the kubelet thinks is on node iruya-node before test Dec 30 13:25:30.733: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Dec 30 13:25:30.733: INFO: Container weave ready: true, restart count 0 Dec 30 13:25:30.733: INFO: Container weave-npc ready: true, restart count 0 Dec 30 13:25:30.733: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Dec 30 13:25:30.733: INFO: Container kube-proxy ready: true, restart count 0 Dec 30 13:25:30.733: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Dec 30 13:25:30.742: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Dec 30 13:25:30.742: INFO: Container kube-apiserver ready: true, restart count 0 Dec 30 13:25:30.742: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Dec 30 13:25:30.742: INFO: Container kube-scheduler ready: true, restart count 10 Dec 30 13:25:30.742: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Dec 30 13:25:30.742: INFO: Container coredns ready: true, restart count 0 Dec 30 13:25:30.742: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Dec 30 13:25:30.742: INFO: Container etcd ready: true, restart count 0 Dec 30 13:25:30.742: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Dec 30 13:25:30.742: INFO: Container weave ready: true, restart count 0 Dec 30 13:25:30.742: INFO: Container weave-npc ready: true, restart count 0 Dec 30 13:25:30.742: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Dec 30 13:25:30.742: INFO: Container coredns ready: true, restart count 0 Dec 30 13:25:30.742: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Dec 30 13:25:30.742: INFO: Container kube-controller-manager ready: true, restart count 14 Dec 30 13:25:30.742: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Dec 30 13:25:30.742: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-node STEP: verifying the node has the label node iruya-server-sfge57q7djm7 Dec 30 13:25:30.842: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Dec 30 13:25:30.842: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Dec 30 13:25:30.842: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Dec 30 13:25:30.842: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7 Dec 30 13:25:30.842: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7 Dec 30 13:25:30.842: INFO: Pod 
kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Dec 30 13:25:30.842: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node Dec 30 13:25:30.842: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Dec 30 13:25:30.842: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7 Dec 30 13:25:30.842: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-30b27f7e-7c71-4741-9a60-64da9b88a047.15e529017f9bc460], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1846/filler-pod-30b27f7e-7c71-4741-9a60-64da9b88a047 to iruya-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-30b27f7e-7c71-4741-9a60-64da9b88a047.15e529026df05d85], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-30b27f7e-7c71-4741-9a60-64da9b88a047.15e529032e15b214], Reason = [Created], Message = [Created container filler-pod-30b27f7e-7c71-4741-9a60-64da9b88a047] STEP: Considering event: Type = [Normal], Name = [filler-pod-30b27f7e-7c71-4741-9a60-64da9b88a047.15e529035df25fe3], Reason = [Started], Message = [Started container filler-pod-30b27f7e-7c71-4741-9a60-64da9b88a047] STEP: Considering event: Type = [Normal], Name = [filler-pod-628dae43-8e90-40e8-baa8-23ca49f08a19.15e5290184f588e2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1846/filler-pod-628dae43-8e90-40e8-baa8-23ca49f08a19 to iruya-server-sfge57q7djm7] STEP: Considering event: Type = [Normal], Name = [filler-pod-628dae43-8e90-40e8-baa8-23ca49f08a19.15e52902ba83eaa5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-628dae43-8e90-40e8-baa8-23ca49f08a19.15e5290374f79c29], Reason = [Created], Message = [Created container filler-pod-628dae43-8e90-40e8-baa8-23ca49f08a19] STEP: Considering event: Type = [Normal], Name = [filler-pod-628dae43-8e90-40e8-baa8-23ca49f08a19.15e529039bba3bda], Reason = [Started], Message = [Started container filler-pod-628dae43-8e90-40e8-baa8-23ca49f08a19] STEP: Considering event: Type = [Warning], Name = [additional-pod.15e52903db9d253d], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node iruya-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-server-sfge57q7djm7 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:25:42.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1846" for this suite. 
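The FailedScheduling event above ("0/2 nodes are available: 2 Insufficient cpu.") is produced by requesting more CPU than the filler pods left allocatable. The shape of that final pod, with an illustrative request value:

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod           # name as shown in the event above
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1"                 # any request above the nodes' remaining
                                 # allocatable CPU triggers Insufficient cpu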
Dec 30 13:25:48.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:25:49.512: INFO: namespace sched-pred-1846 deletion completed in 7.37321826s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:18.957 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:25:49.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1230 13:26:20.458880 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 30 13:26:20.459: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:26:20.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2256" for this suite. 
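The garbage-collector spec above deletes the Deployment with orphan propagation and then waits 30 seconds to confirm the ReplicaSet is NOT collected. Expressed as the generic API delete body rather than as test code (the test sets this programmatically):

apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan        # dependents (the ReplicaSet) are left behind;
                                 # Background/Foreground would cascade instead

With the kubectl of this era, 'kubectl delete deployment <name> --cascade=false' issued the same orphaning delete.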
Dec 30 13:26:27.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:26:28.400: INFO: namespace gc-2256 deletion completed in 7.934042288s • [SLOW TEST:38.887 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:26:28.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-26feb23e-957f-45e5-987c-f9f790b87d2d STEP: Creating a pod to test consume configMaps Dec 30 13:26:28.622: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7864d842-90ff-46d2-b917-9a811fe5c0dd" in namespace "projected-7333" to be "success or failure" Dec 30 13:26:28.637: INFO: Pod "pod-projected-configmaps-7864d842-90ff-46d2-b917-9a811fe5c0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.100959ms Dec 30 13:26:30.652: INFO: Pod "pod-projected-configmaps-7864d842-90ff-46d2-b917-9a811fe5c0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029911841s Dec 30 13:26:32.674: INFO: Pod "pod-projected-configmaps-7864d842-90ff-46d2-b917-9a811fe5c0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052222711s Dec 30 13:26:34.682: INFO: Pod "pod-projected-configmaps-7864d842-90ff-46d2-b917-9a811fe5c0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060034641s Dec 30 13:26:36.725: INFO: Pod "pod-projected-configmaps-7864d842-90ff-46d2-b917-9a811fe5c0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10313633s Dec 30 13:26:38.748: INFO: Pod "pod-projected-configmaps-7864d842-90ff-46d2-b917-9a811fe5c0dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.126436507s STEP: Saw pod success Dec 30 13:26:38.748: INFO: Pod "pod-projected-configmaps-7864d842-90ff-46d2-b917-9a811fe5c0dd" satisfied condition "success or failure" Dec 30 13:26:38.757: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7864d842-90ff-46d2-b917-9a811fe5c0dd container projected-configmap-volume-test: STEP: delete the pod Dec 30 13:26:38.896: INFO: Waiting for pod pod-projected-configmaps-7864d842-90ff-46d2-b917-9a811fe5c0dd to disappear Dec 30 13:26:38.984: INFO: Pod pod-projected-configmaps-7864d842-90ff-46d2-b917-9a811fe5c0dd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:26:38.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7333" for this suite. Dec 30 13:26:45.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:26:45.199: INFO: namespace projected-7333 deletion completed in 6.180156835s • [SLOW TEST:16.798 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:26:45.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Dec 30 13:26:45.306: INFO: Waiting up to 5m0s for pod "pod-40fd4f41-fb0b-4e92-92de-c0790c358340" in namespace "emptydir-5890" to be "success or failure" Dec 30 13:26:45.335: INFO: Pod "pod-40fd4f41-fb0b-4e92-92de-c0790c358340": Phase="Pending", Reason="", readiness=false. Elapsed: 28.260532ms Dec 30 13:26:47.348: INFO: Pod "pod-40fd4f41-fb0b-4e92-92de-c0790c358340": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041803896s Dec 30 13:26:49.356: INFO: Pod "pod-40fd4f41-fb0b-4e92-92de-c0790c358340": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04956694s Dec 30 13:26:51.367: INFO: Pod "pod-40fd4f41-fb0b-4e92-92de-c0790c358340": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06071659s Dec 30 13:26:53.375: INFO: Pod "pod-40fd4f41-fb0b-4e92-92de-c0790c358340": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.068758911s STEP: Saw pod success Dec 30 13:26:53.375: INFO: Pod "pod-40fd4f41-fb0b-4e92-92de-c0790c358340" satisfied condition "success or failure" Dec 30 13:26:53.379: INFO: Trying to get logs from node iruya-node pod pod-40fd4f41-fb0b-4e92-92de-c0790c358340 container test-container: STEP: delete the pod Dec 30 13:26:53.492: INFO: Waiting for pod pod-40fd4f41-fb0b-4e92-92de-c0790c358340 to disappear Dec 30 13:26:53.497: INFO: Pod pod-40fd4f41-fb0b-4e92-92de-c0790c358340 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:26:53.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5890" for this suite. Dec 30 13:26:59.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:26:59.711: INFO: namespace emptydir-5890 deletion completed in 6.206816695s • [SLOW TEST:14.512 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:26:59.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 30 13:26:59.859: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:27:08.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4295" for this suite. 
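The pods-4295 spec above drives the pod "exec" subresource over a websocket connection. A rough by-hand equivalent uses kubectl, which negotiates the same streaming endpoint (the pod name, busybox image, and echoed string below are illustrative, not what the test itself runs):

# start a pod that stays up long enough to exec into
kubectl run ws-target --image=busybox --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/ws-target --timeout=2m
# kubectl exec drives POST .../namespaces/default/pods/ws-target/exec,
# the same subresource the test dials directly over a websocket
kubectl exec ws-target -- echo remote execution works
kubectl delete pod ws-target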
Dec 30 13:28:00.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:28:00.491: INFO: namespace pods-4295 deletion completed in 52.184861398s • [SLOW TEST:60.780 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:28:00.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-def4da76-f034-496b-94b8-17c0acf01bf2 STEP: Creating a pod to test consume secrets Dec 30 13:28:00.677: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-71c12cb1-1679-4f7a-a1a5-39bc1baa3b96" in namespace "projected-1071" to be "success or failure" Dec 30 13:28:00.691: INFO: Pod "pod-projected-secrets-71c12cb1-1679-4f7a-a1a5-39bc1baa3b96": Phase="Pending", Reason="", readiness=false. Elapsed: 14.059009ms Dec 30 13:28:02.699: INFO: Pod "pod-projected-secrets-71c12cb1-1679-4f7a-a1a5-39bc1baa3b96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021830985s Dec 30 13:28:04.707: INFO: Pod "pod-projected-secrets-71c12cb1-1679-4f7a-a1a5-39bc1baa3b96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029585345s Dec 30 13:28:06.716: INFO: Pod "pod-projected-secrets-71c12cb1-1679-4f7a-a1a5-39bc1baa3b96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03910206s Dec 30 13:28:08.777: INFO: Pod "pod-projected-secrets-71c12cb1-1679-4f7a-a1a5-39bc1baa3b96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100462998s STEP: Saw pod success Dec 30 13:28:08.777: INFO: Pod "pod-projected-secrets-71c12cb1-1679-4f7a-a1a5-39bc1baa3b96" satisfied condition "success or failure" Dec 30 13:28:08.789: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-71c12cb1-1679-4f7a-a1a5-39bc1baa3b96 container projected-secret-volume-test: STEP: delete the pod Dec 30 13:28:08.838: INFO: Waiting for pod pod-projected-secrets-71c12cb1-1679-4f7a-a1a5-39bc1baa3b96 to disappear Dec 30 13:28:08.842: INFO: Pod pod-projected-secrets-71c12cb1-1679-4f7a-a1a5-39bc1baa3b96 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:28:08.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1071" for this suite. 
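The projected-secret spec just completed boils down to mounting a Secret through a projected volume with an explicit defaultMode. A minimal sketch (all names, the busybox image, and the 0400 mode are illustrative):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      defaultMode: 0400   # governs the permissions of the projected files
      sources:
      - secret:
          name: demo-secret
EOF
kubectl logs projected-secret-demo   # prints value-1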
Dec 30 13:28:14.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:28:14.972: INFO: namespace projected-1071 deletion completed in 6.123996485s • [SLOW TEST:14.480 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:28:14.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Dec 30 13:28:15.125: INFO: Waiting up to 5m0s for pod "pod-1a7634ab-cd22-4c92-bfb0-37d5e713fc26" in namespace "emptydir-6089" to be "success or failure" Dec 30 13:28:15.137: INFO: Pod "pod-1a7634ab-cd22-4c92-bfb0-37d5e713fc26": Phase="Pending", Reason="", readiness=false. Elapsed: 12.019115ms Dec 30 13:28:17.147: INFO: Pod "pod-1a7634ab-cd22-4c92-bfb0-37d5e713fc26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021456578s Dec 30 13:28:19.177: INFO: Pod "pod-1a7634ab-cd22-4c92-bfb0-37d5e713fc26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052014955s Dec 30 13:28:21.184: INFO: Pod "pod-1a7634ab-cd22-4c92-bfb0-37d5e713fc26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058775418s Dec 30 13:28:23.198: INFO: Pod "pod-1a7634ab-cd22-4c92-bfb0-37d5e713fc26": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072337306s Dec 30 13:28:25.209: INFO: Pod "pod-1a7634ab-cd22-4c92-bfb0-37d5e713fc26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08339295s STEP: Saw pod success Dec 30 13:28:25.209: INFO: Pod "pod-1a7634ab-cd22-4c92-bfb0-37d5e713fc26" satisfied condition "success or failure" Dec 30 13:28:25.215: INFO: Trying to get logs from node iruya-node pod pod-1a7634ab-cd22-4c92-bfb0-37d5e713fc26 container test-container: STEP: delete the pod Dec 30 13:28:25.315: INFO: Waiting for pod pod-1a7634ab-cd22-4c92-bfb0-37d5e713fc26 to disappear Dec 30 13:28:25.424: INFO: Pod pod-1a7634ab-cd22-4c92-bfb0-37d5e713fc26 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:28:25.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6089" for this suite. 
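The emptydir spec above is, in essence, the following pod; the conformance test uses its own mounttest image, so busybox here is a stand-in:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # create a file with mode 0666 on the volume and show its permissions
    command: ["sh", "-c", "touch /cache/f && chmod 0666 /cache/f && ls -l /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}   # default medium = node-local disk; medium: Memory is the tmpfs variant
EOF
kubectl logs emptydir-0666-demo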
Dec 30 13:28:31.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:28:31.608: INFO: namespace emptydir-6089 deletion completed in 6.171750818s • [SLOW TEST:16.635 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:28:31.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-f5ceff49-1add-4867-9909-e4ae8b5ee287 Dec 30 13:28:31.680: INFO: Pod name my-hostname-basic-f5ceff49-1add-4867-9909-e4ae8b5ee287: Found 0 pods out of 1 Dec 30 13:28:36.687: INFO: Pod name my-hostname-basic-f5ceff49-1add-4867-9909-e4ae8b5ee287: Found 1 pods out of 1 Dec 30 13:28:36.687: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f5ceff49-1add-4867-9909-e4ae8b5ee287" are running Dec 30 13:28:40.702: INFO: Pod "my-hostname-basic-f5ceff49-1add-4867-9909-e4ae8b5ee287-pxjb8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 13:28:31 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 13:28:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f5ceff49-1add-4867-9909-e4ae8b5ee287]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 13:28:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f5ceff49-1add-4867-9909-e4ae8b5ee287]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 13:28:31 +0000 UTC Reason: Message:}]) Dec 30 13:28:40.702: INFO: Trying to dial the pod Dec 30 13:28:45.750: INFO: Controller my-hostname-basic-f5ceff49-1add-4867-9909-e4ae8b5ee287: Got expected result from replica 1 [my-hostname-basic-f5ceff49-1add-4867-9909-e4ae8b5ee287-pxjb8]: "my-hostname-basic-f5ceff49-1add-4867-9909-e4ae8b5ee287-pxjb8", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:28:45.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1761" for this suite. 
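What the replication-controller spec creates and dials is approximately the manifest below; the serve-hostname image tag is an assumption, and the container answers HTTP with its own pod name, which is what "Got expected result from replica 1" checks:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: hostname-rc
spec:
  replicas: 1
  selector:
    app: hostname-rc
  template:
    metadata:
      labels:
        app: hostname-rc
    spec:
      containers:
      - name: hostname-rc
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
EOF
kubectl get pods -l app=hostname-rc   # one replica, named hostname-rc-<hash>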
Dec 30 13:28:51.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:28:52.037: INFO: namespace replication-controller-1761 deletion completed in 6.26078345s • [SLOW TEST:20.429 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:28:52.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 30 13:28:52.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-783' Dec 30 13:28:52.433: INFO: stderr: "" Dec 30 13:28:52.433: INFO: stdout: "replicationcontroller/redis-master created\n" Dec 30 13:28:52.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-783' Dec 30 13:28:52.909: INFO: stderr: "" Dec 30 13:28:52.909: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Dec 30 13:28:53.920: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:28:53.920: INFO: Found 0 / 1 Dec 30 13:28:54.920: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:28:54.920: INFO: Found 0 / 1 Dec 30 13:28:55.919: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:28:55.919: INFO: Found 0 / 1 Dec 30 13:28:56.918: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:28:56.919: INFO: Found 0 / 1 Dec 30 13:28:57.928: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:28:57.928: INFO: Found 0 / 1 Dec 30 13:28:58.918: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:28:58.918: INFO: Found 0 / 1 Dec 30 13:28:59.922: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:28:59.922: INFO: Found 0 / 1 Dec 30 13:29:00.923: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:29:00.923: INFO: Found 0 / 1 Dec 30 13:29:01.928: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:29:01.928: INFO: Found 0 / 1 Dec 30 13:29:02.916: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:29:02.916: INFO: Found 0 / 1 Dec 30 13:29:03.924: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:29:03.924: INFO: Found 1 / 1 Dec 30 13:29:03.924: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 30 13:29:03.931: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:29:03.931: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Dec 30 13:29:03.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-7lllj --namespace=kubectl-783' Dec 30 13:29:04.040: INFO: stderr: "" Dec 30 13:29:04.040: INFO: stdout: "Name: redis-master-7lllj\nNamespace: kubectl-783\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Mon, 30 Dec 2019 13:28:52 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://7ee7851f7c96b5b35ddd9c6f5fc8739a88342108d9324a09f1b25296ae7e808f\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 30 Dec 2019 13:29:02 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-7v9rp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-7v9rp:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-7v9rp\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 12s default-scheduler Successfully assigned kubectl-783/redis-master-7lllj to iruya-node\n Normal Pulled 6s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 3s kubelet, iruya-node Created container redis-master\n Normal Started 2s kubelet, iruya-node Started container redis-master\n" Dec 30 13:29:04.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-783' Dec 30 13:29:04.135: INFO: stderr: "" Dec 30 13:29:04.135: INFO: stdout: "Name: redis-master\nNamespace: kubectl-783\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 12s replication-controller Created pod: redis-master-7lllj\n" Dec 30 13:29:04.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-783' Dec 30 13:29:04.264: INFO: stderr: "" Dec 30 13:29:04.264: INFO: stdout: "Name: redis-master\nNamespace: kubectl-783\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.220.0\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Dec 30 13:29:04.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Dec 30 13:29:04.379: INFO: stderr: "" Dec 30 13:29:04.379: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n 
kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Mon, 30 Dec 2019 13:28:53 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 30 Dec 2019 13:28:53 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 30 Dec 2019 13:28:53 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 30 Dec 2019 13:28:53 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 148d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 79d\n kubectl-783 redis-master-7lllj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Dec 30 13:29:04.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-783' Dec 30 13:29:04.496: INFO: stderr: "" Dec 30 13:29:04.497: INFO: stdout: "Name: kubectl-783\nLabels: e2e-framework=kubectl\n e2e-run=aa9ca122-57d3-4ee7-98bf-213cd2f210ae\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:29:04.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-783" for this suite. 
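The describe assertions above can be replayed against any live cluster with the same commands the test ran, minus the explicit --kubeconfig flag; the namespace below existed only for the test's lifetime, so substitute your own objects:

kubectl describe pod redis-master-7lllj --namespace=kubectl-783
kubectl describe rc redis-master --namespace=kubectl-783
kubectl describe service redis-master --namespace=kubectl-783
kubectl describe node iruya-node
kubectl describe namespace kubectl-783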
Dec 30 13:29:26.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:29:26.739: INFO: namespace kubectl-783 deletion completed in 22.236500659s • [SLOW TEST:34.701 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:29:26.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Dec 30 13:29:26.888: INFO: Number of nodes with available pods: 0 Dec 30 13:29:26.888: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:28.283: INFO: Number of nodes with available pods: 0 Dec 30 13:29:28.283: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:29.562: INFO: Number of nodes with available pods: 0 Dec 30 13:29:29.562: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:29.924: INFO: Number of nodes with available pods: 0 Dec 30 13:29:29.924: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:30.923: INFO: Number of nodes with available pods: 0 Dec 30 13:29:30.923: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:32.782: INFO: Number of nodes with available pods: 0 Dec 30 13:29:32.782: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:34.250: INFO: Number of nodes with available pods: 0 Dec 30 13:29:34.250: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:35.620: INFO: Number of nodes with available pods: 0 Dec 30 13:29:35.620: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:35.905: INFO: Number of nodes with available pods: 0 Dec 30 13:29:35.906: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:36.922: INFO: Number of nodes with available pods: 2 Dec 30 13:29:36.922: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
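For reference, the "simple daemon" being launched here, and revived in the polling that follows, is roughly this manifest (a sketch; the image is assumed to match the serve-hostname image used elsewhere in this run):

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
EOF
# one pod per schedulable node; deleting any one of them exercises the "revived" check
kubectl get pods -l daemonset-name=daemon-set -o wide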
Dec 30 13:29:37.035: INFO: Number of nodes with available pods: 1 Dec 30 13:29:37.035: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:38.051: INFO: Number of nodes with available pods: 1 Dec 30 13:29:38.051: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:39.049: INFO: Number of nodes with available pods: 1 Dec 30 13:29:39.049: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:40.046: INFO: Number of nodes with available pods: 1 Dec 30 13:29:40.046: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:41.052: INFO: Number of nodes with available pods: 1 Dec 30 13:29:41.052: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:42.044: INFO: Number of nodes with available pods: 1 Dec 30 13:29:42.044: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:43.064: INFO: Number of nodes with available pods: 1 Dec 30 13:29:43.064: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:44.099: INFO: Number of nodes with available pods: 1 Dec 30 13:29:44.099: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:45.114: INFO: Number of nodes with available pods: 1 Dec 30 13:29:45.114: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:46.047: INFO: Number of nodes with available pods: 1 Dec 30 13:29:46.047: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:47.053: INFO: Number of nodes with available pods: 1 Dec 30 13:29:47.054: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:48.048: INFO: Number of nodes with available pods: 1 Dec 30 13:29:48.048: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:49.052: INFO: Number of nodes with available pods: 1 Dec 30 13:29:49.052: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:50.048: INFO: Number of nodes with available pods: 1 Dec 30 13:29:50.048: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:51.072: INFO: Number of nodes with available pods: 1 Dec 30 13:29:51.072: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:52.047: INFO: Number of nodes with available pods: 1 Dec 30 13:29:52.047: INFO: Node iruya-node is running more than one daemon pod Dec 30 13:29:53.057: INFO: Number of nodes with available pods: 2 Dec 30 13:29:53.057: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1404, will wait for the garbage collector to delete the pods Dec 30 13:29:53.154: INFO: Deleting DaemonSet.extensions daemon-set took: 36.618836ms Dec 30 13:29:53.454: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.317556ms Dec 30 13:30:07.893: INFO: Number of nodes with available pods: 0 Dec 30 13:30:07.893: INFO: Number of running nodes: 0, number of available pods: 0 Dec 30 13:30:07.901: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1404/daemonsets","resourceVersion":"18645404"},"items":null} Dec 30 13:30:07.906: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1404/pods","resourceVersion":"18645404"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:30:07.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1404" for this suite. Dec 30 13:30:13.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:30:14.085: INFO: namespace daemonsets-1404 deletion completed in 6.148894456s • [SLOW TEST:47.345 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:30:14.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Dec 30 13:30:24.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-d2faf89a-7bc3-4252-b42e-2927eded6896 -c busybox-main-container --namespace=emptydir-8675 -- cat /usr/share/volumeshare/shareddata.txt' Dec 30 13:30:24.772: INFO: stderr: "" Dec 30 13:30:24.773: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:30:24.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8675" for this suite.
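The shared-volume behaviour verified above ("Hello from the busy-box sub-container") comes from two containers mounting the same emptyDir. A minimal reproduction, with all names illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo Hello from the writer > /share/data.txt && sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /share
  - name: reader
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: share
      mountPath: /share
  volumes:
  - name: share
    emptyDir: {}
EOF
# read from the second container what the first one wrote
kubectl exec shared-volume-demo -c reader -- cat /share/data.txt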
Dec 30 13:30:30.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:30:30.979: INFO: namespace emptydir-8675 deletion completed in 6.190870685s • [SLOW TEST:16.894 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:30:30.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Dec 30 13:30:31.138: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3324,SelfLink:/api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-configmap-a,UID:5d3fb7fb-62ff-4016-9856-32ddb9f83db6,ResourceVersion:18645489,Generation:0,CreationTimestamp:2019-12-30 13:30:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 30 13:30:31.139: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3324,SelfLink:/api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-configmap-a,UID:5d3fb7fb-62ff-4016-9856-32ddb9f83db6,ResourceVersion:18645489,Generation:0,CreationTimestamp:2019-12-30 13:30:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Dec 30 13:30:41.153: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3324,SelfLink:/api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-configmap-a,UID:5d3fb7fb-62ff-4016-9856-32ddb9f83db6,ResourceVersion:18645503,Generation:0,CreationTimestamp:2019-12-30 13:30:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 30 13:30:41.153: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3324,SelfLink:/api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-configmap-a,UID:5d3fb7fb-62ff-4016-9856-32ddb9f83db6,ResourceVersion:18645503,Generation:0,CreationTimestamp:2019-12-30 13:30:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Dec 30 13:30:51.169: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3324,SelfLink:/api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-configmap-a,UID:5d3fb7fb-62ff-4016-9856-32ddb9f83db6,ResourceVersion:18645517,Generation:0,CreationTimestamp:2019-12-30 13:30:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 30 13:30:51.169: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3324,SelfLink:/api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-configmap-a,UID:5d3fb7fb-62ff-4016-9856-32ddb9f83db6,ResourceVersion:18645517,Generation:0,CreationTimestamp:2019-12-30 13:30:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Dec 30 13:31:01.184: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3324,SelfLink:/api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-configmap-a,UID:5d3fb7fb-62ff-4016-9856-32ddb9f83db6,ResourceVersion:18645532,Generation:0,CreationTimestamp:2019-12-30 13:30:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 30 13:31:01.184: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3324,SelfLink:/api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-configmap-a,UID:5d3fb7fb-62ff-4016-9856-32ddb9f83db6,ResourceVersion:18645532,Generation:0,CreationTimestamp:2019-12-30 13:30:31 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Dec 30 13:31:11.196: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3324,SelfLink:/api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-configmap-b,UID:f8aa47b4-d6e1-4822-bd74-e44ebcbabfa8,ResourceVersion:18645546,Generation:0,CreationTimestamp:2019-12-30 13:31:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 30 13:31:11.196: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3324,SelfLink:/api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-configmap-b,UID:f8aa47b4-d6e1-4822-bd74-e44ebcbabfa8,ResourceVersion:18645546,Generation:0,CreationTimestamp:2019-12-30 13:31:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Dec 30 13:31:21.224: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3324,SelfLink:/api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-configmap-b,UID:f8aa47b4-d6e1-4822-bd74-e44ebcbabfa8,ResourceVersion:18645560,Generation:0,CreationTimestamp:2019-12-30 13:31:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 30 13:31:21.224: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3324,SelfLink:/api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-configmap-b,UID:f8aa47b4-d6e1-4822-bd74-e44ebcbabfa8,ResourceVersion:18645560,Generation:0,CreationTimestamp:2019-12-30 13:31:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:31:31.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3324" for this suite. 
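The ADDED/MODIFIED/DELETED sequence above is driven entirely through label-selected watches. kubectl can reproduce the same notifications from the command line (the configmap name and label value mirror the log but are otherwise arbitrary):

# terminal 1: watch configmaps carrying label A
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch
# terminal 2: generate the add, two modifications, and the delete
kubectl create configmap e2e-watch-test-configmap-a
kubectl label configmap e2e-watch-test-configmap-a watch-this-configmap=multiple-watchers-A
kubectl patch configmap e2e-watch-test-configmap-a -p '{"data":{"mutation":"1"}}'
kubectl patch configmap e2e-watch-test-configmap-a -p '{"data":{"mutation":"2"}}'
kubectl delete configmap e2e-watch-test-configmap-a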
Dec 30 13:31:37.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:31:37.436: INFO: namespace watch-3324 deletion completed in 6.203242781s • [SLOW TEST:66.456 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:31:37.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Dec 30 13:31:37.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1423' Dec 30 13:31:37.840: INFO: stderr: "" Dec 30 13:31:37.840: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Dec 30 13:31:38.850: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:31:38.850: INFO: Found 0 / 1 Dec 30 13:31:39.850: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:31:39.851: INFO: Found 0 / 1 Dec 30 13:31:40.851: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:31:40.851: INFO: Found 0 / 1 Dec 30 13:31:41.853: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:31:41.853: INFO: Found 0 / 1 Dec 30 13:31:42.856: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:31:42.856: INFO: Found 0 / 1 Dec 30 13:31:43.895: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:31:43.895: INFO: Found 0 / 1 Dec 30 13:31:44.851: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:31:44.851: INFO: Found 0 / 1 Dec 30 13:31:45.858: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:31:45.858: INFO: Found 1 / 1 Dec 30 13:31:45.858: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 30 13:31:45.864: INFO: Selector matched 1 pods for map[app:redis] Dec 30 13:31:45.864: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for matching strings Dec 30 13:31:45.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2zbhs redis-master --namespace=kubectl-1423' Dec 30 13:31:46.016: INFO: stderr: "" Dec 30 13:31:46.016: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 30 Dec 13:31:44.567 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Dec 13:31:44.567 # Server started, Redis version 3.2.12\n1:M 30 Dec 13:31:44.568 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Dec 13:31:44.568 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Dec 30 13:31:46.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2zbhs redis-master --namespace=kubectl-1423 --tail=1' Dec 30 13:31:46.144: INFO: stderr: "" Dec 30 13:31:46.144: INFO: stdout: "1:M 30 Dec 13:31:44.568 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Dec 30 13:31:46.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2zbhs redis-master --namespace=kubectl-1423 --limit-bytes=1' Dec 30 13:31:46.280: INFO: stderr: "" Dec 30 13:31:46.280: INFO: stdout: " " STEP: exposing timestamps Dec 30 13:31:46.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2zbhs redis-master --namespace=kubectl-1423 --tail=1 --timestamps' Dec 30 13:31:46.432: INFO: stderr: "" Dec 30 13:31:46.432: INFO: stdout: "2019-12-30T13:31:44.573582734Z 1:M 30 Dec 13:31:44.568 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Dec 30 13:31:48.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2zbhs redis-master --namespace=kubectl-1423 --since=1s' Dec 30 13:31:49.052: INFO: stderr: "" Dec 30 13:31:49.052: INFO: stdout: "" Dec 30 13:31:49.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2zbhs redis-master --namespace=kubectl-1423 --since=24h' Dec 30 13:31:49.159: INFO: stderr: "" Dec 30 13:31:49.159: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 30 Dec 13:31:44.567 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Dec 13:31:44.567 # Server started, Redis version 3.2.12\n1:M 30 Dec 13:31:44.568 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Dec 13:31:44.568 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Dec 30 13:31:49.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1423' Dec 30 13:31:49.277: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 30 13:31:49.277: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Dec 30 13:31:49.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1423' Dec 30 13:31:49.388: INFO: stderr: "No resources found.\n" Dec 30 13:31:49.388: INFO: stdout: "" Dec 30 13:31:49.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1423 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 30 13:31:49.496: INFO: stderr: "" Dec 30 13:31:49.496: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:31:49.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1423" for this suite. 
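The filtering flags exercised above work against any pod; the spec's assertions map one-to-one onto these invocations (the pod, container, and namespace names mirror the log, but that namespace is deleted by teardown, so substitute a live pod):

kubectl logs redis-master-2zbhs redis-master -n kubectl-1423              # full log
kubectl logs redis-master-2zbhs redis-master -n kubectl-1423 --tail=1     # last line only
kubectl logs redis-master-2zbhs redis-master -n kubectl-1423 --limit-bytes=1
kubectl logs redis-master-2zbhs redis-master -n kubectl-1423 --tail=1 --timestamps
kubectl logs redis-master-2zbhs redis-master -n kubectl-1423 --since=1s   # empty when the pod has been quiet
kubectl logs redis-master-2zbhs redis-master -n kubectl-1423 --since=24h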
Dec 30 13:32:11.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:32:11.727: INFO: namespace kubectl-1423 deletion completed in 22.190301499s • [SLOW TEST:34.290 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:32:11.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-e52d6c41-035d-4312-9420-d62c53e997cd STEP: Creating secret with name s-test-opt-upd-4dd78dac-22d9-4acf-8849-6ac5f0b411b2 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e52d6c41-035d-4312-9420-d62c53e997cd STEP: Updating secret s-test-opt-upd-4dd78dac-22d9-4acf-8849-6ac5f0b411b2 STEP: Creating secret with name s-test-opt-create-eb9707b3-fbb6-46aa-a975-8c3ad3d1b9f7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:32:26.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1971" for this suite. 
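The secrets spec above hinges on optional: true in the volume source, which lets the pod start and keep running while secrets are deleted, updated, and created underneath it. A sketch with illustrative names:

kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: opt
      mountPath: /etc/opt-secret
  volumes:
  - name: opt
    secret:
      secretName: s-test-opt-upd
      optional: true   # the pod is not blocked on the secret existing
EOF
# update the secret, then watch the kubelet refresh the mounted copy
# (the sync can take up to a kubelet sync period)
kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl apply -f -
kubectl exec optional-secret-demo -- cat /etc/opt-secret/data-1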
Dec 30 13:32:50.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:32:50.433: INFO: namespace secrets-1971 deletion completed in 24.170721275s • [SLOW TEST:38.705 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:32:50.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Dec 30 13:32:58.658: INFO: Pod pod-hostip-bb7267da-0a76-41cc-b075-49f27456af0d has hostIP: 10.96.3.65 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:32:58.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2046" for this suite. 
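The host-IP assertion above reduces to one field on pod status; by hand, with an illustrative pod name and image:

kubectl run hostip-demo --image=busybox --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/hostip-demo --timeout=2m
# the field the spec checks: populated once the pod is bound and running
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}'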
Dec 30 13:33:20.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 30 13:33:20.896: INFO: namespace pods-2046 deletion completed in 22.232732475s • [SLOW TEST:30.461 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 30 13:33:20.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 30 13:33:21.053: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf27d247-496e-4195-9797-05a8eba23754" in namespace "projected-3242" to be "success or failure" Dec 30 13:33:21.090: INFO: Pod "downwardapi-volume-cf27d247-496e-4195-9797-05a8eba23754": Phase="Pending", Reason="", readiness=false. Elapsed: 37.509101ms Dec 30 13:33:23.098: INFO: Pod "downwardapi-volume-cf27d247-496e-4195-9797-05a8eba23754": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045567245s Dec 30 13:33:25.107: INFO: Pod "downwardapi-volume-cf27d247-496e-4195-9797-05a8eba23754": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053750535s Dec 30 13:33:27.114: INFO: Pod "downwardapi-volume-cf27d247-496e-4195-9797-05a8eba23754": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060936749s Dec 30 13:33:29.122: INFO: Pod "downwardapi-volume-cf27d247-496e-4195-9797-05a8eba23754": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069253979s Dec 30 13:33:31.133: INFO: Pod "downwardapi-volume-cf27d247-496e-4195-9797-05a8eba23754": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079836688s STEP: Saw pod success Dec 30 13:33:31.133: INFO: Pod "downwardapi-volume-cf27d247-496e-4195-9797-05a8eba23754" satisfied condition "success or failure" Dec 30 13:33:31.138: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cf27d247-496e-4195-9797-05a8eba23754 container client-container: STEP: delete the pod Dec 30 13:33:31.191: INFO: Waiting for pod downwardapi-volume-cf27d247-496e-4195-9797-05a8eba23754 to disappear Dec 30 13:33:31.204: INFO: Pod downwardapi-volume-cf27d247-496e-4195-9797-05a8eba23754 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:33:31.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3242" for this suite. 
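The downward API volume behaviour tested above projects a container's own requests.cpu into a file it can read. A sketch (names and the 250m request are illustrative; divisor 1m makes the file read 250 millicores):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
EOF
kubectl logs downward-cpu-demo   # prints 250, per the divisor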
Dec 30 13:33:37.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:33:37.497: INFO: namespace projected-3242 deletion completed in 6.285185619s

• [SLOW TEST:16.600 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:33:37.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-071819a0-c73f-4979-ade9-45cf7d31a768 in namespace container-probe-3441
Dec 30 13:33:45.646: INFO: Started pod test-webserver-071819a0-c73f-4979-ade9-45cf7d31a768 in namespace container-probe-3441
STEP: checking the pod's current state and verifying that restartCount is present
Dec 30 13:33:45.650: INFO: Initial restart count of pod test-webserver-071819a0-c73f-4979-ade9-45cf7d31a768 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:37:47.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3441" for this suite.
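
The probe shape this spec relies on, sketched with stand-in names and image (the suite's own test-webserver image serves the probe path). Because /healthz keeps answering 200, the kubelet never restarts the container, which is why restartCount stays at its initial 0 across the roughly four-minute observation window above:

  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-demo               # hypothetical name
  spec:
    containers:
    - name: test-webserver
      image: nginx:1.17               # assumption: any server answering the probe path
      livenessProbe:
        httpGet:
          path: /healthz              # endpoint the kubelet polls
          port: 80
        initialDelaySeconds: 15
        timeoutSeconds: 1
        failureThreshold: 3
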
Dec 30 13:37:53.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:37:53.611: INFO: namespace container-probe-3441 deletion completed in 6.177363004s

• [SLOW TEST:256.114 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:37:53.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 30 13:37:53.801: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a683bcb-58ce-430a-a364-9dd736ac382d" in namespace "projected-8184" to be "success or failure"
Dec 30 13:37:53.814: INFO: Pod "downwardapi-volume-8a683bcb-58ce-430a-a364-9dd736ac382d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.567361ms
Dec 30 13:37:55.825: INFO: Pod "downwardapi-volume-8a683bcb-58ce-430a-a364-9dd736ac382d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02344229s
Dec 30 13:37:57.838: INFO: Pod "downwardapi-volume-8a683bcb-58ce-430a-a364-9dd736ac382d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03669693s
Dec 30 13:37:59.848: INFO: Pod "downwardapi-volume-8a683bcb-58ce-430a-a364-9dd736ac382d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04693834s
Dec 30 13:38:01.858: INFO: Pod "downwardapi-volume-8a683bcb-58ce-430a-a364-9dd736ac382d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056849639s
Dec 30 13:38:03.870: INFO: Pod "downwardapi-volume-8a683bcb-58ce-430a-a364-9dd736ac382d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068112095s
STEP: Saw pod success
Dec 30 13:38:03.870: INFO: Pod "downwardapi-volume-8a683bcb-58ce-430a-a364-9dd736ac382d" satisfied condition "success or failure"
Dec 30 13:38:03.878: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8a683bcb-58ce-430a-a364-9dd736ac382d container client-container: 
STEP: delete the pod
Dec 30 13:38:04.008: INFO: Waiting for pod downwardapi-volume-8a683bcb-58ce-430a-a364-9dd736ac382d to disappear
Dec 30 13:38:04.027: INFO: Pod downwardapi-volume-8a683bcb-58ce-430a-a364-9dd736ac382d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:38:04.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8184" for this suite.
Dec 30 13:38:10.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:38:10.215: INFO: namespace projected-8184 deletion completed in 6.180761716s

• [SLOW TEST:16.604 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath 
  Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:38:10.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-94wf
STEP: Creating a pod to test atomic-volume-subpath
Dec 30 13:38:10.348: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-94wf" in namespace "subpath-1633" to be "success or failure"
Dec 30 13:38:10.351: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.086263ms
Dec 30 13:38:12.360: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011595423s
Dec 30 13:38:14.532: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183642611s
Dec 30 13:38:16.547: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.198862615s
Dec 30 13:38:18.560: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.211424188s
Dec 30 13:38:20.574: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Running", Reason="", readiness=true. Elapsed: 10.226199345s
Dec 30 13:38:22.584: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Running", Reason="", readiness=true. Elapsed: 12.235683848s
Dec 30 13:38:24.593: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Running", Reason="", readiness=true. Elapsed: 14.244435615s
Dec 30 13:38:26.621: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Running", Reason="", readiness=true. Elapsed: 16.272823824s
Dec 30 13:38:28.636: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Running", Reason="", readiness=true. Elapsed: 18.287912687s
Dec 30 13:38:30.646: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Running", Reason="", readiness=true. Elapsed: 20.297752846s
Dec 30 13:38:32.659: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Running", Reason="", readiness=true. Elapsed: 22.310366663s
Dec 30 13:38:34.669: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Running", Reason="", readiness=true. Elapsed: 24.321132794s
Dec 30 13:38:36.683: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Running", Reason="", readiness=true. Elapsed: 26.335086055s
Dec 30 13:38:38.691: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Running", Reason="", readiness=true. Elapsed: 28.34257569s
Dec 30 13:38:40.707: INFO: Pod "pod-subpath-test-secret-94wf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.35879682s
STEP: Saw pod success
Dec 30 13:38:40.707: INFO: Pod "pod-subpath-test-secret-94wf" satisfied condition "success or failure"
Dec 30 13:38:40.711: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-94wf container test-container-subpath-secret-94wf: 
STEP: delete the pod
Dec 30 13:38:40.819: INFO: Waiting for pod pod-subpath-test-secret-94wf to disappear
Dec 30 13:38:40.830: INFO: Pod pod-subpath-test-secret-94wf no longer exists
STEP: Deleting pod pod-subpath-test-secret-94wf
Dec 30 13:38:40.830: INFO: Deleting pod "pod-subpath-test-secret-94wf" in namespace "subpath-1633"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:38:40.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1633" for this suite.
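
The subPath mechanics under test, sketched with hypothetical names: mounting a secret volume with subPath exposes a single key as a file rather than the whole volume directory, and the atomic-writer machinery must keep that file consistent while the secret updates underneath it:

  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo                # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox:1.29
      command: ["sh", "-c", "cat /mnt/secret-key"]
      volumeMounts:
      - name: secret-vol
        mountPath: /mnt/secret-key
        subPath: key1                 # hypothetical key inside the secret
    volumes:
    - name: secret-vol
      secret:
        secretName: demo-secret       # hypothetical secret
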
Dec 30 13:38:46.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:38:47.243: INFO: namespace subpath-1633 deletion completed in 6.395441155s

• [SLOW TEST:37.027 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:38:47.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 30 13:38:47.363: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec601a75-e957-42c9-b594-72424a923330" in namespace "downward-api-3555" to be "success or failure"
Dec 30 13:38:47.370: INFO: Pod "downwardapi-volume-ec601a75-e957-42c9-b594-72424a923330": Phase="Pending", Reason="", readiness=false. Elapsed: 7.084066ms
Dec 30 13:38:49.392: INFO: Pod "downwardapi-volume-ec601a75-e957-42c9-b594-72424a923330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029069601s
Dec 30 13:38:51.405: INFO: Pod "downwardapi-volume-ec601a75-e957-42c9-b594-72424a923330": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042067398s
Dec 30 13:38:53.413: INFO: Pod "downwardapi-volume-ec601a75-e957-42c9-b594-72424a923330": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049931967s
Dec 30 13:38:55.419: INFO: Pod "downwardapi-volume-ec601a75-e957-42c9-b594-72424a923330": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056114568s
Dec 30 13:38:57.428: INFO: Pod "downwardapi-volume-ec601a75-e957-42c9-b594-72424a923330": Phase="Running", Reason="", readiness=true. Elapsed: 10.06447031s
Dec 30 13:38:59.439: INFO: Pod "downwardapi-volume-ec601a75-e957-42c9-b594-72424a923330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.07532679s
STEP: Saw pod success
Dec 30 13:38:59.439: INFO: Pod "downwardapi-volume-ec601a75-e957-42c9-b594-72424a923330" satisfied condition "success or failure"
Dec 30 13:38:59.445: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ec601a75-e957-42c9-b594-72424a923330 container client-container: 
STEP: delete the pod
Dec 30 13:38:59.679: INFO: Waiting for pod downwardapi-volume-ec601a75-e957-42c9-b594-72424a923330 to disappear
Dec 30 13:38:59.686: INFO: Pod downwardapi-volume-ec601a75-e957-42c9-b594-72424a923330 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:38:59.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3555" for this suite.
Dec 30 13:39:05.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:39:05.919: INFO: namespace downward-api-3555 deletion completed in 6.22831483s

• [SLOW TEST:18.676 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:39:05.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-bcb0547e-ee65-4696-964b-ed74e48caa15
STEP: Creating a pod to test consume configMaps
Dec 30 13:39:06.074: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0fb49387-704e-4d21-ba0b-5d6bbe6b70b9" in namespace "projected-1120" to be "success or failure"
Dec 30 13:39:06.080: INFO: Pod "pod-projected-configmaps-0fb49387-704e-4d21-ba0b-5d6bbe6b70b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205703ms
Dec 30 13:39:08.090: INFO: Pod "pod-projected-configmaps-0fb49387-704e-4d21-ba0b-5d6bbe6b70b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016395979s
Dec 30 13:39:10.133: INFO: Pod "pod-projected-configmaps-0fb49387-704e-4d21-ba0b-5d6bbe6b70b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059080959s
Dec 30 13:39:12.149: INFO: Pod "pod-projected-configmaps-0fb49387-704e-4d21-ba0b-5d6bbe6b70b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075385601s
Dec 30 13:39:14.158: INFO: Pod "pod-projected-configmaps-0fb49387-704e-4d21-ba0b-5d6bbe6b70b9": Phase="Running", Reason="", readiness=true. Elapsed: 8.083577922s
Dec 30 13:39:16.165: INFO: Pod "pod-projected-configmaps-0fb49387-704e-4d21-ba0b-5d6bbe6b70b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09090929s
STEP: Saw pod success
Dec 30 13:39:16.165: INFO: Pod "pod-projected-configmaps-0fb49387-704e-4d21-ba0b-5d6bbe6b70b9" satisfied condition "success or failure"
Dec 30 13:39:16.170: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-0fb49387-704e-4d21-ba0b-5d6bbe6b70b9 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 30 13:39:16.257: INFO: Waiting for pod pod-projected-configmaps-0fb49387-704e-4d21-ba0b-5d6bbe6b70b9 to disappear
Dec 30 13:39:16.273: INFO: Pod pod-projected-configmaps-0fb49387-704e-4d21-ba0b-5d6bbe6b70b9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:39:16.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1120" for this suite.
Dec 30 13:39:22.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:39:22.560: INFO: namespace projected-1120 deletion completed in 6.279442912s

• [SLOW TEST:16.640 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:39:22.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 30 13:39:22.680: INFO: PodSpec: initContainers in spec.initContainers
Dec 30 13:40:32.636: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1e24a9c6-97cd-4fb6-b8d9-a0cf3b1f8706", GenerateName:"", Namespace:"init-container-7008", SelfLink:"/api/v1/namespaces/init-container-7008/pods/pod-init-1e24a9c6-97cd-4fb6-b8d9-a0cf3b1f8706", UID:"1cf71ece-0e00-490f-b442-1e2f349acea3", ResourceVersion:"18646578", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713309962, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"680544613"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-h9ttw",
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002126000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h9ttw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h9ttw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, 
scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h9ttw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0033a80d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002d2a000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0033a81c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0033a81f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0033a81f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0033a81fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713309962, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713309962, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713309962, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713309962, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", 
PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc003306060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025c82a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025c8310)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://67750677b422f243cf9475a20a0089eabb551a0a070eb0b0a1f8ff989b7f114c"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0033060a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003306080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 30 13:40:32.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7008" for this suite. 
Dec 30 13:40:54.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:40:54.865: INFO: namespace init-container-7008 deletion completed in 22.134600983s

• [SLOW TEST:92.305 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:40:54.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Dec 30 13:41:05.646: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-501 pod-service-account-72079212-4239-49d4-bd98-19d33d879339 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Dec 30 13:41:08.186: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-501 pod-service-account-72079212-4239-49d4-bd98-19d33d879339 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Dec 30 13:41:08.778: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-501 pod-service-account-72079212-4239-49d4-bd98-19d33d879339 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:41:09.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-501" for this suite.
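
The three kubectl exec invocations above read the files the kubelet projects into any pod that mounts a service account token. A minimal sketch (pod name is hypothetical); no volume needs to be declared, since the service account admission controller injects the mount automatically:

  apiVersion: v1
  kind: Pod
  metadata:
    name: svcaccount-demo             # hypothetical name
  spec:
    serviceAccountName: default
    containers:
    - name: test
      image: busybox:1.29
      # the token directory the test reads:
      #   /var/run/secrets/kubernetes.io/serviceaccount/{token,ca.crt,namespace}
      command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount"]
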
Dec 30 13:41:15.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:41:15.341: INFO: namespace svcaccounts-501 deletion completed in 6.155868594s

• [SLOW TEST:20.476 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:41:15.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1230 13:41:19.182501 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 30 13:41:19.182: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:41:19.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1475" for this suite.
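
What "not orphaning" means concretely, with an illustrative Deployment: its ReplicaSet and pods carry ownerReferences back to it, so deleting the Deployment with a Background or Foreground propagation policy (rather than Orphan) lets the garbage collector remove them, which is the countdown the STEP lines above record:

  # hypothetical manifest; deleting it without --cascade=false (v1.15 kubectl)
  # leaves ReplicaSet and pod cleanup to the garbage collector
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: gc-demo
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: gc-demo
    template:
      metadata:
        labels:
          app: gc-demo
      spec:
        containers:
        - name: nginx
          image: nginx:1.17
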
Dec 30 13:41:25.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:41:25.748: INFO: namespace gc-1475 deletion completed in 6.558372783s

• [SLOW TEST:10.406 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:41:25.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 30 13:41:25.890: INFO: Waiting up to 5m0s for pod "pod-554df0a5-4e82-4f7a-905b-c6d7b1bfcb52" in namespace "emptydir-1480" to be "success or failure"
Dec 30 13:41:25.915: INFO: Pod "pod-554df0a5-4e82-4f7a-905b-c6d7b1bfcb52": Phase="Pending", Reason="", readiness=false. Elapsed: 24.890409ms
Dec 30 13:41:27.922: INFO: Pod "pod-554df0a5-4e82-4f7a-905b-c6d7b1bfcb52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031926529s
Dec 30 13:41:29.933: INFO: Pod "pod-554df0a5-4e82-4f7a-905b-c6d7b1bfcb52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042847476s
Dec 30 13:41:31.947: INFO: Pod "pod-554df0a5-4e82-4f7a-905b-c6d7b1bfcb52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056716047s
Dec 30 13:41:33.964: INFO: Pod "pod-554df0a5-4e82-4f7a-905b-c6d7b1bfcb52": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074352549s
Dec 30 13:41:35.973: INFO: Pod "pod-554df0a5-4e82-4f7a-905b-c6d7b1bfcb52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083299178s
STEP: Saw pod success
Dec 30 13:41:35.973: INFO: Pod "pod-554df0a5-4e82-4f7a-905b-c6d7b1bfcb52" satisfied condition "success or failure"
Dec 30 13:41:35.977: INFO: Trying to get logs from node iruya-node pod pod-554df0a5-4e82-4f7a-905b-c6d7b1bfcb52 container test-container: 
STEP: delete the pod
Dec 30 13:41:36.094: INFO: Waiting for pod pod-554df0a5-4e82-4f7a-905b-c6d7b1bfcb52 to disappear
Dec 30 13:41:36.105: INFO: Pod pod-554df0a5-4e82-4f7a-905b-c6d7b1bfcb52 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:41:36.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1480" for this suite.
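
The (root,0644,default) triple in the spec name translates to: write as root, expect file mode 0644, on the default emptyDir medium (node disk rather than tmpfs). A sketch with hypothetical names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo               # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      # write a file, force 0644, and print the mode back for verification
      command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                    # {} selects the default medium
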
Dec 30 13:41:42.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:41:42.262: INFO: namespace emptydir-1480 deletion completed in 6.146306083s

• [SLOW TEST:16.513 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking 
  Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:41:42.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3161
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 30 13:41:42.365: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 30 13:42:22.661: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-3161 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 13:42:22.661: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 13:42:23.024: INFO: Waiting for endpoints: map[]
Dec 30 13:42:23.034: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-3161 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 13:42:23.034: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 13:42:23.389: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:42:23.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3161" for this suite.
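
Each curl above runs inside a helper pod and asks one test pod to dial the other by pod IP, exercising CNI (here weave-net) connectivity across the 10.32/10.44 pod subnets. A stripped-down sketch of one endpoint pod; the real suite uses a dedicated test image with a /dial endpoint, and this stand-in merely serves HTTP on a pod IP that a peer can fetch:

  apiVersion: v1
  kind: Pod
  metadata:
    name: netcheck-a                  # hypothetical name
  spec:
    containers:
    - name: webserver
      image: nginx:1.17               # stand-in for the e2e network test image
      ports:
      - containerPort: 80             # the e2e pods listen on 8080
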
Dec 30 13:42:47.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:42:47.617: INFO: namespace pod-network-test-3161 deletion completed in 24.213615632s

• [SLOW TEST:65.354 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook 
  when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:42:47.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 30 13:43:05.847: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 13:43:05.890: INFO: Pod pod-with-prestop-http-hook still exists
Dec 30 13:43:07.890: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 13:43:07.920: INFO: Pod pod-with-prestop-http-hook still exists
Dec 30 13:43:09.890: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 13:43:09.902: INFO: Pod pod-with-prestop-http-hook still exists
Dec 30 13:43:11.890: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 13:43:11.898: INFO: Pod pod-with-prestop-http-hook still exists
Dec 30 13:43:13.890: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 13:43:13.903: INFO: Pod pod-with-prestop-http-hook still exists
Dec 30 13:43:15.890: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 13:43:15.900: INFO: Pod pod-with-prestop-http-hook still exists
Dec 30 13:43:17.890: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 30 13:43:17.899: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:43:17.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1290" for this suite.
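
The hook under test, sketched with hypothetical endpoint details: before sending SIGTERM on deletion, the kubelet performs the preStop HTTP GET, and the test passes once the handler pod records that request, which is what "check prestop hook" verifies after the waiting loop above:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-http-hook  # name as in the log; spec details are a sketch
  spec:
    containers:
    - name: test
      image: nginx:1.17               # illustrative image
      lifecycle:
        preStop:
          httpGet:
            path: /echo               # hypothetical handler path
            port: 8080
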
Dec 30 13:43:39.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:43:40.107: INFO: namespace container-lifecycle-hook-1290 deletion completed in 22.138627212s

• [SLOW TEST:52.490 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime 
  blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:43:40.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 30 13:43:48.351: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:43:48.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7984" for this suite.
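
The two knobs this spec combines, in sketch form with hypothetical names: terminationMessagePath moves the message file away from the default /dev/termination-log, and runAsUser makes the writer non-root. The "DONE" the log matches is whatever the container wrote to that file, surfaced in status.containerStatuses[].state.terminated.message:

  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-demo            # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox:1.29
      command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
      terminationMessagePath: /dev/termination-custom-log   # non-default path
      securityContext:
        runAsUser: 1000               # non-root writer
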
Dec 30 13:43:54.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:43:54.517: INFO: namespace container-runtime-7984 deletion completed in 6.137451891s

• [SLOW TEST:14.409 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:43:54.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:44:04.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2355" for this suite.
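
For context, the "wrapper volume" cases cover secret and configMap volumes, which the kubelet materializes through an internal emptyDir wrapper; "should not conflict" checks that two such volumes coexist in one pod. A hypothetical shape:

  apiVersion: v1
  kind: Pod
  metadata:
    name: wrapper-demo                # hypothetical name
  spec:
    containers:
    - name: test
      image: k8s.gcr.io/pause:3.1
      volumeMounts:
      - name: secret-vol
        mountPath: /etc/secret-volume
      - name: configmap-vol
        mountPath: /etc/configmap-volume
    volumes:
    - name: secret-vol
      secret:
        secretName: demo-secret       # hypothetical secret
    - name: configmap-vol
      configMap:
        name: demo-configmap          # hypothetical configmap
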
Dec 30 13:44:10.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:44:11.076: INFO: namespace emptydir-wrapper-2355 deletion completed in 6.240992961s

• [SLOW TEST:16.558 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:44:11.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8c8f6f36-120c-444e-b6eb-1d7a9ed1eacd
STEP: Creating a pod to test consume secrets
Dec 30 13:44:11.248: INFO: Waiting up to 5m0s for pod "pod-secrets-2af32bc3-5d54-4b17-954b-675a23fd1ade" in namespace "secrets-9026" to be "success or failure"
Dec 30 13:44:11.267: INFO: Pod "pod-secrets-2af32bc3-5d54-4b17-954b-675a23fd1ade": Phase="Pending", Reason="", readiness=false. Elapsed: 18.551077ms
Dec 30 13:44:13.283: INFO: Pod "pod-secrets-2af32bc3-5d54-4b17-954b-675a23fd1ade": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033914236s
Dec 30 13:44:15.288: INFO: Pod "pod-secrets-2af32bc3-5d54-4b17-954b-675a23fd1ade": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039173978s
Dec 30 13:44:17.309: INFO: Pod "pod-secrets-2af32bc3-5d54-4b17-954b-675a23fd1ade": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060606177s
Dec 30 13:44:19.323: INFO: Pod "pod-secrets-2af32bc3-5d54-4b17-954b-675a23fd1ade": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074514044s
Dec 30 13:44:21.333: INFO: Pod "pod-secrets-2af32bc3-5d54-4b17-954b-675a23fd1ade": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084275466s
STEP: Saw pod success
Dec 30 13:44:21.333: INFO: Pod "pod-secrets-2af32bc3-5d54-4b17-954b-675a23fd1ade" satisfied condition "success or failure"
Dec 30 13:44:21.338: INFO: Trying to get logs from node iruya-node pod pod-secrets-2af32bc3-5d54-4b17-954b-675a23fd1ade container secret-env-test: 
STEP: delete the pod
Dec 30 13:44:21.402: INFO: Waiting for pod pod-secrets-2af32bc3-5d54-4b17-954b-675a23fd1ade to disappear
Dec 30 13:44:21.426: INFO: Pod pod-secrets-2af32bc3-5d54-4b17-954b-675a23fd1ade no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:44:21.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9026" for this suite.
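
The env-var consumption path this spec checks, sketched with hypothetical secret and key names; the kubelet resolves the secretKeyRef at container start, and the test asserts on the echoed value:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo            # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: secret-env-test
      image: busybox:1.29
      command: ["sh", "-c", "echo $SECRET_DATA"]
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef:
            name: demo-secret         # hypothetical secret
            key: data-1               # hypothetical key
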
Dec 30 13:44:27.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:44:27.654: INFO: namespace secrets-9026 deletion completed in 6.219408687s

• [SLOW TEST:16.579 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Proxy 
  version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:44:27.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 13:44:27.862: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 27.9155ms)
Dec 30 13:44:27.903: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 40.474446ms)
Dec 30 13:44:27.909: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.358301ms)
Dec 30 13:44:27.914: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.795537ms)
Dec 30 13:44:27.932: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.697438ms)
Dec 30 13:44:27.936: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.612688ms)
Dec 30 13:44:27.941: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.672839ms)
Dec 30 13:44:27.945: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.69154ms)
Dec 30 13:44:27.951: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.374245ms)
Dec 30 13:44:27.956: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.419897ms)
Dec 30 13:44:27.961: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.988841ms)
Dec 30 13:44:27.965: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.602353ms)
Dec 30 13:44:27.972: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.353571ms)
Dec 30 13:44:27.977: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.047835ms)
Dec 30 13:44:27.981: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.99546ms)
Dec 30 13:44:27.986: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.253587ms)
Dec 30 13:44:27.995: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.871985ms)
Dec 30 13:44:28.010: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.734065ms)
Dec 30 13:44:28.019: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.941672ms)
Dec 30 13:44:28.027: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.791848ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:44:28.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-635" for this suite.
Dec 30 13:44:34.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:44:34.148: INFO: namespace proxy-635 deletion completed in 6.11797367s

• [SLOW TEST:6.494 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
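For reference, the twenty iterations above all issue the same GET against the node's proxy subresource, with the kubelet port carried in the node name. A minimal client-go sketch of that request (assuming client-go v0.18+, where DoRaw takes a context; kubeconfig path and node name taken from the log):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite loads.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/nodes/iruya-node:10250/proxy/logs/ -- the node name
	// includes the explicit kubelet port, exactly as in the URLs above.
	body, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("iruya-node:10250").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", body) // directory listing: alternatives.log, ...
}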
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:44:34.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 30 13:44:34.258: INFO: Waiting up to 5m0s for pod "downward-api-9f45f864-27fd-4ffa-ae71-89c0db47750c" in namespace "downward-api-1536" to be "success or failure"
Dec 30 13:44:34.268: INFO: Pod "downward-api-9f45f864-27fd-4ffa-ae71-89c0db47750c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.186253ms
Dec 30 13:44:36.277: INFO: Pod "downward-api-9f45f864-27fd-4ffa-ae71-89c0db47750c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018852753s
Dec 30 13:44:38.360: INFO: Pod "downward-api-9f45f864-27fd-4ffa-ae71-89c0db47750c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101932925s
Dec 30 13:44:40.370: INFO: Pod "downward-api-9f45f864-27fd-4ffa-ae71-89c0db47750c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111703082s
Dec 30 13:44:42.378: INFO: Pod "downward-api-9f45f864-27fd-4ffa-ae71-89c0db47750c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120431683s
Dec 30 13:44:44.387: INFO: Pod "downward-api-9f45f864-27fd-4ffa-ae71-89c0db47750c": Phase="Running", Reason="", readiness=true. Elapsed: 10.129453282s
Dec 30 13:44:46.398: INFO: Pod "downward-api-9f45f864-27fd-4ffa-ae71-89c0db47750c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.140145794s
STEP: Saw pod success
Dec 30 13:44:46.398: INFO: Pod "downward-api-9f45f864-27fd-4ffa-ae71-89c0db47750c" satisfied condition "success or failure"
Dec 30 13:44:46.405: INFO: Trying to get logs from node iruya-node pod downward-api-9f45f864-27fd-4ffa-ae71-89c0db47750c container dapi-container: 
STEP: delete the pod
Dec 30 13:44:46.485: INFO: Waiting for pod downward-api-9f45f864-27fd-4ffa-ae71-89c0db47750c to disappear
Dec 30 13:44:46.508: INFO: Pod downward-api-9f45f864-27fd-4ffa-ae71-89c0db47750c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:44:46.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1536" for this suite.
Dec 30 13:44:52.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:44:52.720: INFO: namespace downward-api-1536 deletion completed in 6.20529089s

• [SLOW TEST:18.571 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
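The pod this test creates boils down to a single env var filled in from the downward API. A sketch using k8s.io/api types (container name from the log; image and command are stand-ins):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostIPEnvPod: HOST_IP is injected from status.hostIP via the downward API.
func hostIPEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // stand-in image
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
}

func main() { _ = hostIPEnvPod() }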
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:44:52.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 13:44:52.847: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 30 13:44:52.888: INFO: Number of nodes with available pods: 0
Dec 30 13:44:52.888: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:44:53.914: INFO: Number of nodes with available pods: 0
Dec 30 13:44:53.914: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:44:54.904: INFO: Number of nodes with available pods: 0
Dec 30 13:44:54.904: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:44:55.905: INFO: Number of nodes with available pods: 0
Dec 30 13:44:55.905: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:44:56.910: INFO: Number of nodes with available pods: 0
Dec 30 13:44:56.910: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:44:57.914: INFO: Number of nodes with available pods: 0
Dec 30 13:44:57.914: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:45:00.317: INFO: Number of nodes with available pods: 0
Dec 30 13:45:00.317: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:45:01.684: INFO: Number of nodes with available pods: 0
Dec 30 13:45:01.684: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:45:02.004: INFO: Number of nodes with available pods: 0
Dec 30 13:45:02.004: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:45:02.903: INFO: Number of nodes with available pods: 0
Dec 30 13:45:02.903: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:45:03.918: INFO: Number of nodes with available pods: 2
Dec 30 13:45:03.918: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 30 13:45:03.984: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:03.984: INFO: Wrong image for pod: daemon-set-kz5sv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:05.029: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:05.029: INFO: Wrong image for pod: daemon-set-kz5sv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:06.028: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:06.028: INFO: Wrong image for pod: daemon-set-kz5sv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:07.031: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:07.031: INFO: Wrong image for pod: daemon-set-kz5sv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:08.033: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:08.033: INFO: Wrong image for pod: daemon-set-kz5sv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:09.031: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:09.031: INFO: Wrong image for pod: daemon-set-kz5sv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:10.028: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:10.028: INFO: Wrong image for pod: daemon-set-kz5sv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:10.028: INFO: Pod daemon-set-kz5sv is not available
Dec 30 13:45:11.028: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:11.028: INFO: Pod daemon-set-mtnbj is not available
Dec 30 13:45:12.030: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:12.030: INFO: Pod daemon-set-mtnbj is not available
Dec 30 13:45:13.027: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:13.027: INFO: Pod daemon-set-mtnbj is not available
Dec 30 13:45:14.039: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:14.039: INFO: Pod daemon-set-mtnbj is not available
Dec 30 13:45:15.269: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:15.269: INFO: Pod daemon-set-mtnbj is not available
Dec 30 13:45:16.028: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:16.028: INFO: Pod daemon-set-mtnbj is not available
Dec 30 13:45:17.041: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:17.041: INFO: Pod daemon-set-mtnbj is not available
Dec 30 13:45:18.033: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:18.033: INFO: Pod daemon-set-mtnbj is not available
Dec 30 13:45:19.056: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:20.026: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:21.036: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:22.030: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:23.032: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:24.036: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:24.036: INFO: Pod daemon-set-922bl is not available
Dec 30 13:45:25.029: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:25.029: INFO: Pod daemon-set-922bl is not available
Dec 30 13:45:26.031: INFO: Wrong image for pod: daemon-set-922bl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 30 13:45:26.031: INFO: Pod daemon-set-922bl is not available
Dec 30 13:45:27.033: INFO: Pod daemon-set-8vs8x is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 30 13:45:27.050: INFO: Number of nodes with available pods: 1
Dec 30 13:45:27.050: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:45:28.075: INFO: Number of nodes with available pods: 1
Dec 30 13:45:28.075: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:45:29.066: INFO: Number of nodes with available pods: 1
Dec 30 13:45:29.067: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:45:30.067: INFO: Number of nodes with available pods: 1
Dec 30 13:45:30.067: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:45:31.062: INFO: Number of nodes with available pods: 1
Dec 30 13:45:31.062: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:45:32.063: INFO: Number of nodes with available pods: 1
Dec 30 13:45:32.063: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:45:33.063: INFO: Number of nodes with available pods: 1
Dec 30 13:45:33.063: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:45:34.062: INFO: Number of nodes with available pods: 1
Dec 30 13:45:34.062: INFO: Node iruya-node is running more than one daemon pod
Dec 30 13:45:35.066: INFO: Number of nodes with available pods: 2
Dec 30 13:45:35.066: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1001, will wait for the garbage collector to delete the pods
Dec 30 13:45:35.168: INFO: Deleting DaemonSet.extensions daemon-set took: 18.736301ms
Dec 30 13:45:35.468: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.516457ms
Dec 30 13:45:47.879: INFO: Number of nodes with available pods: 0
Dec 30 13:45:47.879: INFO: Number of running nodes: 0, number of available pods: 0
Dec 30 13:45:47.911: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1001/daemonsets","resourceVersion":"18647393"},"items":null}

Dec 30 13:45:47.917: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1001/pods","resourceVersion":"18647393"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:45:47.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1001" for this suite.
Dec 30 13:45:53.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:45:54.091: INFO: namespace daemonsets-1001 deletion completed in 6.160423334s

• [SLOW TEST:61.371 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
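The update sequence above (nginx pods drained one node at a time, then the redis pods becoming available) is driven by the RollingUpdate strategy. A sketch of the DaemonSet shape, with the two images from the log (label key is hypothetical):

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func rollingDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"} // hypothetical label key
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate makes a template change replace pods node by node.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/nginx:1.14-alpine",
					// updating this to gcr.io/kubernetes-e2e-test-images/redis:1.0
					// triggers the rollout logged above
				}}},
			},
		},
	}
}

func main() { _ = rollingDaemonSet() }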
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:45:54.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 30 13:46:02.216: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-552e59a5-573c-4598-8e70-b59bf0a8426a,GenerateName:,Namespace:events-8369,SelfLink:/api/v1/namespaces/events-8369/pods/send-events-552e59a5-573c-4598-8e70-b59bf0a8426a,UID:9b1a1323-9f41-4e17-9190-592f570c1e22,ResourceVersion:18647456,Generation:0,CreationTimestamp:2019-12-30 13:45:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 166013537,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-fznzj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fznzj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-fznzj true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00309c980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00309c9a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 13:45:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 13:46:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 13:46:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 13:45:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-30 13:45:54 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-30 13:46:01 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://3b8f31f77b660d6aa807db7373e054cedde19797af177ef424a98c572ec26590}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 30 13:46:04.226: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 30 13:46:06.232: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:46:06.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8369" for this suite.
Dec 30 13:46:48.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:46:48.691: INFO: namespace events-8369 deletion completed in 42.433312605s

• [SLOW TEST:54.599 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
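The scheduler and kubelet events the test waits for can be fetched with a field selector on the event's involvedObject. A sketch (client-go v0.18+; pod name and namespace from the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "events-8369"
	pod := "send-events-552e59a5-573c-4598-8e70-b59bf0a8426a"
	// Select only events referencing our pod; the scheduler's carry
	// source.component=default-scheduler, the kubelet's the node name.
	evs, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=" + pod + ",involvedObject.namespace=" + ns,
	})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
	}
}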
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:46:48.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-a0e2cf32-6738-4c07-ac83-37af1e204adb
STEP: Creating a pod to test consume configMaps
Dec 30 13:46:48.817: INFO: Waiting up to 5m0s for pod "pod-configmaps-5bf3fb24-2148-4f39-ab13-bf49106f127b" in namespace "configmap-2816" to be "success or failure"
Dec 30 13:46:48.831: INFO: Pod "pod-configmaps-5bf3fb24-2148-4f39-ab13-bf49106f127b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.56736ms
Dec 30 13:46:50.843: INFO: Pod "pod-configmaps-5bf3fb24-2148-4f39-ab13-bf49106f127b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02553018s
Dec 30 13:46:52.852: INFO: Pod "pod-configmaps-5bf3fb24-2148-4f39-ab13-bf49106f127b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034221438s
Dec 30 13:46:54.861: INFO: Pod "pod-configmaps-5bf3fb24-2148-4f39-ab13-bf49106f127b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043170866s
Dec 30 13:46:56.875: INFO: Pod "pod-configmaps-5bf3fb24-2148-4f39-ab13-bf49106f127b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057634861s
Dec 30 13:46:58.886: INFO: Pod "pod-configmaps-5bf3fb24-2148-4f39-ab13-bf49106f127b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068502853s
STEP: Saw pod success
Dec 30 13:46:58.886: INFO: Pod "pod-configmaps-5bf3fb24-2148-4f39-ab13-bf49106f127b" satisfied condition "success or failure"
Dec 30 13:46:58.894: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5bf3fb24-2148-4f39-ab13-bf49106f127b container configmap-volume-test: 
STEP: delete the pod
Dec 30 13:46:58.997: INFO: Waiting for pod pod-configmaps-5bf3fb24-2148-4f39-ab13-bf49106f127b to disappear
Dec 30 13:46:59.012: INFO: Pod pod-configmaps-5bf3fb24-2148-4f39-ab13-bf49106f127b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:46:59.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2816" for this suite.
Dec 30 13:47:05.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:47:05.184: INFO: namespace configmap-2816 deletion completed in 6.161838447s

• [SLOW TEST:16.492 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
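The consuming pod is just a configMap-backed volume plus a container that reads the mounted file. A sketch (volume name, mount path, image, and command are stand-ins; configMap name from the log):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func configMapPod(cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
}

func main() { _ = configMapPod("configmap-test-volume-a0e2cf32-6738-4c07-ac83-37af1e204adb") }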
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:47:05.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-qg5t
STEP: Creating a pod to test atomic-volume-subpath
Dec 30 13:47:05.308: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qg5t" in namespace "subpath-1621" to be "success or failure"
Dec 30 13:47:05.314: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Pending", Reason="", readiness=false. Elapsed: 5.792528ms
Dec 30 13:47:07.356: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048127591s
Dec 30 13:47:09.372: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064591114s
Dec 30 13:47:11.380: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072604805s
Dec 30 13:47:13.391: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082903304s
Dec 30 13:47:15.409: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Running", Reason="", readiness=true. Elapsed: 10.101160882s
Dec 30 13:47:17.418: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Running", Reason="", readiness=true. Elapsed: 12.110357319s
Dec 30 13:47:19.426: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Running", Reason="", readiness=true. Elapsed: 14.117798234s
Dec 30 13:47:21.455: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Running", Reason="", readiness=true. Elapsed: 16.146886968s
Dec 30 13:47:23.472: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Running", Reason="", readiness=true. Elapsed: 18.164531408s
Dec 30 13:47:25.479: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Running", Reason="", readiness=true. Elapsed: 20.171554995s
Dec 30 13:47:27.488: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Running", Reason="", readiness=true. Elapsed: 22.18003582s
Dec 30 13:47:29.495: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Running", Reason="", readiness=true. Elapsed: 24.187514749s
Dec 30 13:47:31.523: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Running", Reason="", readiness=true. Elapsed: 26.215327136s
Dec 30 13:47:33.530: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Running", Reason="", readiness=true. Elapsed: 28.221738178s
Dec 30 13:47:35.555: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Running", Reason="", readiness=true. Elapsed: 30.247249493s
Dec 30 13:47:37.563: INFO: Pod "pod-subpath-test-downwardapi-qg5t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.255303441s
STEP: Saw pod success
Dec 30 13:47:37.563: INFO: Pod "pod-subpath-test-downwardapi-qg5t" satisfied condition "success or failure"
Dec 30 13:47:37.567: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-qg5t container test-container-subpath-downwardapi-qg5t: 
STEP: delete the pod
Dec 30 13:47:37.673: INFO: Waiting for pod pod-subpath-test-downwardapi-qg5t to disappear
Dec 30 13:47:37.685: INFO: Pod pod-subpath-test-downwardapi-qg5t no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-qg5t
Dec 30 13:47:37.685: INFO: Deleting pod "pod-subpath-test-downwardapi-qg5t" in namespace "subpath-1621"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:47:37.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1621" for this suite.
Dec 30 13:47:45.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:47:45.842: INFO: namespace subpath-1621 deletion completed in 8.143888891s

• [SLOW TEST:40.657 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
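Atomic-writer volumes (configMap, secret, downwardAPI, projected) update their files atomically via symlink swaps; this test checks that a subPath mount into such a volume still behaves. A sketch of the downwardAPI-plus-subPath arrangement (file and mount paths are hypothetical):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func subpathDownwardPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-downwardapi-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "downward",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "downward",
					MountPath: "/test-volume",
					SubPath:   "podname", // mount just the one file out of the volume
				}},
			}},
		},
	}
}

func main() { _ = subpathDownwardPod() }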
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:47:45.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-18f1b932-f3a8-4196-936b-c043765abadf
STEP: Creating a pod to test consume secrets
Dec 30 13:47:45.955: INFO: Waiting up to 5m0s for pod "pod-secrets-03b7eee7-7801-4fd3-b6fb-240a4ac3da78" in namespace "secrets-9245" to be "success or failure"
Dec 30 13:47:45.965: INFO: Pod "pod-secrets-03b7eee7-7801-4fd3-b6fb-240a4ac3da78": Phase="Pending", Reason="", readiness=false. Elapsed: 9.342947ms
Dec 30 13:47:47.971: INFO: Pod "pod-secrets-03b7eee7-7801-4fd3-b6fb-240a4ac3da78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015652674s
Dec 30 13:47:49.981: INFO: Pod "pod-secrets-03b7eee7-7801-4fd3-b6fb-240a4ac3da78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025536092s
Dec 30 13:47:52.002: INFO: Pod "pod-secrets-03b7eee7-7801-4fd3-b6fb-240a4ac3da78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046869637s
Dec 30 13:47:54.013: INFO: Pod "pod-secrets-03b7eee7-7801-4fd3-b6fb-240a4ac3da78": Phase="Running", Reason="", readiness=true. Elapsed: 8.057662436s
Dec 30 13:47:56.019: INFO: Pod "pod-secrets-03b7eee7-7801-4fd3-b6fb-240a4ac3da78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063756923s
STEP: Saw pod success
Dec 30 13:47:56.019: INFO: Pod "pod-secrets-03b7eee7-7801-4fd3-b6fb-240a4ac3da78" satisfied condition "success or failure"
Dec 30 13:47:56.025: INFO: Trying to get logs from node iruya-node pod pod-secrets-03b7eee7-7801-4fd3-b6fb-240a4ac3da78 container secret-volume-test: 
STEP: delete the pod
Dec 30 13:47:56.088: INFO: Waiting for pod pod-secrets-03b7eee7-7801-4fd3-b6fb-240a4ac3da78 to disappear
Dec 30 13:47:56.096: INFO: Pod pod-secrets-03b7eee7-7801-4fd3-b6fb-240a4ac3da78 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:47:56.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9245" for this suite.
Dec 30 13:48:02.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:48:02.319: INFO: namespace secrets-9245 deletion completed in 6.219354385s

• [SLOW TEST:16.476 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
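The secret case mirrors the configMap one above; only the volume source differs. A sketch of that volume (secret name from the log; the mode is hypothetical, omit DefaultMode for the 0644 default):

package main

import corev1 "k8s.io/api/core/v1"

func secretVolume(secretName string) corev1.Volume {
	mode := int32(0444) // hypothetical: read-only files
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  secretName,
				DefaultMode: &mode,
			},
		},
	}
}

func main() { _ = secretVolume("secret-test-18f1b932-f3a8-4196-936b-c043765abadf") }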
SSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:48:02.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4870, will wait for the garbage collector to delete the pods
Dec 30 13:48:14.472: INFO: Deleting Job.batch foo took: 12.78232ms
Dec 30 13:48:14.773: INFO: Terminating Job.batch foo pods took: 300.698043ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:48:56.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4870" for this suite.
Dec 30 13:49:02.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:49:02.787: INFO: namespace job-4870 deletion completed in 6.161294102s

• [SLOW TEST:60.468 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
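"Will wait for the garbage collector to delete the pods" corresponds to a delete call whose propagation policy hands the pods to the GC instead of orphaning them. A sketch (client-go v0.18+; job name and namespace from the log; the exact policy the framework uses is an assumption here):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background: the Job object is removed immediately and the GC then
	// terminates its pods -- matching the two "took:" log lines above.
	policy := metav1.DeletePropagationBackground
	err = cs.BatchV1().Jobs("job-4870").Delete(context.TODO(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}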
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:49:02.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 30 13:49:02.877: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 30 13:49:07.887: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:49:08.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2862" for this suite.
Dec 30 13:49:15.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:49:15.254: INFO: namespace replication-controller-2862 deletion completed in 6.289428394s

• [SLOW TEST:12.467 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
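"Released" here means the RC stops owning the pod once its labels no longer match the selector; rewriting the pod's labels is enough to trigger it. A sketch using a merge patch (client-go v0.18+; pod name and label values are hypothetical):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Overwrite the selector label; the RC controller then releases the pod
	// (drops its ownerReference) and spawns a replacement.
	patch := []byte(`{"metadata":{"labels":{"name":"pod-release-released"}}}`)
	_, err = cs.CoreV1().Pods("replication-controller-2862").Patch(context.TODO(),
		"pod-release-xxxxx", // hypothetical; the real pod name is generated
		types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}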
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:49:15.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 30 13:49:15.416: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04e2cc52-18e8-47de-9586-687e5dce6c4b" in namespace "projected-4788" to be "success or failure"
Dec 30 13:49:15.523: INFO: Pod "downwardapi-volume-04e2cc52-18e8-47de-9586-687e5dce6c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 107.298151ms
Dec 30 13:49:17.534: INFO: Pod "downwardapi-volume-04e2cc52-18e8-47de-9586-687e5dce6c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118022552s
Dec 30 13:49:19.548: INFO: Pod "downwardapi-volume-04e2cc52-18e8-47de-9586-687e5dce6c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131886709s
Dec 30 13:49:21.564: INFO: Pod "downwardapi-volume-04e2cc52-18e8-47de-9586-687e5dce6c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14856937s
Dec 30 13:49:23.576: INFO: Pod "downwardapi-volume-04e2cc52-18e8-47de-9586-687e5dce6c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159904625s
Dec 30 13:49:25.584: INFO: Pod "downwardapi-volume-04e2cc52-18e8-47de-9586-687e5dce6c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.168492285s
Dec 30 13:49:27.597: INFO: Pod "downwardapi-volume-04e2cc52-18e8-47de-9586-687e5dce6c4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.180668828s
STEP: Saw pod success
Dec 30 13:49:27.597: INFO: Pod "downwardapi-volume-04e2cc52-18e8-47de-9586-687e5dce6c4b" satisfied condition "success or failure"
Dec 30 13:49:27.610: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-04e2cc52-18e8-47de-9586-687e5dce6c4b container client-container: 
STEP: delete the pod
Dec 30 13:49:27.684: INFO: Waiting for pod downwardapi-volume-04e2cc52-18e8-47de-9586-687e5dce6c4b to disappear
Dec 30 13:49:27.692: INFO: Pod downwardapi-volume-04e2cc52-18e8-47de-9586-687e5dce6c4b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:49:27.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4788" for this suite.
Dec 30 13:49:33.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:49:34.176: INFO: namespace projected-4788 deletion completed in 6.469252073s

• [SLOW TEST:18.921 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
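The "memory request" file comes from a projected volume carrying a downwardAPI resourceFieldRef. A sketch of that volume (container name from the log; volume and file names are hypothetical):

package main

import corev1 "k8s.io/api/core/v1"

// memoryRequestVolume projects the container's requests.memory into a file.
func memoryRequestVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				}},
			},
		},
	}
}

func main() { _ = memoryRequestVolume() }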
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:49:34.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 30 13:49:42.895: INFO: Successfully updated pod "pod-update-activedeadlineseconds-bb439347-43ef-43e5-b103-77fa96dab64d"
Dec 30 13:49:42.895: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-bb439347-43ef-43e5-b103-77fa96dab64d" in namespace "pods-2091" to be "terminated due to deadline exceeded"
Dec 30 13:49:42.906: INFO: Pod "pod-update-activedeadlineseconds-bb439347-43ef-43e5-b103-77fa96dab64d": Phase="Running", Reason="", readiness=true. Elapsed: 9.985176ms
Dec 30 13:49:44.915: INFO: Pod "pod-update-activedeadlineseconds-bb439347-43ef-43e5-b103-77fa96dab64d": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.018897923s
Dec 30 13:49:44.915: INFO: Pod "pod-update-activedeadlineseconds-bb439347-43ef-43e5-b103-77fa96dab64d" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:49:44.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2091" for this suite.
Dec 30 13:49:50.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:49:51.103: INFO: namespace pods-2091 deletion completed in 6.182721564s

• [SLOW TEST:16.926 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
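The update in the log shortens the running pod's activeDeadlineSeconds, after which the kubelet fails it with reason DeadlineExceeded. A sketch of that update (client-go v0.18+; pod name and the one-second value are hypothetical):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "pods-2091", "pod-update-activedeadlineseconds-demo" // hypothetical pod
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	deadline := int64(1) // seconds; the pod then goes Failed/DeadlineExceeded
	pod.Spec.ActiveDeadlineSeconds = &deadline
	if _, err := cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}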
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:49:51.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 30 13:49:51.270: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2448,SelfLink:/api/v1/namespaces/watch-2448/configmaps/e2e-watch-test-resource-version,UID:a85e312a-22f2-4fec-a244-64e1d41f8885,ResourceVersion:18648002,Generation:0,CreationTimestamp:2019-12-30 13:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 30 13:49:51.270: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2448,SelfLink:/api/v1/namespaces/watch-2448/configmaps/e2e-watch-test-resource-version,UID:a85e312a-22f2-4fec-a244-64e1d41f8885,ResourceVersion:18648003,Generation:0,CreationTimestamp:2019-12-30 13:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:49:51.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2448" for this suite.
Dec 30 13:49:57.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:49:57.485: INFO: namespace watch-2448 deletion completed in 6.209271073s

• [SLOW TEST:6.382 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
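Starting the watch at the resource version returned by the first update is what makes the later MODIFIED and DELETED notifications (mutation: 2) replayable after the fact. A sketch (client-go v0.18+; the resource version literal is hypothetical):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Resume from an older resourceVersion: the API server replays every
	// change after it, here the second modification and the deletion.
	w, err := cs.CoreV1().ConfigMaps("watch-2448").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
		ResourceVersion: "18648001", // hypothetical: version from the first update
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type) // expect MODIFIED, then DELETED
	}
}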
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:49:57.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7711
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 30 13:49:57.659: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 30 13:50:29.837: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7711 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 13:50:29.837: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 13:50:31.258: INFO: Found all expected endpoints: [netserver-0]
Dec 30 13:50:31.268: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7711 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 13:50:31.268: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 13:50:32.917: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:50:32.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7711" for this suite.
Dec 30 13:50:56.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:50:57.063: INFO: namespace pod-network-test-7711 deletion completed in 24.13575629s

• [SLOW TEST:59.577 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
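Each ExecWithOptions above pipes "hostName" over UDP with nc and expects the backing netserver pod to answer with its hostname. The same check in plain Go (pod IP and port from the log; only reachable from inside the cluster network):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Equivalent of: echo hostName | nc -w 1 -u 10.44.0.1 8081
	conn, err := net.DialTimeout("udp", "10.44.0.1:8081", time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	if _, err := fmt.Fprintf(conn, "hostName\n"); err != nil {
		panic(err)
	}
	_ = conn.SetReadDeadline(time.Now().Add(time.Second))
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("endpoint replied: %s\n", buf[:n]) // e.g. netserver-1
}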
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:50:57.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:51:07.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9372" for this suite.
Dec 30 13:51:51.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:51:51.322: INFO: namespace kubelet-test-9372 deletion completed in 44.152290603s

• [SLOW TEST:54.258 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
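The read-only check comes down to one securityContext field; any write to the root filesystem then fails with "read-only file system". A sketch of the container (image and command are stand-ins):

package main

import corev1 "k8s.io/api/core/v1"

func readOnlyContainer() corev1.Container {
	readOnly := true
	return corev1.Container{
		Name:    "busybox-readonly",
		Image:   "busybox",
		Command: []string{"sh", "-c", "touch /attempt; sleep 3600"}, // touch fails: read-only file system
		SecurityContext: &corev1.SecurityContext{
			ReadOnlyRootFilesystem: &readOnly,
		},
	}
}

func main() { _ = readOnlyContainer() }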
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:51:51.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 30 13:51:58.015: INFO: 0 pods remaining
Dec 30 13:51:58.016: INFO: 0 pods has nil DeletionTimestamp
Dec 30 13:51:58.016: INFO: 
STEP: Gathering metrics
W1230 13:51:58.794403       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 30 13:51:58.794: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:51:58.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2978" for this suite.
Dec 30 13:52:08.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:52:08.958: INFO: namespace gc-2978 deletion completed in 10.158883388s

• [SLOW TEST:17.635 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
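"Keep the rc around until all its pods are deleted" is foreground deletion: the RC is marked with a foregroundDeletion finalizer and only disappears once the GC has removed the pods it owns. A sketch (client-go v0.18+; the RC name is hypothetical):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Foreground: the RC stays visible (with a foregroundDeletion finalizer)
	// until the garbage collector has deleted every pod it owns.
	policy := metav1.DeletePropagationForeground
	err = cs.CoreV1().ReplicationControllers("gc-2978").Delete(context.TODO(),
		"simpletest.rc", // hypothetical RC name
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}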
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:52:08.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 30 13:52:09.048: INFO: Waiting up to 5m0s for pod "pod-d3570e3f-07ab-4833-becf-9f927ecf5c87" in namespace "emptydir-6653" to be "success or failure"
Dec 30 13:52:09.166: INFO: Pod "pod-d3570e3f-07ab-4833-becf-9f927ecf5c87": Phase="Pending", Reason="", readiness=false. Elapsed: 117.469274ms
Dec 30 13:52:11.174: INFO: Pod "pod-d3570e3f-07ab-4833-becf-9f927ecf5c87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125601524s
Dec 30 13:52:13.179: INFO: Pod "pod-d3570e3f-07ab-4833-becf-9f927ecf5c87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131325111s
Dec 30 13:52:15.196: INFO: Pod "pod-d3570e3f-07ab-4833-becf-9f927ecf5c87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147804733s
Dec 30 13:52:17.204: INFO: Pod "pod-d3570e3f-07ab-4833-becf-9f927ecf5c87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156096837s
Dec 30 13:52:19.212: INFO: Pod "pod-d3570e3f-07ab-4833-becf-9f927ecf5c87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.163550025s
STEP: Saw pod success
Dec 30 13:52:19.212: INFO: Pod "pod-d3570e3f-07ab-4833-becf-9f927ecf5c87" satisfied condition "success or failure"
Dec 30 13:52:19.216: INFO: Trying to get logs from node iruya-node pod pod-d3570e3f-07ab-4833-becf-9f927ecf5c87 container test-container: 
STEP: delete the pod
Dec 30 13:52:19.480: INFO: Waiting for pod pod-d3570e3f-07ab-4833-becf-9f927ecf5c87 to disappear
Dec 30 13:52:19.487: INFO: Pod pod-d3570e3f-07ab-4833-becf-9f927ecf5c87 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:52:19.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6653" for this suite.
Dec 30 13:52:25.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:52:25.858: INFO: namespace emptydir-6653 deletion completed in 6.363420052s

• [SLOW TEST:16.900 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
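The (non-root,0666,default) case: an emptyDir on the default (node-disk) medium, written as a non-root user with mode 0666. A sketch of the relevant pod pieces (uid, image, and paths are hypothetical):

package main

import corev1 "k8s.io/api/core/v1"

func emptyDirPodSpec() corev1.PodSpec {
	uid := int64(1001) // non-root user
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			// An empty Medium selects the default (node filesystem) medium.
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
		Containers: []corev1.Container{{
			Name:            "test-container",
			Image:           "busybox",
			Command:         []string{"sh", "-c", "umask 0; touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"},
			SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "test-volume",
				MountPath: "/test-volume",
			}},
		}},
	}
}

func main() { _ = emptyDirPodSpec() }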
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:52:25.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:53:25.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9631" for this suite.
Dec 30 13:53:48.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:53:48.200: INFO: namespace container-probe-9631 deletion completed in 22.20038691s

• [SLOW TEST:82.342 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
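A sketch of the pod shape behind the readiness-probe case above, again with illustrative names and image. Note the field-name caveat in the comment: this is an assumption tied to the API version, not something the log states.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "probe-test",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				ReadinessProbe: &corev1.Probe{
					// Field is named Handler in the k8s.io/api releases contemporary
					// with this v1.15-era suite; newer releases renamed it ProbeHandler.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	// A failing readiness probe keeps the pod out of service endpoints but,
	// unlike a liveness probe, never restarts the container - exactly the
	// "never be ready and never restart" assertion above.
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}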
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:53:48.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Dec 30 13:53:48.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 30 13:53:50.711: INFO: stderr: ""
Dec 30 13:53:50.711: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:53:50.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5682" for this suite.
Dec 30 13:53:56.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:53:56.827: INFO: namespace kubectl-5682 deletion completed in 6.110495723s

• [SLOW TEST:8.626 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
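The cluster-info check above boils down to running the CLI and looking for the control-plane entry in its output. A minimal Go sketch of that check, assuming kubectl is on PATH and the kubeconfig is already set in the environment:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "cluster-info").CombinedOutput()
	if err != nil {
		fmt.Println("cluster-info failed:", err)
		return
	}
	// The conformance check is essentially a substring match on the
	// "Kubernetes master is running at ..." line (ANSI color codes included
	// in the raw stdout captured above).
	fmt.Println(strings.Contains(string(out), "is running at"))
}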
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:53:56.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 30 13:53:56.918: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:54:14.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2813" for this suite.
Dec 30 13:54:36.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:54:36.325: INFO: namespace init-container-2813 deletion completed in 22.243469692s

• [SLOW TEST:39.497 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
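For the init-container case above, a sketch of the pod shape involved, with illustrative names and image. The point the test exercises is ordering: init containers run one at a time, each to completion, before any app container starts, and with RestartPolicy Always the pod then keeps running.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{GenerateName: "init-container-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Run in order, each to completion, before Containers start.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name: "run1", Image: "busybox", Command: []string{"sleep", "3600"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}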
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:54:36.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 30 13:54:36.417: INFO: Waiting up to 5m0s for pod "downwardapi-volume-da1b04fa-50db-4f87-b22f-ef1e60c180be" in namespace "downward-api-1373" to be "success or failure"
Dec 30 13:54:36.423: INFO: Pod "downwardapi-volume-da1b04fa-50db-4f87-b22f-ef1e60c180be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020023ms
Dec 30 13:54:38.657: INFO: Pod "downwardapi-volume-da1b04fa-50db-4f87-b22f-ef1e60c180be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240004613s
Dec 30 13:54:40.698: INFO: Pod "downwardapi-volume-da1b04fa-50db-4f87-b22f-ef1e60c180be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.28079171s
Dec 30 13:54:42.712: INFO: Pod "downwardapi-volume-da1b04fa-50db-4f87-b22f-ef1e60c180be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.294840588s
Dec 30 13:54:44.720: INFO: Pod "downwardapi-volume-da1b04fa-50db-4f87-b22f-ef1e60c180be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.302262585s
STEP: Saw pod success
Dec 30 13:54:44.720: INFO: Pod "downwardapi-volume-da1b04fa-50db-4f87-b22f-ef1e60c180be" satisfied condition "success or failure"
Dec 30 13:54:44.724: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-da1b04fa-50db-4f87-b22f-ef1e60c180be container client-container: 
STEP: delete the pod
Dec 30 13:54:44.836: INFO: Waiting for pod downwardapi-volume-da1b04fa-50db-4f87-b22f-ef1e60c180be to disappear
Dec 30 13:54:44.889: INFO: Pod downwardapi-volume-da1b04fa-50db-4f87-b22f-ef1e60c180be no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:54:44.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1373" for this suite.
Dec 30 13:54:51.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:54:51.079: INFO: namespace downward-api-1373 deletion completed in 6.182917568s

• [SLOW TEST:14.754 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
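A sketch of the downward API volume used by the cpu-request case above. The request value, divisor, and paths are illustrative; the mechanism is the real one: a resourceFieldRef projects the container's own requests.cpu into a file the container can read back.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{DownwardAPI: &corev1.DownwardAPIVolumeSource{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "cpu_request",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "requests.cpu",
							Divisor:       resource.MustParse("1m"), // report in millicores
						},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"}, // prints 250 for the request below
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}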
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:54:51.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-7687
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7687 to expose endpoints map[]
Dec 30 13:54:51.397: INFO: successfully validated that service multi-endpoint-test in namespace services-7687 exposes endpoints map[] (28.020431ms elapsed)
STEP: Creating pod pod1 in namespace services-7687
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7687 to expose endpoints map[pod1:[100]]
Dec 30 13:54:55.500: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.087037324s elapsed, will retry)
Dec 30 13:54:59.565: INFO: successfully validated that service multi-endpoint-test in namespace services-7687 exposes endpoints map[pod1:[100]] (8.152173488s elapsed)
STEP: Creating pod pod2 in namespace services-7687
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7687 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 30 13:55:03.843: INFO: Unexpected endpoints: found map[ca5cdcf7-8489-4ddc-8212-0e0f36d5d5c6:[100]], expected map[pod1:[100] pod2:[101]] (4.262536378s elapsed, will retry)
Dec 30 13:55:07.647: INFO: successfully validated that service multi-endpoint-test in namespace services-7687 exposes endpoints map[pod1:[100] pod2:[101]] (8.066279158s elapsed)
STEP: Deleting pod pod1 in namespace services-7687
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7687 to expose endpoints map[pod2:[101]]
Dec 30 13:55:08.714: INFO: successfully validated that service multi-endpoint-test in namespace services-7687 exposes endpoints map[pod2:[101]] (1.05558139s elapsed)
STEP: Deleting pod pod2 in namespace services-7687
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7687 to expose endpoints map[]
Dec 30 13:55:08.832: INFO: successfully validated that service multi-endpoint-test in namespace services-7687 exposes endpoints map[] (75.171829ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:55:08.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7687" for this suite.
Dec 30 13:55:30.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:55:31.056: INFO: namespace services-7687 deletion completed in 22.146865991s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:39.976 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
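The multiport-endpoints walkthrough above validates the endpoints map as pods come and go. A sketch of the service shape involved; the selector label is a hypothetical stand-in, and the two target ports mirror the 100/101 seen in the validated maps:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multiport"}, // hypothetical label; matching ready pods become endpoints
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	// The endpoints controller publishes each ready pod's IP once per matching
	// target port - hence the map[pod1:[100] pod2:[101]] shape checked above.
	b, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(b))
}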
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:55:31.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 30 13:58:36.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 13:58:36.833: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 13:58:38.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 13:58:38.990: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 13:58:40.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 13:58:40.853: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 13:58:42.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 13:58:42.840: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 13:58:44.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 13:58:44.845: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 13:58:46.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 13:58:46.844: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 13:58:48.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 13:58:48.840: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 13:58:50.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 13:58:50.851: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 13:58:52.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 13:58:52.847: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 13:58:54.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 13:58:54.845: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 13:58:56.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 13:58:56.842: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 13:58:58.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 13:58:58.838: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 30 13:59:00.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 30 13:59:00.841: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:59:00.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9375" for this suite.
Dec 30 13:59:22.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:59:23.029: INFO: namespace container-lifecycle-hook-9375 deletion completed in 22.178642989s

• [SLOW TEST:231.972 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
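A sketch of the poststart-hook pod above, with illustrative image and hook command; the field-name comment is a version-dependent assumption, not something stated in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					// Handler is the field name in k8s.io/api releases of this
					// suite's era; newer releases use LifecycleHandler.
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "echo started > /tmp/poststart"}},
					},
				},
			}},
		},
	}
	// PostStart runs inside the container right after it is created; if the
	// hook fails, the kubelet kills the container.
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}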
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:59:23.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 30 13:59:23.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8112ae0-b352-45de-8bf8-989d27b30908" in namespace "projected-9560" to be "success or failure"
Dec 30 13:59:23.280: INFO: Pod "downwardapi-volume-e8112ae0-b352-45de-8bf8-989d27b30908": Phase="Pending", Reason="", readiness=false. Elapsed: 61.589383ms
Dec 30 13:59:25.288: INFO: Pod "downwardapi-volume-e8112ae0-b352-45de-8bf8-989d27b30908": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069302403s
Dec 30 13:59:27.303: INFO: Pod "downwardapi-volume-e8112ae0-b352-45de-8bf8-989d27b30908": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083985657s
Dec 30 13:59:29.319: INFO: Pod "downwardapi-volume-e8112ae0-b352-45de-8bf8-989d27b30908": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100144401s
Dec 30 13:59:31.328: INFO: Pod "downwardapi-volume-e8112ae0-b352-45de-8bf8-989d27b30908": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109261648s
Dec 30 13:59:33.338: INFO: Pod "downwardapi-volume-e8112ae0-b352-45de-8bf8-989d27b30908": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.119159907s
STEP: Saw pod success
Dec 30 13:59:33.338: INFO: Pod "downwardapi-volume-e8112ae0-b352-45de-8bf8-989d27b30908" satisfied condition "success or failure"
Dec 30 13:59:33.346: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e8112ae0-b352-45de-8bf8-989d27b30908 container client-container: 
STEP: delete the pod
Dec 30 13:59:33.488: INFO: Waiting for pod downwardapi-volume-e8112ae0-b352-45de-8bf8-989d27b30908 to disappear
Dec 30 13:59:33.506: INFO: Pod downwardapi-volume-e8112ae0-b352-45de-8bf8-989d27b30908 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:59:33.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9560" for this suite.
Dec 30 13:59:39.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:59:39.723: INFO: namespace projected-9560 deletion completed in 6.208487675s

• [SLOW TEST:16.694 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
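The projected variant above carries the same downward API items as a plain downwardAPI volume, just wrapped in a projected volume's Sources list, this time surfacing limits.cpu. A sketch with illustrative values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						DownwardAPI: &corev1.DownwardAPIProjection{
							Items: []corev1.DownwardAPIVolumeFile{{
								Path: "cpu_limit",
								ResourceFieldRef: &corev1.ResourceFieldSelector{
									ContainerName: "client-container",
									Resource:      "limits.cpu",
									Divisor:       resource.MustParse("1m"),
								},
							}},
						},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}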
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:59:39.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-c995fd42-c6dc-4024-aea1-7ac6db854959
STEP: Creating a pod to test consume secrets
Dec 30 13:59:39.874: INFO: Waiting up to 5m0s for pod "pod-secrets-e55e60e4-6bde-49c5-9d63-ffd694302b29" in namespace "secrets-4052" to be "success or failure"
Dec 30 13:59:39.885: INFO: Pod "pod-secrets-e55e60e4-6bde-49c5-9d63-ffd694302b29": Phase="Pending", Reason="", readiness=false. Elapsed: 10.332704ms
Dec 30 13:59:41.904: INFO: Pod "pod-secrets-e55e60e4-6bde-49c5-9d63-ffd694302b29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029307241s
Dec 30 13:59:43.922: INFO: Pod "pod-secrets-e55e60e4-6bde-49c5-9d63-ffd694302b29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04787908s
Dec 30 13:59:45.933: INFO: Pod "pod-secrets-e55e60e4-6bde-49c5-9d63-ffd694302b29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058440283s
Dec 30 13:59:47.945: INFO: Pod "pod-secrets-e55e60e4-6bde-49c5-9d63-ffd694302b29": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070641286s
Dec 30 13:59:49.954: INFO: Pod "pod-secrets-e55e60e4-6bde-49c5-9d63-ffd694302b29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079765786s
STEP: Saw pod success
Dec 30 13:59:49.954: INFO: Pod "pod-secrets-e55e60e4-6bde-49c5-9d63-ffd694302b29" satisfied condition "success or failure"
Dec 30 13:59:49.963: INFO: Trying to get logs from node iruya-node pod pod-secrets-e55e60e4-6bde-49c5-9d63-ffd694302b29 container secret-volume-test: 
STEP: delete the pod
Dec 30 13:59:50.154: INFO: Waiting for pod pod-secrets-e55e60e4-6bde-49c5-9d63-ffd694302b29 to disappear
Dec 30 13:59:50.182: INFO: Pod pod-secrets-e55e60e4-6bde-49c5-9d63-ffd694302b29 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 13:59:50.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4052" for this suite.
Dec 30 13:59:56.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 13:59:56.458: INFO: namespace secrets-4052 deletion completed in 6.270925955s

• [SLOW TEST:16.734 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
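"Mappings and Item Mode" in the secrets case above means the volume's Items list remaps a secret key to a new relative path and sets a per-file mode. A sketch with hypothetical secret name, key, and mode (the suite generates UUID-suffixed names):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // the per-item mode the test asserts on the mounted file
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-secrets-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
					SecretName: "secret-test-map", // hypothetical name
					Items: []corev1.KeyToPath{{
						Key:  "data-1",
						Path: "new-path-data-1", // the "mapping": key remapped to this relative path
						Mode: &mode,
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1 && stat -c '%a' /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}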
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 13:59:56.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-ecc6ce2d-b4f7-4117-92d8-99311c87c350
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:00:06.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5490" for this suite.
Dec 30 14:00:28.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:00:28.859: INFO: namespace configmap-5490 deletion completed in 22.171443068s

• [SLOW TEST:32.400 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
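The binary-data case above exercises the ConfigMap BinaryData field, which carries arbitrary (non-UTF-8) bytes alongside the string Data map. A sketch with illustrative name and contents:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ConfigMap"},
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"}, // hypothetical name
		Data:       map[string]string{"data": "value-1"},
		// Each BinaryData key becomes a file next to the Data keys when the
		// ConfigMap is mounted as a volume; the test reads both back.
		BinaryData: map[string][]byte{"dump": {0xde, 0xca, 0xfe, 0x00}},
	}
	b, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(b)) // BinaryData values are base64-encoded on the wire
}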
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:00:28.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-3ae9dbee-39a7-469f-addf-08dcb044ea21
STEP: Creating a pod to test consume secrets
Dec 30 14:00:29.021: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5554be04-1a32-4dc3-958c-b18025565420" in namespace "projected-6148" to be "success or failure"
Dec 30 14:00:29.025: INFO: Pod "pod-projected-secrets-5554be04-1a32-4dc3-958c-b18025565420": Phase="Pending", Reason="", readiness=false. Elapsed: 3.734889ms
Dec 30 14:00:31.062: INFO: Pod "pod-projected-secrets-5554be04-1a32-4dc3-958c-b18025565420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040614038s
Dec 30 14:00:33.066: INFO: Pod "pod-projected-secrets-5554be04-1a32-4dc3-958c-b18025565420": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044195109s
Dec 30 14:00:35.106: INFO: Pod "pod-projected-secrets-5554be04-1a32-4dc3-958c-b18025565420": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084990374s
Dec 30 14:00:37.123: INFO: Pod "pod-projected-secrets-5554be04-1a32-4dc3-958c-b18025565420": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101495646s
Dec 30 14:00:39.159: INFO: Pod "pod-projected-secrets-5554be04-1a32-4dc3-958c-b18025565420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.13760683s
STEP: Saw pod success
Dec 30 14:00:39.159: INFO: Pod "pod-projected-secrets-5554be04-1a32-4dc3-958c-b18025565420" satisfied condition "success or failure"
Dec 30 14:00:39.163: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-5554be04-1a32-4dc3-958c-b18025565420 container projected-secret-volume-test: 
STEP: delete the pod
Dec 30 14:00:39.221: INFO: Waiting for pod pod-projected-secrets-5554be04-1a32-4dc3-958c-b18025565420 to disappear
Dec 30 14:00:39.253: INFO: Pod pod-projected-secrets-5554be04-1a32-4dc3-958c-b18025565420 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:00:39.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6148" for this suite.
Dec 30 14:00:45.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:00:45.502: INFO: namespace projected-6148 deletion completed in 6.174388877s

• [SLOW TEST:16.643 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
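The non-root/defaultMode/fsGroup combination above works because fsGroup makes the kubelet set group ownership on the volume contents, so a 0440 default mode stays readable by the non-root user. A sketch with hypothetical UID, GID, and secret name:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid, gid := int64(1000), int64(1001)
	defaultMode := int32(0440)
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-projected-secrets-"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &gid},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
					DefaultMode: &defaultMode,
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"}, // hypothetical
						},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}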
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:00:45.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-7nfl
STEP: Creating a pod to test atomic-volume-subpath
Dec 30 14:00:45.741: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7nfl" in namespace "subpath-1746" to be "success or failure"
Dec 30 14:00:45.765: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Pending", Reason="", readiness=false. Elapsed: 23.774488ms
Dec 30 14:00:47.772: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031265337s
Dec 30 14:00:49.782: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040875309s
Dec 30 14:00:51.791: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05043866s
Dec 30 14:00:53.803: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061716276s
Dec 30 14:00:55.815: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Running", Reason="", readiness=true. Elapsed: 10.074245405s
Dec 30 14:00:57.830: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Running", Reason="", readiness=true. Elapsed: 12.088795958s
Dec 30 14:00:59.846: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Running", Reason="", readiness=true. Elapsed: 14.105045539s
Dec 30 14:01:01.860: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Running", Reason="", readiness=true. Elapsed: 16.118710515s
Dec 30 14:01:03.874: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Running", Reason="", readiness=true. Elapsed: 18.132671076s
Dec 30 14:01:05.894: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Running", Reason="", readiness=true. Elapsed: 20.152667845s
Dec 30 14:01:07.909: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Running", Reason="", readiness=true. Elapsed: 22.167885008s
Dec 30 14:01:09.917: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Running", Reason="", readiness=true. Elapsed: 24.176586656s
Dec 30 14:01:11.927: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Running", Reason="", readiness=true. Elapsed: 26.18606058s
Dec 30 14:01:13.934: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Running", Reason="", readiness=true. Elapsed: 28.193180261s
Dec 30 14:01:15.944: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Running", Reason="", readiness=true. Elapsed: 30.203077434s
Dec 30 14:01:17.952: INFO: Pod "pod-subpath-test-projected-7nfl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.211350125s
STEP: Saw pod success
Dec 30 14:01:17.952: INFO: Pod "pod-subpath-test-projected-7nfl" satisfied condition "success or failure"
Dec 30 14:01:17.956: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-7nfl container test-container-subpath-projected-7nfl: 
STEP: delete the pod
Dec 30 14:01:18.170: INFO: Waiting for pod pod-subpath-test-projected-7nfl to disappear
Dec 30 14:01:18.183: INFO: Pod pod-subpath-test-projected-7nfl no longer exists
STEP: Deleting pod pod-subpath-test-projected-7nfl
Dec 30 14:01:18.183: INFO: Deleting pod "pod-subpath-test-projected-7nfl" in namespace "subpath-1746"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:01:18.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1746" for this suite.
Dec 30 14:01:24.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:01:24.350: INFO: namespace subpath-1746 deletion completed in 6.153896143s

• [SLOW TEST:38.848 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
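The subpath case above mounts a single entry of a projected volume rather than its root; projected volumes use the atomic-writer symlink scheme underneath, which is what this conformance case exercises. A sketch with hypothetical ConfigMap and key names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-subpath-test-projected-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-data"}, // hypothetical
						},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "for i in $(seq 1 10); do cat /probe-file; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-vol",
					MountPath: "/probe-file",
					SubPath:   "probe-key", // hypothetical key inside the ConfigMap
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}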
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:01:24.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 30 14:01:24.486: INFO: Waiting up to 5m0s for pod "pod-dbb3922e-af40-4bb3-8118-54ae59a5ba00" in namespace "emptydir-6702" to be "success or failure"
Dec 30 14:01:24.493: INFO: Pod "pod-dbb3922e-af40-4bb3-8118-54ae59a5ba00": Phase="Pending", Reason="", readiness=false. Elapsed: 7.377599ms
Dec 30 14:01:26.508: INFO: Pod "pod-dbb3922e-af40-4bb3-8118-54ae59a5ba00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02160431s
Dec 30 14:01:28.518: INFO: Pod "pod-dbb3922e-af40-4bb3-8118-54ae59a5ba00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032499722s
Dec 30 14:01:30.546: INFO: Pod "pod-dbb3922e-af40-4bb3-8118-54ae59a5ba00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060504186s
Dec 30 14:01:32.558: INFO: Pod "pod-dbb3922e-af40-4bb3-8118-54ae59a5ba00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072076474s
STEP: Saw pod success
Dec 30 14:01:32.558: INFO: Pod "pod-dbb3922e-af40-4bb3-8118-54ae59a5ba00" satisfied condition "success or failure"
Dec 30 14:01:32.563: INFO: Trying to get logs from node iruya-node pod pod-dbb3922e-af40-4bb3-8118-54ae59a5ba00 container test-container: 
STEP: delete the pod
Dec 30 14:01:32.655: INFO: Waiting for pod pod-dbb3922e-af40-4bb3-8118-54ae59a5ba00 to disappear
Dec 30 14:01:32.671: INFO: Pod pod-dbb3922e-af40-4bb3-8118-54ae59a5ba00 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:01:32.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6702" for this suite.
Dec 30 14:01:38.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:01:38.922: INFO: namespace emptydir-6702 deletion completed in 6.201699746s

• [SLOW TEST:14.571 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:01:38.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 30 14:01:39.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-806ca59a-5b66-49cc-b8d3-29a0e4319335" in namespace "projected-8449" to be "success or failure"
Dec 30 14:01:39.102: INFO: Pod "downwardapi-volume-806ca59a-5b66-49cc-b8d3-29a0e4319335": Phase="Pending", Reason="", readiness=false. Elapsed: 69.695678ms
Dec 30 14:01:41.109: INFO: Pod "downwardapi-volume-806ca59a-5b66-49cc-b8d3-29a0e4319335": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0769896s
Dec 30 14:01:43.125: INFO: Pod "downwardapi-volume-806ca59a-5b66-49cc-b8d3-29a0e4319335": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092936958s
Dec 30 14:01:45.131: INFO: Pod "downwardapi-volume-806ca59a-5b66-49cc-b8d3-29a0e4319335": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099426515s
Dec 30 14:01:47.140: INFO: Pod "downwardapi-volume-806ca59a-5b66-49cc-b8d3-29a0e4319335": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108143477s
Dec 30 14:01:49.149: INFO: Pod "downwardapi-volume-806ca59a-5b66-49cc-b8d3-29a0e4319335": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117442003s
STEP: Saw pod success
Dec 30 14:01:49.149: INFO: Pod "downwardapi-volume-806ca59a-5b66-49cc-b8d3-29a0e4319335" satisfied condition "success or failure"
Dec 30 14:01:49.152: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-806ca59a-5b66-49cc-b8d3-29a0e4319335 container client-container: 
STEP: delete the pod
Dec 30 14:01:49.218: INFO: Waiting for pod downwardapi-volume-806ca59a-5b66-49cc-b8d3-29a0e4319335 to disappear
Dec 30 14:01:49.224: INFO: Pod downwardapi-volume-806ca59a-5b66-49cc-b8d3-29a0e4319335 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:01:49.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8449" for this suite.
Dec 30 14:01:55.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:01:55.455: INFO: namespace projected-8449 deletion completed in 6.225724105s

• [SLOW TEST:16.533 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:01:55.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1230 14:02:37.017424       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 30 14:02:37.017: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:02:37.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9280" for this suite.
Dec 30 14:02:45.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:02:46.082: INFO: namespace gc-9280 deletion completed in 9.055534019s

• [SLOW TEST:50.622 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
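The "delete options say so" part of the garbage-collector case above is the PropagationPolicy on the delete call. A minimal sketch of the options involved; how they are passed to the client's Delete call varies by client-go version, so only the options object is shown:

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Orphan tells the garbage collector to strip ownerReferences from the
	// RC's pods instead of cascading the delete, so the pods survive - which
	// is what the 30-second observation window above confirms.
	policy := metav1.DeletePropagationOrphan
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	// DeletePropagationForeground and DeletePropagationBackground are the
	// cascading alternatives.
	b, _ := json.Marshal(opts)
	fmt.Println(string(b))
}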
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:02:46.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 14:02:47.066: INFO: Creating deployment "nginx-deployment"
Dec 30 14:02:47.281: INFO: Waiting for observed generation 1
Dec 30 14:02:52.100: INFO: Waiting for all required pods to come up
Dec 30 14:02:53.551: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 30 14:03:29.896: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 30 14:03:29.906: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 30 14:03:29.916: INFO: Updating deployment nginx-deployment
Dec 30 14:03:29.917: INFO: Waiting for observed generation 2
Dec 30 14:03:33.510: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 30 14:03:33.562: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 30 14:03:33.699: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 30 14:03:34.875: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 30 14:03:34.875: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 30 14:03:34.886: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 30 14:03:34.899: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 30 14:03:34.899: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 30 14:03:34.909: INFO: Updating deployment nginx-deployment
Dec 30 14:03:34.909: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 30 14:03:35.044: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 30 14:03:35.318: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
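A note on the two values just verified, assuming the straight proportional split (the controller's exact leftover-rounding can shift a replica either way): with 30 desired replicas and the deployment's maxSurge of 3 (visible in the object dump below as MaxSurge:2,3 and the max-replicas: 33 annotation), up to 33 pods may exist mid-rollout. Split in proportion to the pre-scale ReplicaSet sizes of 8 and 5, that is 33 * 8/13 ≈ 20.3, rounded to 20, and 33 * 5/13 ≈ 12.7, rounded to 13, matching the verified .spec.replicas values of 20 and 13.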
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 30 14:03:38.387: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-7628,SelfLink:/apis/apps/v1/namespaces/deployment-7628/deployments/nginx-deployment,UID:401d4cbb-2868-486d-8214-aad6b9d1f6e6,ResourceVersion:18650102,Generation:3,CreationTimestamp:2019-12-30 14:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2019-12-30 14:03:35 +0000 UTC 2019-12-30 14:03:35 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-30 14:03:37 +0000 UTC 2019-12-30 14:02:47 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 30 14:03:39.924: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-7628,SelfLink:/apis/apps/v1/namespaces/deployment-7628/replicasets/nginx-deployment-55fb7cb77f,UID:e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb,ResourceVersion:18650097,Generation:3,CreationTimestamp:2019-12-30 14:03:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 401d4cbb-2868-486d-8214-aad6b9d1f6e6 0xc00309d717 0xc00309d718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 30 14:03:39.925: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 30 14:03:39.926: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-7628,SelfLink:/apis/apps/v1/namespaces/deployment-7628/replicasets/nginx-deployment-7b8c6f4498,UID:837bffdf-54d4-46e5-a84a-e3b60d349e7e,ResourceVersion:18650089,Generation:3,CreationTimestamp:2019-12-30 14:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 401d4cbb-2868-486d-8214-aad6b9d1f6e6 0xc00309d7e7 0xc00309d7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 30 14:03:40.851: INFO: Pod "nginx-deployment-55fb7cb77f-5jkz5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5jkz5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-55fb7cb77f-5jkz5,UID:aefc19c6-0288-41bc-844d-530290b260ae,ResourceVersion:18650067,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb 0xc002c60847 0xc002c60848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c608b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c608d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.852: INFO: Pod "nginx-deployment-55fb7cb77f-5v84b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5v84b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-55fb7cb77f-5v84b,UID:03045566-7784-4572-850c-8584692afcd6,ResourceVersion:18650083,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb 0xc002c60957 0xc002c60958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c609c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c609e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.852: INFO: Pod "nginx-deployment-55fb7cb77f-7qgbx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7qgbx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-55fb7cb77f-7qgbx,UID:b87df1d8-a993-4d88-9d31-cb61e0c3d4aa,ResourceVersion:18650091,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb 0xc002c60a67 0xc002c60a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c60ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c60af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.853: INFO: Pod "nginx-deployment-55fb7cb77f-btl27" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-btl27,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-55fb7cb77f-btl27,UID:751b664a-d547-4a5b-a553-2a7e5935cd2a,ResourceVersion:18650026,Generation:0,CreationTimestamp:2019-12-30 14:03:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb 0xc002c60b77 0xc002c60b78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c60be0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c60c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-30 14:03:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.853: INFO: Pod "nginx-deployment-55fb7cb77f-bz5pn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bz5pn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-55fb7cb77f-bz5pn,UID:24321d39-78b1-4338-895d-ea337a1276a4,ResourceVersion:18650098,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb 0xc002c60cd7 0xc002c60cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c60d40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c60d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-30 14:03:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.854: INFO: Pod "nginx-deployment-55fb7cb77f-c65ct" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c65ct,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-55fb7cb77f-c65ct,UID:8e7cda8d-795f-471e-ae2f-f1d31ae8cbec,ResourceVersion:18650077,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb 0xc002c60e37 0xc002c60e38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c60eb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c60ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.854: INFO: Pod "nginx-deployment-55fb7cb77f-ds795" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ds795,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-55fb7cb77f-ds795,UID:d077aed4-ad90-4498-9f36-e639202ab31a,ResourceVersion:18650079,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb 0xc002c60f57 0xc002c60f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c60fd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c60ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.855: INFO: Pod "nginx-deployment-55fb7cb77f-lvsnf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lvsnf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-55fb7cb77f-lvsnf,UID:0a38c7dd-2b7e-4040-936f-1abf8960ac75,ResourceVersion:18650034,Generation:0,CreationTimestamp:2019-12-30 14:03:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb 0xc002c61077 0xc002c61078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c610f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c61110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-30 14:03:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.855: INFO: Pod "nginx-deployment-55fb7cb77f-rmsgd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rmsgd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-55fb7cb77f-rmsgd,UID:d9f4fc4e-fd05-4ca7-8594-5c4d60598f57,ResourceVersion:18650078,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb 0xc002c611e7 0xc002c611e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c61260} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c61280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.856: INFO: Pod "nginx-deployment-55fb7cb77f-vcxxn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vcxxn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-55fb7cb77f-vcxxn,UID:078191e9-426d-4001-8382-809ad0525f4f,ResourceVersion:18650006,Generation:0,CreationTimestamp:2019-12-30 14:03:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb 0xc002c61307 0xc002c61308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c61390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c613b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:29 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-30 14:03:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.856: INFO: Pod "nginx-deployment-55fb7cb77f-w62mv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w62mv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-55fb7cb77f-w62mv,UID:267e582b-e52c-4f0f-b64b-44d33e446e45,ResourceVersion:18650027,Generation:0,CreationTimestamp:2019-12-30 14:03:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb 0xc002c61487 0xc002c61488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c61500} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c61520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-30 14:03:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.856: INFO: Pod "nginx-deployment-55fb7cb77f-xfrrc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xfrrc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-55fb7cb77f-xfrrc,UID:9247183f-7cf3-4251-8780-0d6c62ab5d09,ResourceVersion:18650010,Generation:0,CreationTimestamp:2019-12-30 14:03:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb 0xc002c615f7 0xc002c615f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c61660} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c61680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-30 14:03:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.857: INFO: Pod "nginx-deployment-55fb7cb77f-xs2jg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xs2jg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-55fb7cb77f-xs2jg,UID:333b2c53-1705-4a79-99d4-1d29ee8a263e,ResourceVersion:18650063,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e2de9d4b-f9c7-45c7-8b5b-294b3ffcfbcb 0xc002c61757 0xc002c61758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c617d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c617f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
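All of the unavailable pods above carry pod-template-hash: 55fb7cb77f, the hash of the new (broken) nginx:404 template, while the dumps that follow belong to the old 7b8c6f4498 template; the deployment controller stamps this label on each ReplicaSet and its pods so that each ReplicaSet selects only its own children. A minimal sketch of that grouping in Go, assuming the pod list has already been fetched (e.g. via client-go; no specific e2e-framework helper is implied):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // groupByTemplateHash buckets pods by the pod-template-hash label
    // that the deployment controller adds to every ReplicaSet's pods.
    func groupByTemplateHash(pods []corev1.Pod) map[string][]corev1.Pod {
        groups := make(map[string][]corev1.Pod)
        for _, p := range pods {
            hash := p.Labels["pod-template-hash"]
            groups[hash] = append(groups[hash], p)
        }
        return groups
    }

    func main() {
        var pods []corev1.Pod // in practice: listed from namespace deployment-7628
        for hash, group := range groupByTemplateHash(pods) {
            fmt.Printf("template %s: %d pods\n", hash, len(group))
        }
    }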
Dec 30 14:03:40.857: INFO: Pod "nginx-deployment-7b8c6f4498-295b6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-295b6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-295b6,UID:aeab591a-a3a6-49bf-ace9-ca2e05a7006d,ResourceVersion:18649960,Generation:0,CreationTimestamp:2019-12-30 14:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc002c61877 0xc002c61878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c618f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c61910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-30 14:02:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 14:03:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2301bbbb9bffb912534b6e7d33445e34270b004176db8a554b48813b56e0d3b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.858: INFO: Pod "nginx-deployment-7b8c6f4498-4cv57" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4cv57,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-4cv57,UID:ad34fcef-ec3b-4691-afe9-fb12442fb85b,ResourceVersion:18649973,Generation:0,CreationTimestamp:2019-12-30 14:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc002c619e7 0xc002c619e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c61a60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c61a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2019-12-30 14:02:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 14:03:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://32ad9503ca235edfc54eb5127004a03b321eda7473c9b7850a4b6470be214dae}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.858: INFO: Pod "nginx-deployment-7b8c6f4498-6pqgz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6pqgz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-6pqgz,UID:36b91ac1-54ee-4eec-ab5a-c01656371cdd,ResourceVersion:18650107,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc002c61b67 0xc002c61b68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c61be0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c61c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-30 14:03:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.859: INFO: Pod "nginx-deployment-7b8c6f4498-8ncgl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8ncgl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-8ncgl,UID:9d1532ce-7813-4ab2-92f5-55ed2159626a,ResourceVersion:18650086,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc002c61cc7 0xc002c61cc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c61d40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c61d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.859: INFO: Pod "nginx-deployment-7b8c6f4498-9bc2w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9bc2w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-9bc2w,UID:fb2dbd57-3324-4b87-ac94-fd0aa7f87693,ResourceVersion:18650085,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc002c61de7 0xc002c61de8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c61e50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c61e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.860: INFO: Pod "nginx-deployment-7b8c6f4498-bdgsz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bdgsz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-bdgsz,UID:5897cab8-a8f8-4ae7-b034-6d7b12c17803,ResourceVersion:18650068,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc002c61ef7 0xc002c61ef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c61f60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002c61f80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.860: INFO: Pod "nginx-deployment-7b8c6f4498-bjrkw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bjrkw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-bjrkw,UID:258c1e67-1075-4232-ac69-98f7500c71a4,ResourceVersion:18650073,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc000414027 0xc000414028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0004141c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000414290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.860: INFO: Pod "nginx-deployment-7b8c6f4498-fb9bh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fb9bh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-fb9bh,UID:3b3810e1-59c2-497a-86d0-eac890e9cd3b,ResourceVersion:18649920,Generation:0,CreationTimestamp:2019-12-30 14:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc000414817 0xc000414818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000414940} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000414b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2019-12-30 14:02:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 14:03:17 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3c4e05f0eb3e5caa40f8018433e65301d92a842b3741903307e52bceb9eba469}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.861: INFO: Pod "nginx-deployment-7b8c6f4498-fpfw8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fpfw8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-fpfw8,UID:35c8f4a0-5ecb-467e-898d-b077138c2732,ResourceVersion:18650109,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc0004154c7 0xc0004154c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000415700} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000415850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-30 14:03:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.861: INFO: Pod "nginx-deployment-7b8c6f4498-frx7l" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-frx7l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-frx7l,UID:23d50132-053d-41e2-98bf-43308b154829,ResourceVersion:18649917,Generation:0,CreationTimestamp:2019-12-30 14:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc000415e37 0xc000415e38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000415f00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000415f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-30 14:02:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 14:03:16 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://791fbd6b6ac715aae16a77db4f6033c9de7b45844200057c617a31045d017ef9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.861: INFO: Pod "nginx-deployment-7b8c6f4498-hbddt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hbddt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-hbddt,UID:f637a82d-da00-4e55-b228-d95ec554728a,ResourceVersion:18649925,Generation:0,CreationTimestamp:2019-12-30 14:02:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc000592117 0xc000592118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000592210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000592230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:51 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2019-12-30 14:02:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 14:03:17 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://60fd3e75e0e98c482492ae7dfc240fa341e2f4dfb316b16dbb515c93b0e56d47}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.862: INFO: Pod "nginx-deployment-7b8c6f4498-hgt2l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hgt2l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-hgt2l,UID:f1d3a8ef-d051-4e9c-9535-7b2d58ef0bcb,ResourceVersion:18650069,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc0005923c7 0xc0005923c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0005924a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000592510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.862: INFO: Pod "nginx-deployment-7b8c6f4498-nrp7d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nrp7d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-nrp7d,UID:6d4cbc01-e014-43a7-8146-c832d3d093ee,ResourceVersion:18650096,Generation:0,CreationTimestamp:2019-12-30 14:03:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc0005928b7 0xc0005928b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000592bf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000592c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-30 14:03:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.862: INFO: Pod "nginx-deployment-7b8c6f4498-pvm7m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pvm7m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-pvm7m,UID:d2d53f7f-a1d9-4a19-82bd-e9ab6670eba1,ResourceVersion:18650080,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc000593017 0xc000593018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000593310} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000593340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.863: INFO: Pod "nginx-deployment-7b8c6f4498-rnfnr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rnfnr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-rnfnr,UID:af2adcfa-80b1-4e60-b7e5-a9f14f028830,ResourceVersion:18649976,Generation:0,CreationTimestamp:2019-12-30 14:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc000593657 0xc000593658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000622070} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0006220a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-30 14:02:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 14:03:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://522786951da4add9c19864c7f783f152c4712f513eaa779e1810e18cd461279e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.863: INFO: Pod "nginx-deployment-7b8c6f4498-tpflz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tpflz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-tpflz,UID:4a30bf06-c39d-401d-9992-4a3d84234070,ResourceVersion:18650084,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc000623a27 0xc000623a28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000623ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000623b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.864: INFO: Pod "nginx-deployment-7b8c6f4498-twsld" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-twsld,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-twsld,UID:ca5cd1c3-ad93-4489-a797-b3810aeb4ad9,ResourceVersion:18650061,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc000623bb7 0xc000623bb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000623c30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000623c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.865: INFO: Pod "nginx-deployment-7b8c6f4498-z6bk9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z6bk9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-z6bk9,UID:55541245-f63e-4605-87ed-8ec646c74b66,ResourceVersion:18649962,Generation:0,CreationTimestamp:2019-12-30 14:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc000623d67 0xc000623d68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000623f10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000623f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2019-12-30 14:02:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 14:03:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://55afa3c0a83689bc9ab42d9fe95b62aedc33e703bb113af9a2f5718f035a7212}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.865: INFO: Pod "nginx-deployment-7b8c6f4498-zc827" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zc827,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-zc827,UID:3fb2a88f-8221-44ba-a159-33437debb143,ResourceVersion:18649923,Generation:0,CreationTimestamp:2019-12-30 14:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc002ea0117 0xc002ea0118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ea02d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ea02f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:02:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2019-12-30 14:02:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-30 14:03:17 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://00a9b3b2765d761e047f2703fe382dd2a3541babb7f2d9a8e437e0d91f14eb69}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 30 14:03:40.865: INFO: Pod "nginx-deployment-7b8c6f4498-zrpbh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zrpbh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7628,SelfLink:/api/v1/namespaces/deployment-7628/pods/nginx-deployment-7b8c6f4498-zrpbh,UID:0d4f5b62-cf3f-4df5-9847-64d3da7badb5,ResourceVersion:18650087,Generation:0,CreationTimestamp:2019-12-30 14:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 837bffdf-54d4-46e5-a84a-e3b60d349e7e 0xc002ea0507 0xc002ea0508}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7ztgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ztgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7ztgq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ea0680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ea06a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:03:40.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7628" for this suite.
Dec 30 14:05:11.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:05:11.519: INFO: namespace deployment-7628 deletion completed in 1m29.700409208s

• [SLOW TEST:145.433 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
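
The proportional-scaling behavior exercised above: when an in-flight rollout is scaled, the Deployment controller splits the replica delta across the old and new ReplicaSets in proportion to their current sizes, subject to maxSurge/maxUnavailable. A minimal sketch of the kind of Deployment this test drives (name, counts, and strategy values are illustrative, not read from this run):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # illustrative; the run above used a generated namespace
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3               # pods allowed above the desired count during rollout
      maxUnavailable: 2         # pods allowed to be unavailable during rollout
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

Scaling this Deployment while a rollout to a new image is underway grows both ReplicaSets proportionally, rather than sending every added replica to only one of them.
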
SSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:05:11.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 14:05:13.610: INFO: Creating ReplicaSet my-hostname-basic-deb91610-04e0-46c0-a24b-5c64a99fe47b
Dec 30 14:05:14.041: INFO: Pod name my-hostname-basic-deb91610-04e0-46c0-a24b-5c64a99fe47b: Found 0 pods out of 1
Dec 30 14:05:19.204: INFO: Pod name my-hostname-basic-deb91610-04e0-46c0-a24b-5c64a99fe47b: Found 1 pods out of 1
Dec 30 14:05:19.204: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-deb91610-04e0-46c0-a24b-5c64a99fe47b" is running
Dec 30 14:05:35.220: INFO: Pod "my-hostname-basic-deb91610-04e0-46c0-a24b-5c64a99fe47b-v297l" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 14:05:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 14:05:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-deb91610-04e0-46c0-a24b-5c64a99fe47b]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 14:05:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-deb91610-04e0-46c0-a24b-5c64a99fe47b]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-30 14:05:14 +0000 UTC Reason: Message:}])
Dec 30 14:05:35.220: INFO: Trying to dial the pod
Dec 30 14:05:40.260: INFO: Controller my-hostname-basic-deb91610-04e0-46c0-a24b-5c64a99fe47b: Got expected result from replica 1 [my-hostname-basic-deb91610-04e0-46c0-a24b-5c64a99fe47b-v297l]: "my-hostname-basic-deb91610-04e0-46c0-a24b-5c64a99fe47b-v297l", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:05:40.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7627" for this suite.
Dec 30 14:05:46.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:05:46.447: INFO: namespace replicaset-7627 deletion completed in 6.180151586s

• [SLOW TEST:34.927 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
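
The test above creates a one-replica ReplicaSet whose container serves its own hostname over HTTP, then dials the pod and asserts the response equals the pod name. A sketch of such a ReplicaSet, assuming the e2e serve-hostname image (image name, tag, and port are assumptions, not read from this run):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic       # illustrative; the run above appends a generated UUID
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumption: any image that replies with its hostname works
        ports:
        - containerPort: 9376   # assumption: default serve-hostname port
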
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:05:46.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 30 14:05:59.068: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:05:59.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7171" for this suite.
Dec 30 14:06:05.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:06:05.551: INFO: namespace container-runtime-7171 deletion completed in 6.396356134s

• [SLOW TEST:19.102 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
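
The "Expected: &{} to match" line above is the assertion that the termination message is empty: with TerminationMessagePolicy FallbackToLogsOnError, the kubelet substitutes container logs only when the container fails and no message file was written, so a succeeding container that writes nothing leaves the message empty. A minimal pod sketch (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "exit 0"]   # succeeds without writing to the message path
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
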
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:06:05.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Dec 30 14:06:05.789: INFO: Waiting up to 5m0s for pod "var-expansion-c8ef9a31-c6ad-4cb0-b183-9409b4fb48e4" in namespace "var-expansion-5589" to be "success or failure"
Dec 30 14:06:05.827: INFO: Pod "var-expansion-c8ef9a31-c6ad-4cb0-b183-9409b4fb48e4": Phase="Pending", Reason="", readiness=false. Elapsed: 38.210548ms
Dec 30 14:06:07.840: INFO: Pod "var-expansion-c8ef9a31-c6ad-4cb0-b183-9409b4fb48e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051018055s
Dec 30 14:06:09.852: INFO: Pod "var-expansion-c8ef9a31-c6ad-4cb0-b183-9409b4fb48e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063161402s
Dec 30 14:06:11.871: INFO: Pod "var-expansion-c8ef9a31-c6ad-4cb0-b183-9409b4fb48e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081722077s
Dec 30 14:06:13.885: INFO: Pod "var-expansion-c8ef9a31-c6ad-4cb0-b183-9409b4fb48e4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096156566s
Dec 30 14:06:15.901: INFO: Pod "var-expansion-c8ef9a31-c6ad-4cb0-b183-9409b4fb48e4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111323013s
Dec 30 14:06:17.926: INFO: Pod "var-expansion-c8ef9a31-c6ad-4cb0-b183-9409b4fb48e4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.136379881s
Dec 30 14:06:19.947: INFO: Pod "var-expansion-c8ef9a31-c6ad-4cb0-b183-9409b4fb48e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.157966004s
STEP: Saw pod success
Dec 30 14:06:19.947: INFO: Pod "var-expansion-c8ef9a31-c6ad-4cb0-b183-9409b4fb48e4" satisfied condition "success or failure"
Dec 30 14:06:19.952: INFO: Trying to get logs from node iruya-node pod var-expansion-c8ef9a31-c6ad-4cb0-b183-9409b4fb48e4 container dapi-container: 
STEP: delete the pod
Dec 30 14:06:20.009: INFO: Waiting for pod var-expansion-c8ef9a31-c6ad-4cb0-b183-9409b4fb48e4 to disappear
Dec 30 14:06:20.012: INFO: Pod var-expansion-c8ef9a31-c6ad-4cb0-b183-9409b4fb48e4 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:06:20.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5589" for this suite.
Dec 30 14:06:26.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:06:26.159: INFO: namespace var-expansion-5589 deletion completed in 6.1434462s

• [SLOW TEST:20.607 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
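
Env composition as tested above relies on $(VAR) references in later env entries, which are expanded from variables defined earlier in the list. A minimal sketch (names and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: FOO
      value: "foo-value"
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"   # expands to prefix-foo-value-suffix
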
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:06:26.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-138
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 30 14:06:26.274: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 30 14:07:02.450: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-138 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 14:07:02.450: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 14:07:03.072: INFO: Found all expected endpoints: [netserver-0]
Dec 30 14:07:03.080: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-138 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 30 14:07:03.081: INFO: >>> kubeConfig: /root/.kube/config
Dec 30 14:07:03.412: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:07:03.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-138" for this suite.
Dec 30 14:07:27.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:07:27.590: INFO: namespace pod-network-test-138 deletion completed in 24.161628418s

• [SLOW TEST:61.429 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
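
The ExecWithOptions lines above show the actual probe: curl from a host-network helper pod to each netserver pod's /hostName endpoint on port 8080. A sketch of the server side of that check, assuming the e2e netexec image (image name and tag are assumptions; any HTTP server answering /hostName on 8080 would do):

apiVersion: v1
kind: Pod
metadata:
  name: netserver-demo   # illustrative
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/netexec:1.1   # assumption: serves GET /hostName on 8080
    ports:
    - containerPort: 8080
      protocol: TCP
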
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:07:27.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:07:34.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6564" for this suite.
Dec 30 14:07:40.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:07:40.209: INFO: namespace namespaces-6564 deletion completed in 6.188484926s
STEP: Destroying namespace "nsdeletetest-3751" for this suite.
Dec 30 14:07:40.211: INFO: Namespace nsdeletetest-3751 was already deleted
STEP: Destroying namespace "nsdeletetest-9442" for this suite.
Dec 30 14:07:46.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:07:46.401: INFO: namespace nsdeletetest-9442 deletion completed in 6.190142638s

• [SLOW TEST:18.811 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
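
Namespace deletion cascades to every namespaced object, which is what this test verifies for Services: create a Service in a throwaway namespace, delete the namespace, recreate it, and confirm the Service did not survive. A sketch of the setup (names are illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-demo    # illustrative
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdeletetest-demo
spec:
  selector:
    app: test
  ports:
  - port: 80
    targetPort: 80

After the namespace is deleted and recreated, it comes back empty; the Service must be created again explicitly.
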
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:07:46.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:07:56.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5039" for this suite.
Dec 30 14:08:48.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:08:48.748: INFO: namespace kubelet-test-5039 deletion completed in 52.203257419s

• [SLOW TEST:62.346 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
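
The hostAliases test above relies on the kubelet appending spec.hostAliases entries to the pod's /etc/hosts. A minimal sketch (IP and hostnames are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo   # illustrative
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/hosts"]   # the aliases appear as appended entries
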
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:08:48.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6e84390c-95ab-4ea5-8f60-31feba04ca8f
STEP: Creating a pod to test consume configMaps
Dec 30 14:08:48.970: INFO: Waiting up to 5m0s for pod "pod-configmaps-1308bc0d-7148-4462-ac13-10e5ec942370" in namespace "configmap-2424" to be "success or failure"
Dec 30 14:08:49.078: INFO: Pod "pod-configmaps-1308bc0d-7148-4462-ac13-10e5ec942370": Phase="Pending", Reason="", readiness=false. Elapsed: 107.681428ms
Dec 30 14:08:51.086: INFO: Pod "pod-configmaps-1308bc0d-7148-4462-ac13-10e5ec942370": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115699654s
Dec 30 14:08:53.092: INFO: Pod "pod-configmaps-1308bc0d-7148-4462-ac13-10e5ec942370": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121200375s
Dec 30 14:08:55.099: INFO: Pod "pod-configmaps-1308bc0d-7148-4462-ac13-10e5ec942370": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128775129s
Dec 30 14:08:57.108: INFO: Pod "pod-configmaps-1308bc0d-7148-4462-ac13-10e5ec942370": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137596953s
Dec 30 14:08:59.113: INFO: Pod "pod-configmaps-1308bc0d-7148-4462-ac13-10e5ec942370": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.143016269s
STEP: Saw pod success
Dec 30 14:08:59.113: INFO: Pod "pod-configmaps-1308bc0d-7148-4462-ac13-10e5ec942370" satisfied condition "success or failure"
Dec 30 14:08:59.116: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1308bc0d-7148-4462-ac13-10e5ec942370 container configmap-volume-test: 
STEP: delete the pod
Dec 30 14:08:59.162: INFO: Waiting for pod pod-configmaps-1308bc0d-7148-4462-ac13-10e5ec942370 to disappear
Dec 30 14:08:59.176: INFO: Pod pod-configmaps-1308bc0d-7148-4462-ac13-10e5ec942370 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:08:59.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2424" for this suite.
Dec 30 14:09:05.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:09:05.367: INFO: namespace configmap-2424 deletion completed in 6.186649038s

• [SLOW TEST:16.619 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
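
The multi-volume ConfigMap test mounts one ConfigMap at two paths in the same pod and reads both copies. A sketch (ConfigMap and mount names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes-demo   # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume    # assumption: an existing ConfigMap of this name
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume    # same ConfigMap, second volume
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume-1/* /etc/configmap-volume-2/*"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
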
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:09:05.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Dec 30 14:09:05.469: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 30 14:09:05.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8451'
Dec 30 14:09:08.105: INFO: stderr: ""
Dec 30 14:09:08.105: INFO: stdout: "service/redis-slave created\n"
Dec 30 14:09:08.106: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 30 14:09:08.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8451'
Dec 30 14:09:08.860: INFO: stderr: ""
Dec 30 14:09:08.861: INFO: stdout: "service/redis-master created\n"
Dec 30 14:09:08.862: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 30 14:09:08.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8451'
Dec 30 14:09:09.546: INFO: stderr: ""
Dec 30 14:09:09.546: INFO: stdout: "service/frontend created\n"
Dec 30 14:09:09.547: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 30 14:09:09.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8451'
Dec 30 14:09:10.021: INFO: stderr: ""
Dec 30 14:09:10.021: INFO: stdout: "deployment.apps/frontend created\n"
Dec 30 14:09:10.022: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 30 14:09:10.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8451'
Dec 30 14:09:10.535: INFO: stderr: ""
Dec 30 14:09:10.535: INFO: stdout: "deployment.apps/redis-master created\n"
Dec 30 14:09:10.536: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 30 14:09:10.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8451'
Dec 30 14:09:11.423: INFO: stderr: ""
Dec 30 14:09:11.423: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Dec 30 14:09:11.424: INFO: Waiting for all frontend pods to be Running.
Dec 30 14:09:36.479: INFO: Waiting for frontend to serve content.
Dec 30 14:09:36.679: INFO: Trying to add a new entry to the guestbook.
Dec 30 14:09:36.734: INFO: Verifying that added entry can be retrieved.
Dec 30 14:09:39.736: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Dec 30 14:09:44.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8451'
Dec 30 14:09:45.009: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 14:09:45.009: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 30 14:09:45.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8451'
Dec 30 14:09:45.204: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 14:09:45.204: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 30 14:09:45.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8451'
Dec 30 14:09:45.400: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 14:09:45.400: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 30 14:09:45.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8451'
Dec 30 14:09:45.538: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 14:09:45.538: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 30 14:09:45.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8451'
Dec 30 14:09:45.681: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 14:09:45.682: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 30 14:09:45.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8451'
Dec 30 14:09:45.989: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 14:09:45.989: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:09:45.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8451" for this suite.
Dec 30 14:10:30.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:10:30.171: INFO: namespace kubectl-8451 deletion completed in 44.154249412s

• [SLOW TEST:84.802 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
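
The repeated warning in the cleanup above is kubectl's standard caveat for force deletion. Each of the six per-resource deletes is the same command the suite ran, fed one manifest at a time on stdin; done by hand against a single combined manifest it would look like this (the file name is illustrative):

kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force \
  -f guestbook-all-in-one.yaml --namespace=kubectl-8451
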
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:10:30.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1230 14:10:42.146512       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 30 14:10:42.146: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:10:42.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9636" for this suite.
Dec 30 14:10:58.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:10:58.940: INFO: namespace gc-9636 deletion completed in 16.791495349s

• [SLOW TEST:28.769 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
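
The step "set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well" works by appending a second entry to the pods' metadata.ownerReferences, which is why the garbage collector must leave those pods alone: deleting one owner still leaves a valid one. A sketch of what such a doubly-owned pod's metadata looks like (pod name and UIDs are placeholders):

kubectl get pod simpletest-pod-xyz -o yaml
# metadata:
#   ownerReferences:
#   - apiVersion: v1
#     kind: ReplicationController
#     name: simpletest-rc-to-be-deleted
#     uid: 11111111-1111-1111-1111-111111111111   # placeholder UID
#   - apiVersion: v1
#     kind: ReplicationController
#     name: simpletest-rc-to-stay
#     uid: 22222222-2222-2222-2222-222222222222   # placeholder UID
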
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:10:58.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7561.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7561.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7561.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7561.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7561.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7561.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7561.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7561.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7561.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7561.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7561.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7561.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7561.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 184.164.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.164.184_udp@PTR;check="$$(dig +tcp +noall +answer +search 184.164.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.164.184_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7561.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7561.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7561.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7561.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7561.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7561.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7561.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7561.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7561.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7561.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7561.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7561.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7561.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 184.164.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.164.184_udp@PTR;check="$$(dig +tcp +noall +answer +search 184.164.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.164.184_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 30 14:11:16.912: INFO: Unable to read wheezy_udp@dns-test-service.dns-7561.svc.cluster.local from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.919: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7561.svc.cluster.local from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.925: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7561.svc.cluster.local from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.933: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7561.svc.cluster.local from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.937: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-7561.svc.cluster.local from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.943: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-7561.svc.cluster.local from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.945: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.948: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.951: INFO: Unable to read 10.96.164.184_udp@PTR from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.954: INFO: Unable to read 10.96.164.184_tcp@PTR from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.957: INFO: Unable to read jessie_udp@dns-test-service.dns-7561.svc.cluster.local from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.961: INFO: Unable to read jessie_tcp@dns-test-service.dns-7561.svc.cluster.local from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.964: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7561.svc.cluster.local from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.966: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7561.svc.cluster.local from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.970: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7561.svc.cluster.local from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.973: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7561.svc.cluster.local from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.976: INFO: Unable to read jessie_udp@PodARecord from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.979: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.982: INFO: Unable to read 10.96.164.184_udp@PTR from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.984: INFO: Unable to read 10.96.164.184_tcp@PTR from pod dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac: the server could not find the requested resource (get pods dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac)
Dec 30 14:11:16.984: INFO: Lookups using dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac failed for: [wheezy_udp@dns-test-service.dns-7561.svc.cluster.local wheezy_tcp@dns-test-service.dns-7561.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7561.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7561.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-7561.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-7561.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.96.164.184_udp@PTR 10.96.164.184_tcp@PTR jessie_udp@dns-test-service.dns-7561.svc.cluster.local jessie_tcp@dns-test-service.dns-7561.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7561.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7561.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7561.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7561.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.96.164.184_udp@PTR 10.96.164.184_tcp@PTR]

Dec 30 14:11:22.254: INFO: DNS probes using dns-7561/dns-test-b75afe79-8436-4892-b7b2-a6fd872feaac succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:11:22.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7561" for this suite.
Dec 30 14:11:28.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:11:28.730: INFO: namespace dns-7561 deletion completed in 6.175055427s

• [SLOW TEST:29.788 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
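
Each probe in the two generated loops above reduces to one dig query per record type; the doubled $$ is Kubernetes' escaping for $ in container command fields, so the shell inside the pod sees a single $. The first wheezy UDP check, unescaped, is equivalent to:

check="$(dig +notcp +noall +answer +search dns-test-service.dns-7561.svc.cluster.local A)" \
  && test -n "$check" \
  && echo OK > /results/wheezy_udp@dns-test-service.dns-7561.svc.cluster.local
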
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:11:28.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 30 14:11:28.879: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a62858ad-ec9c-493f-ac21-7d38d5c8ee25" in namespace "projected-4870" to be "success or failure"
Dec 30 14:11:28.883: INFO: Pod "downwardapi-volume-a62858ad-ec9c-493f-ac21-7d38d5c8ee25": Phase="Pending", Reason="", readiness=false. Elapsed: 3.865029ms
Dec 30 14:11:30.889: INFO: Pod "downwardapi-volume-a62858ad-ec9c-493f-ac21-7d38d5c8ee25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00948455s
Dec 30 14:11:32.896: INFO: Pod "downwardapi-volume-a62858ad-ec9c-493f-ac21-7d38d5c8ee25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016789015s
Dec 30 14:11:34.918: INFO: Pod "downwardapi-volume-a62858ad-ec9c-493f-ac21-7d38d5c8ee25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038879457s
Dec 30 14:11:36.928: INFO: Pod "downwardapi-volume-a62858ad-ec9c-493f-ac21-7d38d5c8ee25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048336576s
STEP: Saw pod success
Dec 30 14:11:36.928: INFO: Pod "downwardapi-volume-a62858ad-ec9c-493f-ac21-7d38d5c8ee25" satisfied condition "success or failure"
Dec 30 14:11:36.933: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a62858ad-ec9c-493f-ac21-7d38d5c8ee25 container client-container: 
STEP: delete the pod
Dec 30 14:11:37.143: INFO: Waiting for pod downwardapi-volume-a62858ad-ec9c-493f-ac21-7d38d5c8ee25 to disappear
Dec 30 14:11:37.158: INFO: Pod downwardapi-volume-a62858ad-ec9c-493f-ac21-7d38d5c8ee25 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:11:37.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4870" for this suite.
Dec 30 14:11:43.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:11:43.302: INFO: namespace projected-4870 deletion completed in 6.137002765s

• [SLOW TEST:14.571 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
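
A minimal sketch of the sort of pod this spec creates: a projected downward API volume with an explicit defaultMode that every projected file inherits. All names and the mode value are illustrative, not read from this run:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                # files come out r--------
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]   # prints the applied mode
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
EOF
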
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:11:43.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-5d77bec4-0fcf-4324-90f7-21ea4098bb29
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-5d77bec4-0fcf-4324-90f7-21ea4098bb29
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:13:00.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6244" for this suite.
Dec 30 14:13:22.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:13:23.016: INFO: namespace configmap-6244 deletion completed in 22.176199651s

• [SLOW TEST:99.714 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
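
The long "waiting to observe update in volume" pause is expected: kubelet refreshes ConfigMap-backed volumes on its periodic sync rather than at the moment of the API update, so a change can take a minute or more to appear inside the container, as it did here. A hand-run version of the same experiment (names illustrative; --dry-run is the boolean form used by kubectl of this era):

kubectl create configmap test-upd --from-literal=data-1=value-1
# ...mount it into a pod as a configMap volume, then update it in place:
kubectl create configmap test-upd --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl replace -f -
# re-read the projected file until the new value shows up:
kubectl exec mypod -- cat /etc/configmap-volume/data-1
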
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:13:23.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 30 14:13:23.170: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:13:36.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-899" for this suite.
Dec 30 14:13:42.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:13:42.453: INFO: namespace init-container-899 deletion completed in 6.201028798s

• [SLOW TEST:19.437 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
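
What this spec builds, in outline: an init container that always fails, inside a pod with restartPolicy: Never. Init containers must all succeed before app containers start, and with Never the kubelet won't retry, so the pod goes straight to Failed and the app container never runs. A sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo        # illustrative
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fail
    image: busybox
    command: ["/bin/false"]   # always exits non-zero
  containers:
  - name: app
    image: busybox
    command: ["/bin/true"]    # never started
EOF
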
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:13:42.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:13:54.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4361" for this suite.
Dec 30 14:14:00.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:14:00.849: INFO: namespace kubelet-test-4361 deletion completed in 6.200976183s

• [SLOW TEST:18.396 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
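
The assertion behind this spec is that a container which keeps exiting non-zero reports a terminated state with a reason. Checked by hand it would look roughly like this (pod name illustrative; for a container currently being restarted the same data sits under lastState rather than state):

kubectl get pod bin-false-pod \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# typically prints: Error
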
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:14:00.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 30 14:14:01.025: INFO: Waiting up to 5m0s for pod "pod-9d837ab3-351b-4036-9692-da4279ee2db9" in namespace "emptydir-5443" to be "success or failure"
Dec 30 14:14:01.116: INFO: Pod "pod-9d837ab3-351b-4036-9692-da4279ee2db9": Phase="Pending", Reason="", readiness=false. Elapsed: 90.669863ms
Dec 30 14:14:03.127: INFO: Pod "pod-9d837ab3-351b-4036-9692-da4279ee2db9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100967866s
Dec 30 14:14:05.162: INFO: Pod "pod-9d837ab3-351b-4036-9692-da4279ee2db9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136764193s
Dec 30 14:14:07.176: INFO: Pod "pod-9d837ab3-351b-4036-9692-da4279ee2db9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150366459s
Dec 30 14:14:09.187: INFO: Pod "pod-9d837ab3-351b-4036-9692-da4279ee2db9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.161528076s
STEP: Saw pod success
Dec 30 14:14:09.187: INFO: Pod "pod-9d837ab3-351b-4036-9692-da4279ee2db9" satisfied condition "success or failure"
Dec 30 14:14:09.192: INFO: Trying to get logs from node iruya-node pod pod-9d837ab3-351b-4036-9692-da4279ee2db9 container test-container: 
STEP: delete the pod
Dec 30 14:14:09.237: INFO: Waiting for pod pod-9d837ab3-351b-4036-9692-da4279ee2db9 to disappear
Dec 30 14:14:09.287: INFO: Pod pod-9d837ab3-351b-4036-9692-da4279ee2db9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:14:09.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5443" for this suite.
Dec 30 14:14:15.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:14:15.482: INFO: namespace emptydir-5443 deletion completed in 6.189404098s

• [SLOW TEST:14.632 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
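
The volume under test here is an emptyDir on the node's default, disk-backed medium; the suite then inspects the mount's mode from inside the container. A sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo             # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir: {}                       # default medium: node disk
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]   # prints the mount's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
EOF
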
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:14:15.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 30 14:14:15.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-939'
Dec 30 14:14:15.697: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 30 14:14:15.697: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 30 14:14:15.738: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 30 14:14:15.752: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 30 14:14:15.787: INFO: scanned /root for discovery docs: 
Dec 30 14:14:15.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-939'
Dec 30 14:14:39.120: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 30 14:14:39.120: INFO: stdout: "Created e2e-test-nginx-rc-bc2fcfe817a9159456695beaffc69359\nScaling up e2e-test-nginx-rc-bc2fcfe817a9159456695beaffc69359 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-bc2fcfe817a9159456695beaffc69359 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-bc2fcfe817a9159456695beaffc69359 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 30 14:14:39.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-939'
Dec 30 14:14:39.308: INFO: stderr: ""
Dec 30 14:14:39.308: INFO: stdout: "e2e-test-nginx-rc-bc2fcfe817a9159456695beaffc69359-5vbvp "
Dec 30 14:14:39.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-bc2fcfe817a9159456695beaffc69359-5vbvp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-939'
Dec 30 14:14:39.403: INFO: stderr: ""
Dec 30 14:14:39.403: INFO: stdout: "true"
Dec 30 14:14:39.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-bc2fcfe817a9159456695beaffc69359-5vbvp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-939'
Dec 30 14:14:39.510: INFO: stderr: ""
Dec 30 14:14:39.511: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 30 14:14:39.511: INFO: e2e-test-nginx-rc-bc2fcfe817a9159456695beaffc69359-5vbvp is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Dec 30 14:14:39.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-939'
Dec 30 14:14:39.622: INFO: stderr: ""
Dec 30 14:14:39.623: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:14:39.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-939" for this suite.
Dec 30 14:14:45.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:14:45.803: INFO: namespace kubectl-939 deletion completed in 6.173425307s

• [SLOW TEST:30.321 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
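
kubectl flags both commands used above as deprecated: run --generator=run/v1 and rolling-update itself. The rollout-based replacement the warning points at applies to Deployments rather than bare replication controllers; the modern equivalent of this sequence is roughly (deployment and container names hypothetical):

kubectl set image deployment/e2e-test-nginx nginx=docker.io/library/nginx:1.14-alpine
kubectl rollout status deployment/e2e-test-nginx
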
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:14:45.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 30 14:14:45.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7644'
Dec 30 14:14:46.086: INFO: stderr: ""
Dec 30 14:14:46.086: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 30 14:14:46.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7644'
Dec 30 14:14:50.840: INFO: stderr: ""
Dec 30 14:14:50.840: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:14:50.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7644" for this suite.
Dec 30 14:14:56.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:14:57.095: INFO: namespace kubectl-7644 deletion completed in 6.233450179s

• [SLOW TEST:11.292 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
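
With --restart=Never the run-pod/v1 generator creates a bare Pod rather than a controller, which is why the cleanup above is a plain pod delete and nothing else is left behind:

kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine
kubectl get pod e2e-test-nginx-pod     # a Pod; no RC or Deployment was created
kubectl delete pod e2e-test-nginx-pod
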
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:14:57.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 30 14:14:57.249: INFO: Waiting up to 5m0s for pod "pod-30c3b78a-baf8-4885-9244-9292f6487c87" in namespace "emptydir-9305" to be "success or failure"
Dec 30 14:14:57.276: INFO: Pod "pod-30c3b78a-baf8-4885-9244-9292f6487c87": Phase="Pending", Reason="", readiness=false. Elapsed: 27.270879ms
Dec 30 14:14:59.286: INFO: Pod "pod-30c3b78a-baf8-4885-9244-9292f6487c87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036551108s
Dec 30 14:15:01.348: INFO: Pod "pod-30c3b78a-baf8-4885-9244-9292f6487c87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09922945s
Dec 30 14:15:03.356: INFO: Pod "pod-30c3b78a-baf8-4885-9244-9292f6487c87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106811991s
Dec 30 14:15:05.364: INFO: Pod "pod-30c3b78a-baf8-4885-9244-9292f6487c87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1146904s
Dec 30 14:15:07.371: INFO: Pod "pod-30c3b78a-baf8-4885-9244-9292f6487c87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.121606731s
STEP: Saw pod success
Dec 30 14:15:07.371: INFO: Pod "pod-30c3b78a-baf8-4885-9244-9292f6487c87" satisfied condition "success or failure"
Dec 30 14:15:07.376: INFO: Trying to get logs from node iruya-node pod pod-30c3b78a-baf8-4885-9244-9292f6487c87 container test-container: 
STEP: delete the pod
Dec 30 14:15:07.534: INFO: Waiting for pod pod-30c3b78a-baf8-4885-9244-9292f6487c87 to disappear
Dec 30 14:15:07.538: INFO: Pod pod-30c3b78a-baf8-4885-9244-9292f6487c87 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:15:07.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9305" for this suite.
Dec 30 14:15:13.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:15:13.701: INFO: namespace emptydir-9305 deletion completed in 6.159171432s

• [SLOW TEST:16.606 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
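
Same emptyDir volume as two specs earlier, but here the concern is a file written as root with mode 0777 on the default medium. Inside the test container the check effectively amounts to the following (an assumption about what the test image does, not a transcript of it):

touch /test-volume/test-file && chmod 0777 /test-volume/test-file
ls -l /test-volume/test-file    # expect: -rwxrwxrwx ... root root
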
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:15:13.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Dec 30 14:15:13.828: INFO: Waiting up to 5m0s for pod "client-containers-8cab2ad2-9930-4472-87cd-f9a0c3e777c0" in namespace "containers-7402" to be "success or failure"
Dec 30 14:15:13.832: INFO: Pod "client-containers-8cab2ad2-9930-4472-87cd-f9a0c3e777c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055488ms
Dec 30 14:15:15.843: INFO: Pod "client-containers-8cab2ad2-9930-4472-87cd-f9a0c3e777c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015686935s
Dec 30 14:15:17.856: INFO: Pod "client-containers-8cab2ad2-9930-4472-87cd-f9a0c3e777c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028206395s
Dec 30 14:15:19.868: INFO: Pod "client-containers-8cab2ad2-9930-4472-87cd-f9a0c3e777c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039908762s
Dec 30 14:15:21.889: INFO: Pod "client-containers-8cab2ad2-9930-4472-87cd-f9a0c3e777c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060892456s
Dec 30 14:15:23.901: INFO: Pod "client-containers-8cab2ad2-9930-4472-87cd-f9a0c3e777c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073629956s
STEP: Saw pod success
Dec 30 14:15:23.901: INFO: Pod "client-containers-8cab2ad2-9930-4472-87cd-f9a0c3e777c0" satisfied condition "success or failure"
Dec 30 14:15:23.906: INFO: Trying to get logs from node iruya-node pod client-containers-8cab2ad2-9930-4472-87cd-f9a0c3e777c0 container test-container: 
STEP: delete the pod
Dec 30 14:15:24.493: INFO: Waiting for pod client-containers-8cab2ad2-9930-4472-87cd-f9a0c3e777c0 to disappear
Dec 30 14:15:24.527: INFO: Pod client-containers-8cab2ad2-9930-4472-87cd-f9a0c3e777c0 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:15:24.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7402" for this suite.
Dec 30 14:15:30.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:15:30.783: INFO: namespace containers-7402 deletion completed in 6.234114005s

• [SLOW TEST:17.080 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
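
When both command and args are blank in the container spec, the runtime falls back to the image's own ENTRYPOINT and CMD; the four combinations behave as follows (a summary of the Kubernetes semantics that this spec and the earlier override-arguments spec exercise between them):

# container spec             -> what actually runs
#   neither command nor args -> image ENTRYPOINT + image CMD
#   command only             -> command            (image CMD ignored)
#   args only                -> image ENTRYPOINT + args
#   command and args         -> command + args
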
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:15:30.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2812
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 30 14:15:30.899: INFO: Found 0 stateful pods, waiting for 3
Dec 30 14:15:40.908: INFO: Found 2 stateful pods, waiting for 3
Dec 30 14:15:50.920: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:15:50.920: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:15:50.920: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 30 14:16:00.950: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:16:00.951: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:16:00.951: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:16:00.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2812 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 14:16:01.453: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 30 14:16:01.453: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 14:16:01.453: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 30 14:16:11.513: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 30 14:16:21.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2812 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 14:16:22.051: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 30 14:16:22.052: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 14:16:22.052: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 14:16:32.120: INFO: Waiting for StatefulSet statefulset-2812/ss2 to complete update
Dec 30 14:16:32.120: INFO: Waiting for Pod statefulset-2812/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:16:32.120: INFO: Waiting for Pod statefulset-2812/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:16:32.120: INFO: Waiting for Pod statefulset-2812/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:16:42.140: INFO: Waiting for StatefulSet statefulset-2812/ss2 to complete update
Dec 30 14:16:42.140: INFO: Waiting for Pod statefulset-2812/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:16:42.140: INFO: Waiting for Pod statefulset-2812/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:16:52.129: INFO: Waiting for StatefulSet statefulset-2812/ss2 to complete update
Dec 30 14:16:52.129: INFO: Waiting for Pod statefulset-2812/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:16:52.129: INFO: Waiting for Pod statefulset-2812/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:17:02.131: INFO: Waiting for StatefulSet statefulset-2812/ss2 to complete update
Dec 30 14:17:02.131: INFO: Waiting for Pod statefulset-2812/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:17:12.148: INFO: Waiting for StatefulSet statefulset-2812/ss2 to complete update
Dec 30 14:17:12.148: INFO: Waiting for Pod statefulset-2812/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Dec 30 14:17:22.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2812 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 14:17:22.654: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 30 14:17:22.654: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 14:17:22.654: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 14:17:32.705: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 30 14:17:42.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2812 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 14:17:43.203: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 30 14:17:43.204: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 14:17:43.204: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 14:17:53.271: INFO: Waiting for StatefulSet statefulset-2812/ss2 to complete update
Dec 30 14:17:53.271: INFO: Waiting for Pod statefulset-2812/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 14:17:53.271: INFO: Waiting for Pod statefulset-2812/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 14:17:53.271: INFO: Waiting for Pod statefulset-2812/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 14:18:03.285: INFO: Waiting for StatefulSet statefulset-2812/ss2 to complete update
Dec 30 14:18:03.285: INFO: Waiting for Pod statefulset-2812/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 14:18:03.285: INFO: Waiting for Pod statefulset-2812/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 14:18:13.283: INFO: Waiting for StatefulSet statefulset-2812/ss2 to complete update
Dec 30 14:18:13.284: INFO: Waiting for Pod statefulset-2812/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 14:18:23.287: INFO: Waiting for StatefulSet statefulset-2812/ss2 to complete update
Dec 30 14:18:23.287: INFO: Waiting for Pod statefulset-2812/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 30 14:18:33.356: INFO: Waiting for StatefulSet statefulset-2812/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 30 14:18:43.292: INFO: Deleting all statefulset in ns statefulset-2812
Dec 30 14:18:43.297: INFO: Scaling statefulset ss2 to 0
Dec 30 14:19:13.338: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 14:19:13.343: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:19:13.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2812" for this suite.
Dec 30 14:19:21.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:19:21.610: INFO: namespace statefulset-2812 deletion completed in 8.173608808s

• [SLOW TEST:230.827 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
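
For reference, a minimal sketch of the update-and-rollback flow this spec exercises, driven with kubectl rather than the framework's direct API calls. The StatefulSet name, namespace and images come from the log above; the container name "nginx" is an assumption, since the log never prints it.

# Trigger a rolling update by editing the pod template (container name assumed).
kubectl -n statefulset-2812 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine

# The controller replaces pods in reverse ordinal order (ss2-2, ss2-1, ss2-0).
kubectl -n statefulset-2812 rollout status statefulset/ss2

# Roll back to the previous controller revision, as the second half of the spec does.
kubectl -n statefulset-2812 rollout undo statefulset/ss2
kubectl -n statefulset-2812 rollout history statefulset/ss2
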
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:19:21.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 30 14:19:21.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2057'
Dec 30 14:19:24.692: INFO: stderr: ""
Dec 30 14:19:24.693: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 30 14:19:24.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2057'
Dec 30 14:19:24.862: INFO: stderr: ""
Dec 30 14:19:24.862: INFO: stdout: "update-demo-nautilus-2rvzp update-demo-nautilus-sg2zd "
Dec 30 14:19:24.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rvzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2057'
Dec 30 14:19:25.103: INFO: stderr: ""
Dec 30 14:19:25.103: INFO: stdout: ""
Dec 30 14:19:25.103: INFO: update-demo-nautilus-2rvzp is created but not running
Dec 30 14:19:30.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2057'
Dec 30 14:19:31.083: INFO: stderr: ""
Dec 30 14:19:31.083: INFO: stdout: "update-demo-nautilus-2rvzp update-demo-nautilus-sg2zd "
Dec 30 14:19:31.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rvzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2057'
Dec 30 14:19:31.692: INFO: stderr: ""
Dec 30 14:19:31.693: INFO: stdout: ""
Dec 30 14:19:31.693: INFO: update-demo-nautilus-2rvzp is created but not running
Dec 30 14:19:36.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2057'
Dec 30 14:19:36.879: INFO: stderr: ""
Dec 30 14:19:36.879: INFO: stdout: "update-demo-nautilus-2rvzp update-demo-nautilus-sg2zd "
Dec 30 14:19:36.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rvzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2057'
Dec 30 14:19:37.011: INFO: stderr: ""
Dec 30 14:19:37.011: INFO: stdout: "true"
Dec 30 14:19:37.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rvzp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2057'
Dec 30 14:19:37.102: INFO: stderr: ""
Dec 30 14:19:37.102: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 14:19:37.102: INFO: validating pod update-demo-nautilus-2rvzp
Dec 30 14:19:37.125: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 14:19:37.126: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 30 14:19:37.126: INFO: update-demo-nautilus-2rvzp is verified up and running
Dec 30 14:19:37.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sg2zd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2057'
Dec 30 14:19:37.223: INFO: stderr: ""
Dec 30 14:19:37.223: INFO: stdout: "true"
Dec 30 14:19:37.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sg2zd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2057'
Dec 30 14:19:37.340: INFO: stderr: ""
Dec 30 14:19:37.340: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 14:19:37.340: INFO: validating pod update-demo-nautilus-sg2zd
Dec 30 14:19:37.359: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 14:19:37.359: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 30 14:19:37.359: INFO: update-demo-nautilus-sg2zd is verified up and running
STEP: using delete to clean up resources
Dec 30 14:19:37.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2057'
Dec 30 14:19:37.480: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 14:19:37.480: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 30 14:19:37.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2057'
Dec 30 14:19:37.581: INFO: stderr: "No resources found.\n"
Dec 30 14:19:37.581: INFO: stdout: ""
Dec 30 14:19:37.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2057 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 30 14:19:37.744: INFO: stderr: ""
Dec 30 14:19:37.744: INFO: stdout: "update-demo-nautilus-2rvzp\nupdate-demo-nautilus-sg2zd\n"
Dec 30 14:19:38.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2057'
Dec 30 14:19:39.580: INFO: stderr: "No resources found.\n"
Dec 30 14:19:39.580: INFO: stdout: ""
Dec 30 14:19:39.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2057 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 30 14:19:39.846: INFO: stderr: ""
Dec 30 14:19:39.846: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:19:39.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2057" for this suite.
Dec 30 14:20:01.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:20:02.051: INFO: namespace kubectl-2057 deletion completed in 22.167245027s

• [SLOW TEST:40.440 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
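
The manifest for the replication controller is piped over stdin in the run above, so its exact contents never appear in the log. A plausible equivalent, assuming only what the log does show (controller name, selector label, container name and image):

cat <<'EOF' | kubectl --namespace=kubectl-2057 create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
EOF

# Poll the pods with the same Go-template approach the framework uses,
# then force-delete, mirroring the cleanup at the end of the spec.
kubectl --namespace=kubectl-2057 get pods -l name=update-demo \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
kubectl --namespace=kubectl-2057 delete rc update-demo-nautilus --grace-period=0 --force
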
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:20:02.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 30 14:20:02.174: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 30 14:20:02.187: INFO: Waiting for terminating namespaces to be deleted...
Dec 30 14:20:02.191: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 30 14:20:02.212: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 30 14:20:02.212: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 30 14:20:02.212: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 30 14:20:02.213: INFO: 	Container weave ready: true, restart count 0
Dec 30 14:20:02.213: INFO: 	Container weave-npc ready: true, restart count 0
Dec 30 14:20:02.213: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 30 14:20:02.226: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 30 14:20:02.226: INFO: 	Container kube-scheduler ready: true, restart count 10
Dec 30 14:20:02.226: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 30 14:20:02.226: INFO: 	Container coredns ready: true, restart count 0
Dec 30 14:20:02.226: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 30 14:20:02.226: INFO: 	Container etcd ready: true, restart count 0
Dec 30 14:20:02.226: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 30 14:20:02.226: INFO: 	Container weave ready: true, restart count 0
Dec 30 14:20:02.226: INFO: 	Container weave-npc ready: true, restart count 0
Dec 30 14:20:02.226: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 30 14:20:02.226: INFO: 	Container coredns ready: true, restart count 0
Dec 30 14:20:02.226: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 30 14:20:02.226: INFO: 	Container kube-controller-manager ready: true, restart count 14
Dec 30 14:20:02.226: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 30 14:20:02.226: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 30 14:20:02.226: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 30 14:20:02.226: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e52bfb2ffeee2b], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:20:03.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5409" for this suite.
Dec 30 14:20:09.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:20:09.499: INFO: namespace sched-pred-5409 deletion completed in 6.167577358s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.447 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
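
A minimal way to reproduce what this spec asserts: a pod whose nodeSelector matches no node label stays Pending and accumulates FailedScheduling events like the one above. The label key/value and the pause image are assumptions.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    example.com/does-not-exist: "true"   # matches no node in the cluster
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

kubectl get events --field-selector reason=FailedScheduling
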
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:20:09.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 30 14:20:09.737: INFO: Waiting up to 5m0s for pod "downward-api-663a3903-5c7c-48b4-8040-00eae5667d18" in namespace "downward-api-426" to be "success or failure"
Dec 30 14:20:09.753: INFO: Pod "downward-api-663a3903-5c7c-48b4-8040-00eae5667d18": Phase="Pending", Reason="", readiness=false. Elapsed: 16.269649ms
Dec 30 14:20:11.765: INFO: Pod "downward-api-663a3903-5c7c-48b4-8040-00eae5667d18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028045489s
Dec 30 14:20:13.776: INFO: Pod "downward-api-663a3903-5c7c-48b4-8040-00eae5667d18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039300322s
Dec 30 14:20:15.787: INFO: Pod "downward-api-663a3903-5c7c-48b4-8040-00eae5667d18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050457241s
Dec 30 14:20:17.817: INFO: Pod "downward-api-663a3903-5c7c-48b4-8040-00eae5667d18": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080705529s
Dec 30 14:20:19.831: INFO: Pod "downward-api-663a3903-5c7c-48b4-8040-00eae5667d18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093777709s
STEP: Saw pod success
Dec 30 14:20:19.831: INFO: Pod "downward-api-663a3903-5c7c-48b4-8040-00eae5667d18" satisfied condition "success or failure"
Dec 30 14:20:19.835: INFO: Trying to get logs from node iruya-node pod downward-api-663a3903-5c7c-48b4-8040-00eae5667d18 container dapi-container: 
STEP: delete the pod
Dec 30 14:20:19.956: INFO: Waiting for pod downward-api-663a3903-5c7c-48b4-8040-00eae5667d18 to disappear
Dec 30 14:20:19.968: INFO: Pod downward-api-663a3903-5c7c-48b4-8040-00eae5667d18 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:20:19.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-426" for this suite.
Dec 30 14:20:26.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:20:26.201: INFO: namespace downward-api-426 deletion completed in 6.227607461s

• [SLOW TEST:16.701 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
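
What the spec verifies: a container that reads limits.cpu/limits.memory through the downward API without declaring any limits gets the node's allocatable values instead. A sketch of such a pod (names and the busybox image are assumptions):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    # No resources.limits are set, so both values fall back to node allocatable.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
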
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:20:26.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Dec 30 14:20:26.285: INFO: Waiting up to 5m0s for pod "var-expansion-4e3af133-5203-41cf-97b7-8be752ab0451" in namespace "var-expansion-5059" to be "success or failure"
Dec 30 14:20:26.295: INFO: Pod "var-expansion-4e3af133-5203-41cf-97b7-8be752ab0451": Phase="Pending", Reason="", readiness=false. Elapsed: 9.461953ms
Dec 30 14:20:28.369: INFO: Pod "var-expansion-4e3af133-5203-41cf-97b7-8be752ab0451": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083703683s
Dec 30 14:20:30.387: INFO: Pod "var-expansion-4e3af133-5203-41cf-97b7-8be752ab0451": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101686062s
Dec 30 14:20:32.395: INFO: Pod "var-expansion-4e3af133-5203-41cf-97b7-8be752ab0451": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109510647s
Dec 30 14:20:34.404: INFO: Pod "var-expansion-4e3af133-5203-41cf-97b7-8be752ab0451": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11829686s
Dec 30 14:20:36.417: INFO: Pod "var-expansion-4e3af133-5203-41cf-97b7-8be752ab0451": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.132158728s
STEP: Saw pod success
Dec 30 14:20:36.418: INFO: Pod "var-expansion-4e3af133-5203-41cf-97b7-8be752ab0451" satisfied condition "success or failure"
Dec 30 14:20:36.422: INFO: Trying to get logs from node iruya-node pod var-expansion-4e3af133-5203-41cf-97b7-8be752ab0451 container dapi-container: 
STEP: delete the pod
Dec 30 14:20:36.469: INFO: Waiting for pod var-expansion-4e3af133-5203-41cf-97b7-8be752ab0451 to disappear
Dec 30 14:20:36.522: INFO: Pod var-expansion-4e3af133-5203-41cf-97b7-8be752ab0451 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:20:36.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5059" for this suite.
Dec 30 14:20:42.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:20:42.664: INFO: namespace var-expansion-5059 deletion completed in 6.132454803s

• [SLOW TEST:16.462 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
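
The substitution being tested: $(VAR) references in a container's args are expanded by the kubelet from the container's env before the process starts, with no shell involved. A sketch under assumed names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]   # $(MESSAGE) is expanded by the kubelet, not by sh
    env:
    - name: MESSAGE
      value: "hello from args substitution"
EOF
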
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:20:42.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:20:42.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5883" for this suite.
Dec 30 14:20:48.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:20:49.074: INFO: namespace services-5883 deletion completed in 6.336912098s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.410 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
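
This spec prints no STEPs: it inspects the built-in "kubernetes" service in the default namespace and asserts the API is exposed securely. An equivalent manual check:

kubectl get service kubernetes -n default \
  -o jsonpath='{.spec.ports[?(@.name=="https")].port}'   # expect: 443
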
------------------------------
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:20:49.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Dec 30 14:20:49.214: INFO: Waiting up to 5m0s for pod "client-containers-d3094411-1ce1-4650-98fa-e6af2d369365" in namespace "containers-7152" to be "success or failure"
Dec 30 14:20:49.238: INFO: Pod "client-containers-d3094411-1ce1-4650-98fa-e6af2d369365": Phase="Pending", Reason="", readiness=false. Elapsed: 23.730695ms
Dec 30 14:20:51.253: INFO: Pod "client-containers-d3094411-1ce1-4650-98fa-e6af2d369365": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038522772s
Dec 30 14:20:53.262: INFO: Pod "client-containers-d3094411-1ce1-4650-98fa-e6af2d369365": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047939683s
Dec 30 14:20:55.272: INFO: Pod "client-containers-d3094411-1ce1-4650-98fa-e6af2d369365": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057524975s
Dec 30 14:20:57.283: INFO: Pod "client-containers-d3094411-1ce1-4650-98fa-e6af2d369365": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068743415s
STEP: Saw pod success
Dec 30 14:20:57.283: INFO: Pod "client-containers-d3094411-1ce1-4650-98fa-e6af2d369365" satisfied condition "success or failure"
Dec 30 14:20:57.288: INFO: Trying to get logs from node iruya-node pod client-containers-d3094411-1ce1-4650-98fa-e6af2d369365 container test-container: 
STEP: delete the pod
Dec 30 14:20:57.361: INFO: Waiting for pod client-containers-d3094411-1ce1-4650-98fa-e6af2d369365 to disappear
Dec 30 14:20:57.459: INFO: Pod client-containers-d3094411-1ce1-4650-98fa-e6af2d369365 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:20:57.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7152" for this suite.
Dec 30 14:21:03.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:21:03.648: INFO: namespace containers-7152 deletion completed in 6.180877139s

• [SLOW TEST:14.574 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
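
The override being tested maps onto two pod-spec fields: command replaces the image's ENTRYPOINT and args replaces its CMD. A sketch with assumed names and image:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]            # replaces the image ENTRYPOINT
    args: ["override", "arguments"]   # replaces the image CMD
EOF
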
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:21:03.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-48beb47c-005e-47a3-b02c-65801601e2df
STEP: Creating a pod to test consume secrets
Dec 30 14:21:03.868: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3b11b94f-0d72-4459-8790-01ef2ea42886" in namespace "projected-4928" to be "success or failure"
Dec 30 14:21:03.884: INFO: Pod "pod-projected-secrets-3b11b94f-0d72-4459-8790-01ef2ea42886": Phase="Pending", Reason="", readiness=false. Elapsed: 16.379961ms
Dec 30 14:21:05.897: INFO: Pod "pod-projected-secrets-3b11b94f-0d72-4459-8790-01ef2ea42886": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028996087s
Dec 30 14:21:07.973: INFO: Pod "pod-projected-secrets-3b11b94f-0d72-4459-8790-01ef2ea42886": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104781604s
Dec 30 14:21:09.981: INFO: Pod "pod-projected-secrets-3b11b94f-0d72-4459-8790-01ef2ea42886": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112877737s
Dec 30 14:21:11.990: INFO: Pod "pod-projected-secrets-3b11b94f-0d72-4459-8790-01ef2ea42886": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121919522s
Dec 30 14:21:14.009: INFO: Pod "pod-projected-secrets-3b11b94f-0d72-4459-8790-01ef2ea42886": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.141607702s
STEP: Saw pod success
Dec 30 14:21:14.010: INFO: Pod "pod-projected-secrets-3b11b94f-0d72-4459-8790-01ef2ea42886" satisfied condition "success or failure"
Dec 30 14:21:14.014: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3b11b94f-0d72-4459-8790-01ef2ea42886 container secret-volume-test: 
STEP: delete the pod
Dec 30 14:21:14.095: INFO: Waiting for pod pod-projected-secrets-3b11b94f-0d72-4459-8790-01ef2ea42886 to disappear
Dec 30 14:21:14.126: INFO: Pod pod-projected-secrets-3b11b94f-0d72-4459-8790-01ef2ea42886 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:21:14.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4928" for this suite.
Dec 30 14:21:20.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:21:20.279: INFO: namespace projected-4928 deletion completed in 6.147263069s

• [SLOW TEST:16.630 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
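
"Multiple volumes" here means the same secret projected into two separate volumes of one pod. A sketch, assuming a secret with a single data-1 key:

kubectl create secret generic projected-secret-test --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:                      # two volumes, both projected from the same secret
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
EOF
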
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:21:20.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-29b162b4-5ab2-4715-a82c-0370db2641c6
STEP: Creating a pod to test consume configMaps
Dec 30 14:21:20.408: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1273cfe2-c2f9-450c-a8e5-0479177d0978" in namespace "projected-9003" to be "success or failure"
Dec 30 14:21:20.431: INFO: Pod "pod-projected-configmaps-1273cfe2-c2f9-450c-a8e5-0479177d0978": Phase="Pending", Reason="", readiness=false. Elapsed: 22.424129ms
Dec 30 14:21:22.441: INFO: Pod "pod-projected-configmaps-1273cfe2-c2f9-450c-a8e5-0479177d0978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032596107s
Dec 30 14:21:24.449: INFO: Pod "pod-projected-configmaps-1273cfe2-c2f9-450c-a8e5-0479177d0978": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041177347s
Dec 30 14:21:26.459: INFO: Pod "pod-projected-configmaps-1273cfe2-c2f9-450c-a8e5-0479177d0978": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051283552s
Dec 30 14:21:28.478: INFO: Pod "pod-projected-configmaps-1273cfe2-c2f9-450c-a8e5-0479177d0978": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069766577s
STEP: Saw pod success
Dec 30 14:21:28.478: INFO: Pod "pod-projected-configmaps-1273cfe2-c2f9-450c-a8e5-0479177d0978" satisfied condition "success or failure"
Dec 30 14:21:28.487: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1273cfe2-c2f9-450c-a8e5-0479177d0978 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 30 14:21:28.613: INFO: Waiting for pod pod-projected-configmaps-1273cfe2-c2f9-450c-a8e5-0479177d0978 to disappear
Dec 30 14:21:28.627: INFO: Pod pod-projected-configmaps-1273cfe2-c2f9-450c-a8e5-0479177d0978 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:21:28.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9003" for this suite.
Dec 30 14:21:34.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:21:34.811: INFO: namespace projected-9003 deletion completed in 6.175021098s

• [SLOW TEST:14.531 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
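
"Mappings and Item mode" refers to the items list of a projected configMap source: each key can be remapped to a new relative path and given its own file mode. A sketch with assumed names:

kubectl create configmap projected-configmap-test --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test
          items:
          - key: data-1
            path: path/to/data-2   # the key is remapped to this relative path
            mode: 0400             # per-item file mode ("Item mode")
EOF
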
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:21:34.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with the label name=pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:21:44.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8347" for this suite.
Dec 30 14:22:06.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:22:06.251: INFO: namespace replication-controller-8347 deletion completed in 22.172454288s

• [SLOW TEST:31.440 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
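
The Given/When/Then above compresses the whole adoption mechanic. A sketch assuming only the pod name and label from the log: a bare pod is created first, a controller with a matching selector follows, and the pod then carries an ownerReference to that controller.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: k8s.gcr.io/pause:3.1
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption       # matches the bare pod above, so it is adopted
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: k8s.gcr.io/pause:3.1
EOF

# After adoption the formerly orphan pod points at the controller:
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'
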
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:22:06.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 30 14:22:06.368: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:22:21.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5924" for this suite.
Dec 30 14:22:27.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:22:27.350: INFO: namespace pods-5924 deletion completed in 6.184863428s

• [SLOW TEST:21.099 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
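
The spec drives this through a watch on the pod list. The same lifecycle is observable from two shells (pod name and image are assumptions):

# Shell 1: watch creation, status transitions and removal as they happen.
kubectl get pods -l run=pod-submit-demo -w

# Shell 2: submit the pod, then remove it with a grace period so the
# kubelet's termination notice shows up in the watch before the delete.
kubectl run pod-submit-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
kubectl delete pod pod-submit-demo --grace-period=30
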
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:22:27.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 30 14:22:27.431: INFO: Waiting up to 5m0s for pod "downward-api-f0673988-8ad2-4f61-b07a-3a62efae4f76" in namespace "downward-api-2251" to be "success or failure"
Dec 30 14:22:27.451: INFO: Pod "downward-api-f0673988-8ad2-4f61-b07a-3a62efae4f76": Phase="Pending", Reason="", readiness=false. Elapsed: 19.915644ms
Dec 30 14:22:29.465: INFO: Pod "downward-api-f0673988-8ad2-4f61-b07a-3a62efae4f76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034260687s
Dec 30 14:22:31.480: INFO: Pod "downward-api-f0673988-8ad2-4f61-b07a-3a62efae4f76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049178209s
Dec 30 14:22:33.495: INFO: Pod "downward-api-f0673988-8ad2-4f61-b07a-3a62efae4f76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063349224s
Dec 30 14:22:35.502: INFO: Pod "downward-api-f0673988-8ad2-4f61-b07a-3a62efae4f76": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071265677s
Dec 30 14:22:37.510: INFO: Pod "downward-api-f0673988-8ad2-4f61-b07a-3a62efae4f76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078695516s
STEP: Saw pod success
Dec 30 14:22:37.510: INFO: Pod "downward-api-f0673988-8ad2-4f61-b07a-3a62efae4f76" satisfied condition "success or failure"
Dec 30 14:22:37.514: INFO: Trying to get logs from node iruya-node pod downward-api-f0673988-8ad2-4f61-b07a-3a62efae4f76 container dapi-container: 
STEP: delete the pod
Dec 30 14:22:37.567: INFO: Waiting for pod downward-api-f0673988-8ad2-4f61-b07a-3a62efae4f76 to disappear
Dec 30 14:22:37.599: INFO: Pod downward-api-f0673988-8ad2-4f61-b07a-3a62efae4f76 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:22:37.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2251" for this suite.
Dec 30 14:22:43.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:22:43.764: INFO: namespace downward-api-2251 deletion completed in 6.158728008s

• [SLOW TEST:16.413 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
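
The three values come from fieldRef selectors on the pod's own metadata and status. A sketch of the env block such a test pod uses (env var names and image are assumptions):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
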
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:22:43.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Dec 30 14:22:44.500: INFO: created pod pod-service-account-defaultsa
Dec 30 14:22:44.500: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 30 14:22:44.517: INFO: created pod pod-service-account-mountsa
Dec 30 14:22:44.517: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 30 14:22:44.540: INFO: created pod pod-service-account-nomountsa
Dec 30 14:22:44.540: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 30 14:22:44.552: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 30 14:22:44.552: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 30 14:22:44.611: INFO: created pod pod-service-account-mountsa-mountspec
Dec 30 14:22:44.611: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 30 14:22:44.681: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 30 14:22:44.681: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 30 14:22:44.820: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 30 14:22:44.820: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 30 14:22:44.838: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 30 14:22:44.838: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 30 14:22:44.876: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 30 14:22:44.876: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:22:44.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3854" for this suite.
Dec 30 14:23:10.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:23:10.477: INFO: namespace svcaccounts-3854 deletion completed in 25.439808633s

• [SLOW TEST:26.713 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
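
The nine pods above walk the 3x3 matrix of ServiceAccount-level versus pod-level automountServiceAccountToken settings; as the nomountsa-mountspec case shows, the pod-level field wins when both are set. A sketch of that winning combination (image assumed):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomountsa
automountServiceAccountToken: false     # opt out at the ServiceAccount level
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa-mountspec
spec:
  serviceAccountName: nomountsa
  automountServiceAccountToken: true    # pod-level setting overrides the ServiceAccount
  containers:
  - name: token-test
    image: k8s.gcr.io/pause:3.1
EOF
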
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:23:10.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-m8nz
STEP: Creating a pod to test atomic-volume-subpath
Dec 30 14:23:10.646: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-m8nz" in namespace "subpath-8412" to be "success or failure"
Dec 30 14:23:10.663: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Pending", Reason="", readiness=false. Elapsed: 17.563414ms
Dec 30 14:23:12.671: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024987801s
Dec 30 14:23:14.687: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041618792s
Dec 30 14:23:16.706: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060412955s
Dec 30 14:23:18.729: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083312357s
Dec 30 14:23:20.738: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Running", Reason="", readiness=true. Elapsed: 10.092229277s
Dec 30 14:23:22.776: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Running", Reason="", readiness=true. Elapsed: 12.130341422s
Dec 30 14:23:24.793: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Running", Reason="", readiness=true. Elapsed: 14.14745579s
Dec 30 14:23:26.803: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Running", Reason="", readiness=true. Elapsed: 16.157069002s
Dec 30 14:23:28.809: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Running", Reason="", readiness=true. Elapsed: 18.163619144s
Dec 30 14:23:30.816: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Running", Reason="", readiness=true. Elapsed: 20.17053359s
Dec 30 14:23:32.825: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Running", Reason="", readiness=true. Elapsed: 22.179719017s
Dec 30 14:23:34.838: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Running", Reason="", readiness=true. Elapsed: 24.191964768s
Dec 30 14:23:36.849: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Running", Reason="", readiness=true. Elapsed: 26.203429131s
Dec 30 14:23:38.891: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Running", Reason="", readiness=true. Elapsed: 28.245754446s
Dec 30 14:23:40.898: INFO: Pod "pod-subpath-test-configmap-m8nz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.251996628s
STEP: Saw pod success
Dec 30 14:23:40.898: INFO: Pod "pod-subpath-test-configmap-m8nz" satisfied condition "success or failure"
Dec 30 14:23:40.901: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-m8nz container test-container-subpath-configmap-m8nz: 
STEP: delete the pod
Dec 30 14:23:40.966: INFO: Waiting for pod pod-subpath-test-configmap-m8nz to disappear
Dec 30 14:23:41.028: INFO: Pod pod-subpath-test-configmap-m8nz no longer exists
STEP: Deleting pod pod-subpath-test-configmap-m8nz
Dec 30 14:23:41.029: INFO: Deleting pod "pod-subpath-test-configmap-m8nz" in namespace "subpath-8412"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:23:41.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8412" for this suite.
Dec 30 14:23:47.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:23:47.206: INFO: namespace subpath-8412 deletion completed in 6.159042244s

• [SLOW TEST:36.728 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
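
"mountPath of existing file" means the subPath mount lands on a file already present in the container image, replacing its content for that container only. A sketch with assumed names; /etc/passwd is chosen because it exists in the busybox image:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /etc/passwd"]   # prints the configMap value, not the image file
    volumeMounts:
    - name: config
      mountPath: /etc/passwd    # bind-mounts a single key over the existing file
      subPath: passwd
  volumes:
  - name: config
    configMap:
      name: subpath-configmap   # assumed to exist with a "passwd" key
EOF
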
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:23:47.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1230 14:23:57.730718       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 30 14:23:57.730: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:23:57.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6751" for this suite.
Dec 30 14:24:03.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:24:04.002: INFO: namespace gc-6751 deletion completed in 6.263544852s

• [SLOW TEST:16.795 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
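
"Not orphaning" corresponds to a cascading delete: dependents go down with their owner. With the kubectl matching this cluster (v1.15), cascade is a boolean; newer kubectl spells the same choice --cascade=background|foreground|orphan. The controller name and label below are assumptions, since the log never prints them:

kubectl delete rc test-rc --cascade=true   # delete the rc and let GC delete its pods
kubectl get pods -l name=test-rc           # eventually: No resources found.
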
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:24:04.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Dec 30 14:24:04.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 30 14:24:04.236: INFO: stderr: ""
Dec 30 14:24:04.236: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:24:04.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4019" for this suite.
Dec 30 14:24:10.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:24:10.397: INFO: namespace kubectl-4019 deletion completed in 6.152929604s

• [SLOW TEST:6.394 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
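[annotation] kubectl api-versions is a thin wrapper over the discovery endpoints, and the stdout captured above is exactly the flattened group/version list; the core ("legacy") group reports the bare string "v1". A sketch of the same check through client-go's discovery client, assuming a recent client-go and the kubeconfig path used by this run:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // ServerGroups returns the same data `kubectl api-versions` prints.
        groups, err := cs.Discovery().ServerGroups()
        if err != nil {
            panic(err)
        }
        found := false
        for _, g := range groups.Groups {
            for _, v := range g.Versions {
                if v.GroupVersion == "v1" {
                    found = true
                }
            }
        }
        fmt.Println("v1 available:", found)
    }
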
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:24:10.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-628b70e9-3d6f-4427-8b0c-a53bc0bdcd4a in namespace container-probe-9058
Dec 30 14:24:20.589: INFO: Started pod liveness-628b70e9-3d6f-4427-8b0c-a53bc0bdcd4a in namespace container-probe-9058
STEP: checking the pod's current state and verifying that restartCount is present
Dec 30 14:24:20.597: INFO: Initial restart count of pod liveness-628b70e9-3d6f-4427-8b0c-a53bc0bdcd4a is 0
Dec 30 14:24:34.718: INFO: Restart count of pod container-probe-9058/liveness-628b70e9-3d6f-4427-8b0c-a53bc0bdcd4a is now 1 (14.121077475s elapsed)
Dec 30 14:24:54.828: INFO: Restart count of pod container-probe-9058/liveness-628b70e9-3d6f-4427-8b0c-a53bc0bdcd4a is now 2 (34.230457947s elapsed)
Dec 30 14:25:16.954: INFO: Restart count of pod container-probe-9058/liveness-628b70e9-3d6f-4427-8b0c-a53bc0bdcd4a is now 3 (56.357201201s elapsed)
Dec 30 14:25:37.482: INFO: Restart count of pod container-probe-9058/liveness-628b70e9-3d6f-4427-8b0c-a53bc0bdcd4a is now 4 (1m16.884952793s elapsed)
Dec 30 14:26:49.022: INFO: Restart count of pod container-probe-9058/liveness-628b70e9-3d6f-4427-8b0c-a53bc0bdcd4a is now 5 (2m28.424375769s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:26:49.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9058" for this suite.
Dec 30 14:26:55.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:26:55.288: INFO: namespace container-probe-9058 deletion completed in 6.179894663s

• [SLOW TEST:164.890 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
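[annotation] The pod used here carries a liveness probe that starts failing shortly after startup, so the kubelet keeps killing and restarting the container; the assertion is that status.containerStatuses[].restartCount only ever grows, as the 1, 2, 3, 4, 5 sequence above shows. A sketch of such a pod, assuming the 1.15-era k8s.io/api field names (newer releases embed the probe handler as ProbeHandler) and a hypothetical busybox command, not the test's actual fixture:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The probe passes for ~10s, then /tmp/healthy disappears and every
        // probe fails, so the kubelet restarts the container repeatedly.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "liveness",
                    Image: "busybox",
                    Args: []string{"/bin/sh", "-c",
                        "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"cat", "/tmp/healthy"},
                            },
                        },
                        InitialDelaySeconds: 5,
                        PeriodSeconds:       5,
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }
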
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:26:55.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-93
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-93 to expose endpoints map[]
Dec 30 14:26:55.657: INFO: successfully validated that service endpoint-test2 in namespace services-93 exposes endpoints map[] (8.545914ms elapsed)
STEP: Creating pod pod1 in namespace services-93
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-93 to expose endpoints map[pod1:[80]]
Dec 30 14:26:59.886: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.205596207s elapsed, will retry)
Dec 30 14:27:04.967: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.286999167s elapsed, will retry)
Dec 30 14:27:09.026: INFO: successfully validated that service endpoint-test2 in namespace services-93 exposes endpoints map[pod1:[80]] (13.34609381s elapsed)
STEP: Creating pod pod2 in namespace services-93
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-93 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 30 14:27:14.469: INFO: Unexpected endpoints: found map[12738972-f189-4133-8bd2-fa3c523e80cb:[80]], expected map[pod1:[80] pod2:[80]] (5.423065453s elapsed, will retry)
Dec 30 14:27:22.863: INFO: Unexpected endpoints: found map[12738972-f189-4133-8bd2-fa3c523e80cb:[80]], expected map[pod1:[80] pod2:[80]] (13.81711354s elapsed, will retry)
Dec 30 14:27:23.893: INFO: successfully validated that service endpoint-test2 in namespace services-93 exposes endpoints map[pod1:[80] pod2:[80]] (14.847625143s elapsed)
STEP: Deleting pod pod1 in namespace services-93
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-93 to expose endpoints map[pod2:[80]]
Dec 30 14:27:25.034: INFO: successfully validated that service endpoint-test2 in namespace services-93 exposes endpoints map[pod2:[80]] (1.133680573s elapsed)
STEP: Deleting pod pod2 in namespace services-93
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-93 to expose endpoints map[]
Dec 30 14:27:27.677: INFO: successfully validated that service endpoint-test2 in namespace services-93 exposes endpoints map[] (2.638198396s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:27:28.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-93" for this suite.
Dec 30 14:27:50.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:27:50.891: INFO: namespace services-93 deletion completed in 22.344869418s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:55.602 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
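[annotation] The endpoint maps in the log (map[], map[pod1:[80]], map[pod1:[80] pod2:[80]]) mirror the Endpoints object that the endpoints controller keeps in sync with ready pods matching the service selector; the "Unexpected endpoints ... will retry" lines are just the poll catching the controller mid-update. A sketch of the two objects involved, with hypothetical labels and a pause image standing in for the test's server container:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        labels := map[string]string{"name": "endpoint-test2"} // hypothetical selector

        // Pods whose labels match Spec.Selector and whose readiness is true
        // show up in the service's Endpoints under their declared port.
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
            Spec: corev1.ServiceSpec{
                Selector: labels,
                Ports: []corev1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(80),
                }},
            },
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod1", Labels: labels},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.1",
                    Ports: []corev1.ContainerPort{{ContainerPort: 80}},
                }},
            },
        }
        for _, obj := range []interface{}{svc, pod} {
            b, _ := json.MarshalIndent(obj, "", "  ")
            fmt.Println(string(b))
        }
    }
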
SSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:27:50.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 14:27:51.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:28:05.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-13" for this suite.
Dec 30 14:28:49.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:28:49.395: INFO: namespace pods-13 deletion completed in 44.146551591s

• [SLOW TEST:58.504 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
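[annotation] The conformance test dials GET /api/v1/namespaces/{ns}/pods/{name}/log over a websocket; the same bytes can be read with an ordinary streamed request, which is what the hedged sketch below does instead (recent client-go assumed; pod name and namespace are hypothetical, the websocket handshake itself is omitted):

    package main

    import (
        "context"
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Stream the pod's log endpoint; a websocket client hitting the same
        // URL (as the e2e test does) receives the identical log data.
        req := cs.CoreV1().Pods("default").GetLogs(
            "pod-logs-websocket", // hypothetical pod name
            &corev1.PodLogOptions{Follow: false})
        rc, err := req.Stream(context.TODO())
        if err != nil {
            panic(err)
        }
        defer rc.Close()
        io.Copy(os.Stdout, rc)
    }
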
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:28:49.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Dec 30 14:28:49.707: INFO: Waiting up to 5m0s for pod "client-containers-80efec4a-7790-4231-bcd5-a43a51b154df" in namespace "containers-5529" to be "success or failure"
Dec 30 14:28:49.726: INFO: Pod "client-containers-80efec4a-7790-4231-bcd5-a43a51b154df": Phase="Pending", Reason="", readiness=false. Elapsed: 18.881461ms
Dec 30 14:28:51.751: INFO: Pod "client-containers-80efec4a-7790-4231-bcd5-a43a51b154df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043741348s
Dec 30 14:28:53.764: INFO: Pod "client-containers-80efec4a-7790-4231-bcd5-a43a51b154df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056151785s
Dec 30 14:28:55.771: INFO: Pod "client-containers-80efec4a-7790-4231-bcd5-a43a51b154df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063996882s
Dec 30 14:28:57.782: INFO: Pod "client-containers-80efec4a-7790-4231-bcd5-a43a51b154df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074517148s
Dec 30 14:28:59.800: INFO: Pod "client-containers-80efec4a-7790-4231-bcd5-a43a51b154df": Phase="Pending", Reason="", readiness=false. Elapsed: 10.092517987s
Dec 30 14:29:01.812: INFO: Pod "client-containers-80efec4a-7790-4231-bcd5-a43a51b154df": Phase="Pending", Reason="", readiness=false. Elapsed: 12.104205688s
Dec 30 14:29:03.826: INFO: Pod "client-containers-80efec4a-7790-4231-bcd5-a43a51b154df": Phase="Pending", Reason="", readiness=false. Elapsed: 14.118234142s
Dec 30 14:29:05.840: INFO: Pod "client-containers-80efec4a-7790-4231-bcd5-a43a51b154df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.132689402s
STEP: Saw pod success
Dec 30 14:29:05.840: INFO: Pod "client-containers-80efec4a-7790-4231-bcd5-a43a51b154df" satisfied condition "success or failure"
Dec 30 14:29:05.845: INFO: Trying to get logs from node iruya-node pod client-containers-80efec4a-7790-4231-bcd5-a43a51b154df container test-container: 
STEP: delete the pod
Dec 30 14:29:05.957: INFO: Waiting for pod client-containers-80efec4a-7790-4231-bcd5-a43a51b154df to disappear
Dec 30 14:29:06.090: INFO: Pod client-containers-80efec4a-7790-4231-bcd5-a43a51b154df no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:29:06.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5529" for this suite.
Dec 30 14:29:12.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:29:12.274: INFO: namespace containers-5529 deletion completed in 6.175880542s

• [SLOW TEST:22.876 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
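[annotation] Together with the docker-cmd spec earlier in this run, this covers the two override knobs on a container: Command replaces the image's ENTRYPOINT and Args replaces its CMD; setting only one leaves the other image default in place. A sketch of a pod overriding both (busybox image and echo arguments are illustrative, not the test's actual fixture):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox",
                    Command: []string{"/bin/echo"},           // overrides ENTRYPOINT
                    Args:    []string{"override", "command"}, // overrides CMD
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }
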
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:29:12.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-b719cb5a-685a-4506-bda2-0793e5f404d4
STEP: Creating a pod to test consume configMaps
Dec 30 14:29:12.576: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d" in namespace "projected-199" to be "success or failure"
Dec 30 14:29:12.586: INFO: Pod "pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.188892ms
Dec 30 14:29:14.603: INFO: Pod "pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027388798s
Dec 30 14:29:16.621: INFO: Pod "pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045193719s
Dec 30 14:29:18.631: INFO: Pod "pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05517376s
Dec 30 14:29:20.639: INFO: Pod "pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063066707s
Dec 30 14:29:22.652: INFO: Pod "pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.076284338s
Dec 30 14:29:24.678: INFO: Pod "pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.102427955s
Dec 30 14:29:26.684: INFO: Pod "pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.108376154s
Dec 30 14:29:28.704: INFO: Pod "pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.127800458s
STEP: Saw pod success
Dec 30 14:29:28.704: INFO: Pod "pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d" satisfied condition "success or failure"
Dec 30 14:29:28.710: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d container projected-configmap-volume-test: 
STEP: delete the pod
Dec 30 14:29:28.831: INFO: Waiting for pod pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d to disappear
Dec 30 14:29:28.852: INFO: Pod pod-projected-configmaps-9ee727ca-746d-451b-b126-751ac598742d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:29:28.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-199" for this suite.
Dec 30 14:29:34.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:29:35.008: INFO: namespace projected-199 deletion completed in 6.146638102s

• [SLOW TEST:22.733 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
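[annotation] defaultMode on a projected volume sets the permission bits of every projected file, and specs like this one read the mode of the mounted key back to verify it (hence [LinuxOnly]). A sketch of the volume definition, with a hypothetical configMap name and 0400 as an example mode:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // read-only for the owner on every projected file
        vol := corev1.Volume{
            Name: "projected-configmap-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "projected-configmap-test-volume", // hypothetical
                            },
                        },
                    }},
                },
            },
        }
        b, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(b))
    }
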
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:29:35.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 14:29:35.189: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 30 14:29:40.200: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 30 14:29:50.215: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 30 14:29:52.223: INFO: Creating deployment "test-rollover-deployment"
Dec 30 14:29:52.251: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 30 14:29:54.263: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 30 14:29:54.272: INFO: Ensure that both replica sets have 1 created replica
Dec 30 14:29:54.278: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 30 14:29:54.286: INFO: Updating deployment test-rollover-deployment
Dec 30 14:29:54.286: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 30 14:29:56.311: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 30 14:29:56.323: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 30 14:29:56.334: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 14:29:56.334: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312994, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 14:29:58.352: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 14:29:58.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312994, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 14:30:00.374: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 14:30:00.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312994, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 14:30:02.354: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 14:30:02.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312994, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 14:30:04.346: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 14:30:04.347: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312994, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 14:30:06.348: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 14:30:06.348: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312994, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 14:30:08.357: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 14:30:08.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312994, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 14:30:10.352: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 14:30:10.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312994, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 14:30:12.345: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 14:30:12.345: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713313010, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 14:30:14.618: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 14:30:14.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713313010, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 14:30:16.352: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 14:30:16.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713313010, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 14:30:18.349: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 14:30:18.349: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713313010, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 14:30:20.376: INFO: all replica sets need to contain the pod-template-hash label
Dec 30 14:30:20.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713313010, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713312992, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 14:30:22.353: INFO: 
Dec 30 14:30:22.353: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 30 14:30:22.376: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-8884,SelfLink:/apis/apps/v1/namespaces/deployment-8884/deployments/test-rollover-deployment,UID:548bab95-8439-410c-839f-7c60ffeb3083,ResourceVersion:18654316,Generation:2,CreationTimestamp:2019-12-30 14:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-30 14:29:52 +0000 UTC 2019-12-30 14:29:52 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-30 14:30:20 +0000 UTC 2019-12-30 14:29:52 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 30 14:30:22.394: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-8884,SelfLink:/apis/apps/v1/namespaces/deployment-8884/replicasets/test-rollover-deployment-854595fc44,UID:8cbfdc29-98ed-40b8-8746-03ff800d30c4,ResourceVersion:18654303,Generation:2,CreationTimestamp:2019-12-30 14:29:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 548bab95-8439-410c-839f-7c60ffeb3083 0xc002bd2f97 0xc002bd2f98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 30 14:30:22.394: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 30 14:30:22.394: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-8884,SelfLink:/apis/apps/v1/namespaces/deployment-8884/replicasets/test-rollover-controller,UID:a1b48f74-5925-48fc-9c9d-f7242e2eb2e9,ResourceVersion:18654314,Generation:2,CreationTimestamp:2019-12-30 14:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 548bab95-8439-410c-839f-7c60ffeb3083 0xc002bd2e8f 0xc002bd2ea0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 30 14:30:22.395: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-8884,SelfLink:/apis/apps/v1/namespaces/deployment-8884/replicasets/test-rollover-deployment-9b8b997cf,UID:91746472-76cb-44fc-a78e-4a55873d0855,ResourceVersion:18654260,Generation:2,CreationTimestamp:2019-12-30 14:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 548bab95-8439-410c-839f-7c60ffeb3083 0xc002bd3070 0xc002bd3071}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 30 14:30:22.404: INFO: Pod "test-rollover-deployment-854595fc44-2fdhq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-2fdhq,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-8884,SelfLink:/api/v1/namespaces/deployment-8884/pods/test-rollover-deployment-854595fc44-2fdhq,UID:48d9f655-f46f-44a8-9d92-7a404ae893a7,ResourceVersion:18654288,Generation:0,CreationTimestamp:2019-12-30 14:29:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 8cbfdc29-98ed-40b8-8746-03ff800d30c4 0xc0031e25e7 0xc0031e25e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hv56x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hv56x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-hv56x true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031e2660} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031e2680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:29:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:30:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:30:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:29:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-30 14:29:55 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-30 14:30:09 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://850efb63af4944efb6b2f469ab2653311f0458c32b5afb727d04ba2e7af018b3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:30:22.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8884" for this suite.
Dec 30 14:30:30.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:30:30.606: INFO: namespace deployment-8884 deletion completed in 8.196061015s

• [SLOW TEST:55.598 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
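[annotation] A rollover is a rolling update applied over an earlier, still-incomplete update: each pod-template change stamps a new pod-template-hash, creates a new ReplicaSet (revision 2 in the dumps above), and scales the old ones to zero. MinReadySeconds: 10 in the deployment spec explains the long stretch where ReadyReplicas is 2 but AvailableReplicas stays 1. A sketch of triggering the image update from Go, assuming a recent client-go and reusing the deployment and image names visible in this run:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        ctx := context.TODO()
        deployments := cs.AppsV1().Deployments("default") // hypothetical namespace
        deploy, err := deployments.Get(ctx, "test-rollover-deployment", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // Changing the pod template creates a new ReplicaSet and rolls the
        // old ones down to zero replicas, which is what the test waits for.
        deploy.Spec.Template.Spec.Containers[0].Image =
            "gcr.io/kubernetes-e2e-test-images/redis:1.0"
        if _, err := deployments.Update(ctx, deploy, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("rollover triggered; new ReplicaSet will replace the old")
    }
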
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:30:30.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 30 14:30:48.340: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:30:48.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6402" for this suite.
Dec 30 14:30:56.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:30:56.605: INFO: namespace container-runtime-6402 deletion completed in 8.207754802s

• [SLOW TEST:25.998 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
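[annotation] TerminationMessagePolicy: FallbackToLogsOnError only changes behaviour when a container fails with an empty termination-message file; here the pod succeeds after writing to /dev/termination-log, so the file contents ("OK" in the match line above) win. A sketch of such a container spec, with hypothetical names and shell command:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "term",
                    Image: "busybox",
                    // Succeed after writing the message file; the kubelet
                    // surfaces "OK" as the container's termination message.
                    Command: []string{"/bin/sh", "-c",
                        "echo -n OK > /dev/termination-log"},
                    TerminationMessagePath:   "/dev/termination-log",
                    TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }
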
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:30:56.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 30 14:30:56.812: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1934,SelfLink:/api/v1/namespaces/watch-1934/configmaps/e2e-watch-test-watch-closed,UID:146bccbc-1cd2-42f2-a3e3-bc6005e01b48,ResourceVersion:18654420,Generation:0,CreationTimestamp:2019-12-30 14:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 30 14:30:56.813: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1934,SelfLink:/api/v1/namespaces/watch-1934/configmaps/e2e-watch-test-watch-closed,UID:146bccbc-1cd2-42f2-a3e3-bc6005e01b48,ResourceVersion:18654421,Generation:0,CreationTimestamp:2019-12-30 14:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 30 14:30:56.863: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1934,SelfLink:/api/v1/namespaces/watch-1934/configmaps/e2e-watch-test-watch-closed,UID:146bccbc-1cd2-42f2-a3e3-bc6005e01b48,ResourceVersion:18654422,Generation:0,CreationTimestamp:2019-12-30 14:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 30 14:30:56.863: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1934,SelfLink:/api/v1/namespaces/watch-1934/configmaps/e2e-watch-test-watch-closed,UID:146bccbc-1cd2-42f2-a3e3-bc6005e01b48,ResourceVersion:18654423,Generation:0,CreationTimestamp:2019-12-30 14:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:30:56.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1934" for this suite.
Dec 30 14:31:03.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:31:03.212: INFO: namespace watch-1934 deletion completed in 6.334245223s

• [SLOW TEST:6.604 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
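
For reference, the restart behavior this spec verifies maps onto opening a watch at an explicit resourceVersion. A minimal client-go sketch, assuming the context-free Watch signature of the v1.15-era client (newer clients add a context parameter); namespace, label selector, and version string are taken from the log above:

// assumes: metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", cs *kubernetes.Clientset
// Re-open the watch at the last resourceVersion the closed watch delivered
// (18654421); the API server then replays every later event, which is why the
// MODIFIED (mutation: 2) and DELETED notifications above are observed.
w, err := cs.CoreV1().ConfigMaps("watch-1934").Watch(metav1.ListOptions{
    ResourceVersion: "18654421",
    LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
})
if err != nil {
    panic(err)
}
defer w.Stop()
for ev := range w.ResultChan() {
    _ = ev // ev.Type arrives as MODIFIED, DELETED, ... in version order
}
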
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:31:03.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-a1c76374-ea1a-4d46-af1a-fc7871020703
STEP: Creating a pod to test consume configMaps
Dec 30 14:31:03.475: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca" in namespace "projected-7685" to be "success or failure"
Dec 30 14:31:03.529: INFO: Pod "pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca": Phase="Pending", Reason="", readiness=false. Elapsed: 53.788355ms
Dec 30 14:31:05.541: INFO: Pod "pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066102981s
Dec 30 14:31:07.550: INFO: Pod "pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07528735s
Dec 30 14:31:09.558: INFO: Pod "pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083449671s
Dec 30 14:31:11.569: INFO: Pod "pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094078166s
Dec 30 14:31:13.585: INFO: Pod "pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109916076s
Dec 30 14:31:15.592: INFO: Pod "pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.117274862s
Dec 30 14:31:17.839: INFO: Pod "pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca": Phase="Pending", Reason="", readiness=false. Elapsed: 14.363948181s
Dec 30 14:31:19.851: INFO: Pod "pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.37641462s
STEP: Saw pod success
Dec 30 14:31:19.852: INFO: Pod "pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca" satisfied condition "success or failure"
Dec 30 14:31:19.863: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca container projected-configmap-volume-test: 
STEP: delete the pod
Dec 30 14:31:20.101: INFO: Waiting for pod pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca to disappear
Dec 30 14:31:20.123: INFO: Pod pod-projected-configmaps-4ca66b2a-7198-4bc6-8529-f5d1c3f5acca no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:31:20.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7685" for this suite.
Dec 30 14:31:26.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:31:26.328: INFO: namespace projected-7685 deletion completed in 6.186771555s

• [SLOW TEST:23.116 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
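
The pod this spec creates mounts the configMap through a projected volume and runs as a non-root UID, so the kubelet must make the projected files readable to that user. A rough sketch of such a spec (UID, image, command, and mount path are illustrative; the configMap name is the one created above):

// assumes: corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
uid := int64(1000) // any non-root UID
pod := &corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example", Namespace: "projected-7685"},
    Spec: corev1.PodSpec{
        SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
        Volumes: []corev1.Volume{{
            Name: "projected-configmap-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-a1c76374-ea1a-4d46-af1a-fc7871020703"},
                        },
                    }},
                },
            },
        }},
        Containers: []corev1.Container{{
            Name:         "projected-configmap-volume-test",
            Image:        "busybox",
            Command:      []string{"sh", "-c", "cat /etc/projected/*"},
            VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected"}},
        }},
        RestartPolicy: corev1.RestartPolicyNever,
    },
}
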
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:31:26.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 30 14:31:52.972: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 14:31:52.985: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 14:31:54.985: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 14:31:55.003: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 14:31:56.985: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 14:31:56.992: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 14:31:58.985: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 14:31:58.993: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 14:32:00.985: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 14:32:00.995: INFO: Pod pod-with-poststart-http-hook still exists
Dec 30 14:32:02.985: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 30 14:32:02.991: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:32:02.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9893" for this suite.
Dec 30 14:32:43.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:32:43.343: INFO: namespace container-lifecycle-hook-9893 deletion completed in 40.349082309s

• [SLOW TEST:77.014 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
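
The hook under test fires an HTTP GET after the container starts, which is what "check poststart hook" above waits on. A sketch of the container half, assuming the v1.15 API (where the hook handler type is corev1.Handler; later releases rename it LifecycleHandler); path, port, and target host are illustrative:

// assumes: corev1 "k8s.io/api/core/v1", intstr "k8s.io/apimachinery/pkg/util/intstr"
container := corev1.Container{
    Name:  "pod-with-poststart-http-hook",
    Image: "docker.io/library/nginx:1.14-alpine",
    Lifecycle: &corev1.Lifecycle{
        PostStart: &corev1.Handler{
            HTTPGet: &corev1.HTTPGetAction{
                Path: "/echo?msg=poststart", // hypothetical handler endpoint
                Port: intstr.FromInt(8080),
                Host: "10.44.0.1", // stand-in for the handler pod created in BeforeEach
            },
        },
    },
}
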
S
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:32:43.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-9d26908d-b52e-432a-8e86-29c7df22ff4e
STEP: Creating configMap with name cm-test-opt-upd-3272a665-e5a7-4811-8b9c-fc4eeb93e411
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9d26908d-b52e-432a-8e86-29c7df22ff4e
STEP: Updating configmap cm-test-opt-upd-3272a665-e5a7-4811-8b9c-fc4eeb93e411
STEP: Creating configMap with name cm-test-opt-create-6e5ff076-bb3b-4df1-adda-e7f43b1ec2c1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:34:34.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3315" for this suite.
Dec 30 14:34:58.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:34:58.697: INFO: namespace configmap-3315 deletion completed in 24.438528668s

• [SLOW TEST:135.354 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
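
"optional" here refers to the Optional flag on the volume's configMap source: with it set, the pod keeps running when cm-test-opt-del-9d26908d-b52e-432a-8e86-29c7df22ff4e is deleted and picks up cm-test-opt-create-6e5ff076-bb3b-4df1-adda-e7f43b1ec2c1 once it appears, the kubelet's periodic sync rewriting the mounted files; that rewrite is the update the test waits to observe. A sketch of one such volume (mount details omitted):

// assumes: corev1 "k8s.io/api/core/v1"
optional := true
vol := corev1.Volume{
    Name: "cm-volume-create",
    VolumeSource: corev1.VolumeSource{
        ConfigMap: &corev1.ConfigMapVolumeSource{
            LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create-6e5ff076-bb3b-4df1-adda-e7f43b1ec2c1"},
            // Optional: the pod starts (and stays up) even while this
            // configMap does not exist yet.
            Optional: &optional,
        },
    },
}
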
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:34:58.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 30 14:34:58.972: INFO: Waiting up to 5m0s for pod "pod-5127ad40-d4f0-491a-a2d5-8695044966b9" in namespace "emptydir-6223" to be "success or failure"
Dec 30 14:34:59.128: INFO: Pod "pod-5127ad40-d4f0-491a-a2d5-8695044966b9": Phase="Pending", Reason="", readiness=false. Elapsed: 156.541987ms
Dec 30 14:35:01.137: INFO: Pod "pod-5127ad40-d4f0-491a-a2d5-8695044966b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165898664s
Dec 30 14:35:03.217: INFO: Pod "pod-5127ad40-d4f0-491a-a2d5-8695044966b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245114269s
Dec 30 14:35:05.223: INFO: Pod "pod-5127ad40-d4f0-491a-a2d5-8695044966b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.251331152s
Dec 30 14:35:07.235: INFO: Pod "pod-5127ad40-d4f0-491a-a2d5-8695044966b9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26311293s
Dec 30 14:35:09.267: INFO: Pod "pod-5127ad40-d4f0-491a-a2d5-8695044966b9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.2950043s
Dec 30 14:35:11.595: INFO: Pod "pod-5127ad40-d4f0-491a-a2d5-8695044966b9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.623493773s
Dec 30 14:35:13.608: INFO: Pod "pod-5127ad40-d4f0-491a-a2d5-8695044966b9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.636001975s
Dec 30 14:35:15.620: INFO: Pod "pod-5127ad40-d4f0-491a-a2d5-8695044966b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.648370223s
STEP: Saw pod success
Dec 30 14:35:15.620: INFO: Pod "pod-5127ad40-d4f0-491a-a2d5-8695044966b9" satisfied condition "success or failure"
Dec 30 14:35:15.625: INFO: Trying to get logs from node iruya-node pod pod-5127ad40-d4f0-491a-a2d5-8695044966b9 container test-container: 
STEP: delete the pod
Dec 30 14:35:15.869: INFO: Waiting for pod pod-5127ad40-d4f0-491a-a2d5-8695044966b9 to disappear
Dec 30 14:35:15.908: INFO: Pod pod-5127ad40-d4f0-491a-a2d5-8695044966b9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:35:15.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6223" for this suite.
Dec 30 14:35:22.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:35:22.699: INFO: namespace emptydir-6223 deletion completed in 6.759236072s

• [SLOW TEST:24.000 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
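
The "(non-root,0777,tmpfs)" triple names the matrix point being tested: run as a non-root user, create a file with 0777 permissions, on a memory-backed emptyDir. The volume side of that is just the medium field; a sketch:

// assumes: corev1 "k8s.io/api/core/v1"
vol := corev1.Volume{
    Name: "test-volume",
    VolumeSource: corev1.VolumeSource{
        EmptyDir: &corev1.EmptyDirVolumeSource{
            // StorageMediumMemory mounts a tmpfs rather than using node disk;
            // contents count against pod memory and are lost on node reboot.
            // The 0777 mode is applied by the test container when it creates
            // the file, not by the volume itself.
            Medium: corev1.StorageMediumMemory,
        },
    },
}
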
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:35:22.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 30 14:35:22.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3225'
Dec 30 14:35:25.184: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 30 14:35:25.185: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Dec 30 14:35:25.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3225'
Dec 30 14:35:25.731: INFO: stderr: ""
Dec 30 14:35:25.731: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:35:25.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3225" for this suite.
Dec 30 14:35:47.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:35:47.920: INFO: namespace kubectl-3225 deletion completed in 22.180971218s

• [SLOW TEST:25.219 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
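
The deprecation warning above is the point of the "run default" fixture: without --generator, kubectl run on this version creates a Deployment (deployment/apps.v1). A rough apps/v1 equivalent of what the generator produced (the run=... label and single replica are its defaults):

// assumes: appsv1 "k8s.io/api/apps/v1", corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
replicas := int32(1)
labels := map[string]string{"run": "e2e-test-nginx-deployment"}
dep := &appsv1.Deployment{
    ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment", Namespace: "kubectl-3225", Labels: labels},
    Spec: appsv1.DeploymentSpec{
        Replicas: &replicas,
        Selector: &metav1.LabelSelector{MatchLabels: labels},
        Template: corev1.PodTemplateSpec{
            ObjectMeta: metav1.ObjectMeta{Labels: labels},
            Spec: corev1.PodSpec{Containers: []corev1.Container{{
                Name:  "e2e-test-nginx-deployment",
                Image: "docker.io/library/nginx:1.14-alpine",
            }}},
        },
    },
}
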
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:35:47.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 30 14:35:48.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a" in namespace "downward-api-1559" to be "success or failure"
Dec 30 14:35:48.464: INFO: Pod "downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.88461ms
Dec 30 14:35:50.565: INFO: Pod "downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13578099s
Dec 30 14:35:52.585: INFO: Pod "downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156030908s
Dec 30 14:35:54.603: INFO: Pod "downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173944477s
Dec 30 14:35:56.899: INFO: Pod "downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.470099941s
Dec 30 14:35:58.906: INFO: Pod "downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.477682407s
Dec 30 14:36:00.917: INFO: Pod "downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.487890213s
Dec 30 14:36:02.932: INFO: Pod "downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.503180913s
Dec 30 14:36:04.941: INFO: Pod "downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.51269416s
STEP: Saw pod success
Dec 30 14:36:04.942: INFO: Pod "downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a" satisfied condition "success or failure"
Dec 30 14:36:04.945: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a container client-container: 
STEP: delete the pod
Dec 30 14:36:05.041: INFO: Waiting for pod downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a to disappear
Dec 30 14:36:05.245: INFO: Pod downwardapi-volume-2c0e6eaa-f91a-4916-875f-18f2f4bd307a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:36:05.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1559" for this suite.
Dec 30 14:36:11.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:36:11.470: INFO: namespace downward-api-1559 deletion completed in 6.217312011s

• [SLOW TEST:23.551 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
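
The downward API volume plugin exposes a container's resource limits as files; the pod above mounts something like the following, where the kubelet writes the memory limit in bytes into the named file (path is illustrative; the container name follows the log):

// assumes: corev1 "k8s.io/api/core/v1"
vol := corev1.Volume{
    Name: "podinfo",
    VolumeSource: corev1.VolumeSource{
        DownwardAPI: &corev1.DownwardAPIVolumeSource{
            Items: []corev1.DownwardAPIVolumeFile{{
                Path: "memory_limit", // file the test container reads back
                ResourceFieldRef: &corev1.ResourceFieldSelector{
                    ContainerName: "client-container",
                    Resource:      "limits.memory",
                },
            }},
        },
    },
}
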
SSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:36:11.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-355.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-355.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-355.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-355.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-355.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-355.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 30 14:36:35.861: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-355.svc.cluster.local from pod dns-355/dns-test-232f3ad9-00df-4e07-b70d-b04ae76d7b75: the server could not find the requested resource (get pods dns-test-232f3ad9-00df-4e07-b70d-b04ae76d7b75)
Dec 30 14:36:35.889: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-355/dns-test-232f3ad9-00df-4e07-b70d-b04ae76d7b75: the server could not find the requested resource (get pods dns-test-232f3ad9-00df-4e07-b70d-b04ae76d7b75)
Dec 30 14:36:35.894: INFO: Unable to read jessie_udp@PodARecord from pod dns-355/dns-test-232f3ad9-00df-4e07-b70d-b04ae76d7b75: the server could not find the requested resource (get pods dns-test-232f3ad9-00df-4e07-b70d-b04ae76d7b75)
Dec 30 14:36:35.901: INFO: Unable to read jessie_tcp@PodARecord from pod dns-355/dns-test-232f3ad9-00df-4e07-b70d-b04ae76d7b75: the server could not find the requested resource (get pods dns-test-232f3ad9-00df-4e07-b70d-b04ae76d7b75)
Dec 30 14:36:35.901: INFO: Lookups using dns-355/dns-test-232f3ad9-00df-4e07-b70d-b04ae76d7b75 failed for: [jessie_hosts@dns-querier-1.dns-test-service.dns-355.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 30 14:36:41.033: INFO: DNS probes using dns-355/dns-test-232f3ad9-00df-4e07-b70d-b04ae76d7b75 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:36:41.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-355" for this suite.
Dec 30 14:36:49.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:36:49.395: INFO: namespace dns-355 deletion completed in 8.206302554s

• [SLOW TEST:37.924 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
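
For context: the probe script above derives the pod's A record name by dashing its IP, so a pod at 10.44.0.1 in this namespace would query 10-44-0-1.dns-355.pod.cluster.local. The getent checks confirm the kubelet-managed /etc/hosts resolves the pod's own name and FQDN, and the paired dig invocations exercise the record over both UDP (+notcp) and TCP (+tcp). The "Unable to read" lines are the framework polling for result files before the prober containers have written them; as logged, the probes succeed a few seconds later.
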
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:36:49.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4485/configmap-test-70280a96-2745-4098-bdb7-cbd1187aa7e6
STEP: Creating a pod to test consume configMaps
Dec 30 14:36:49.627: INFO: Waiting up to 5m0s for pod "pod-configmaps-139625cb-ec3b-4f7a-8bd4-db9cce3db03a" in namespace "configmap-4485" to be "success or failure"
Dec 30 14:36:49.802: INFO: Pod "pod-configmaps-139625cb-ec3b-4f7a-8bd4-db9cce3db03a": Phase="Pending", Reason="", readiness=false. Elapsed: 174.209052ms
Dec 30 14:36:51.810: INFO: Pod "pod-configmaps-139625cb-ec3b-4f7a-8bd4-db9cce3db03a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182180001s
Dec 30 14:36:53.828: INFO: Pod "pod-configmaps-139625cb-ec3b-4f7a-8bd4-db9cce3db03a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200039799s
Dec 30 14:36:55.839: INFO: Pod "pod-configmaps-139625cb-ec3b-4f7a-8bd4-db9cce3db03a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211623966s
Dec 30 14:36:57.848: INFO: Pod "pod-configmaps-139625cb-ec3b-4f7a-8bd4-db9cce3db03a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.220564626s
Dec 30 14:36:59.858: INFO: Pod "pod-configmaps-139625cb-ec3b-4f7a-8bd4-db9cce3db03a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.230976634s
Dec 30 14:37:01.868: INFO: Pod "pod-configmaps-139625cb-ec3b-4f7a-8bd4-db9cce3db03a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.240306285s
Dec 30 14:37:03.889: INFO: Pod "pod-configmaps-139625cb-ec3b-4f7a-8bd4-db9cce3db03a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.261107454s
STEP: Saw pod success
Dec 30 14:37:03.889: INFO: Pod "pod-configmaps-139625cb-ec3b-4f7a-8bd4-db9cce3db03a" satisfied condition "success or failure"
Dec 30 14:37:03.905: INFO: Trying to get logs from node iruya-node pod pod-configmaps-139625cb-ec3b-4f7a-8bd4-db9cce3db03a container env-test: 
STEP: delete the pod
Dec 30 14:37:03.998: INFO: Waiting for pod pod-configmaps-139625cb-ec3b-4f7a-8bd4-db9cce3db03a to disappear
Dec 30 14:37:04.007: INFO: Pod pod-configmaps-139625cb-ec3b-4f7a-8bd4-db9cce3db03a no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:37:04.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4485" for this suite.
Dec 30 14:37:12.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:37:12.347: INFO: namespace configmap-4485 deletion completed in 8.30734244s

• [SLOW TEST:22.950 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
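
Consuming a configMap "via environment variable" means an env entry with a configMapKeyRef, resolved once at container start (unlike volume mounts, env values are not updated afterwards). A sketch against the configMap created above (env name and key are illustrative):

// assumes: corev1 "k8s.io/api/core/v1"
env := corev1.EnvVar{
    Name: "CONFIG_DATA_1",
    ValueFrom: &corev1.EnvVarSource{
        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-70280a96-2745-4098-bdb7-cbd1187aa7e6"},
            Key: "data-1", // hypothetical key in the configMap
        },
    },
}
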
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:37:12.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 14:37:12.730: INFO: Create a RollingUpdate DaemonSet
Dec 30 14:37:12.743: INFO: Check that daemon pods launch on every node of the cluster
Dec 30 14:37:12.767: INFO: Number of nodes with available pods: 0
Dec 30 14:37:12.767: INFO: Node iruya-node is running more than one daemon pod
Dec 30 14:37:15.321: INFO: Number of nodes with available pods: 0
Dec 30 14:37:15.321: INFO: Node iruya-node is running more than one daemon pod
Dec 30 14:37:15.867: INFO: Number of nodes with available pods: 0
Dec 30 14:37:15.867: INFO: Node iruya-node is running more than one daemon pod
Dec 30 14:37:17.194: INFO: Number of nodes with available pods: 0
Dec 30 14:37:17.194: INFO: Node iruya-node is running more than one daemon pod
Dec 30 14:37:17.909: INFO: Number of nodes with available pods: 0
Dec 30 14:37:17.909: INFO: Node iruya-node is running more than one daemon pod
Dec 30 14:37:18.793: INFO: Number of nodes with available pods: 0
Dec 30 14:37:18.793: INFO: Node iruya-node is running more than one daemon pod
Dec 30 14:37:19.779: INFO: Number of nodes with available pods: 0
Dec 30 14:37:19.779: INFO: Node iruya-node is running more than one daemon pod
Dec 30 14:37:23.146: INFO: Number of nodes with available pods: 0
Dec 30 14:37:23.146: INFO: Node iruya-node is running more than one daemon pod
Dec 30 14:37:24.545: INFO: Number of nodes with available pods: 0
Dec 30 14:37:24.546: INFO: Node iruya-node is running more than one daemon pod
Dec 30 14:37:24.805: INFO: Number of nodes with available pods: 0
Dec 30 14:37:24.805: INFO: Node iruya-node is running more than one daemon pod
Dec 30 14:37:25.978: INFO: Number of nodes with available pods: 0
Dec 30 14:37:25.978: INFO: Node iruya-node is running more than one daemon pod
Dec 30 14:37:26.796: INFO: Number of nodes with available pods: 0
Dec 30 14:37:26.796: INFO: Node iruya-node is running more than one daemon pod
Dec 30 14:37:27.785: INFO: Number of nodes with available pods: 2
Dec 30 14:37:27.785: INFO: Number of running nodes: 2, number of available pods: 2
Dec 30 14:37:27.785: INFO: Update the DaemonSet to trigger a rollout
Dec 30 14:37:27.800: INFO: Updating DaemonSet daemon-set
Dec 30 14:37:50.228: INFO: Roll back the DaemonSet before rollout is complete
Dec 30 14:37:50.242: INFO: Updating DaemonSet daemon-set
Dec 30 14:37:50.242: INFO: Make sure DaemonSet rollback is complete
Dec 30 14:37:50.287: INFO: Wrong image for pod: daemon-set-b8pzp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 30 14:37:50.287: INFO: Pod daemon-set-b8pzp is not available
Dec 30 14:37:52.187: INFO: Wrong image for pod: daemon-set-b8pzp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 30 14:37:52.187: INFO: Pod daemon-set-b8pzp is not available
Dec 30 14:37:52.793: INFO: Wrong image for pod: daemon-set-b8pzp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 30 14:37:52.793: INFO: Pod daemon-set-b8pzp is not available
Dec 30 14:37:53.803: INFO: Wrong image for pod: daemon-set-b8pzp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 30 14:37:53.803: INFO: Pod daemon-set-b8pzp is not available
Dec 30 14:37:54.783: INFO: Wrong image for pod: daemon-set-b8pzp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 30 14:37:54.783: INFO: Pod daemon-set-b8pzp is not available
Dec 30 14:37:56.025: INFO: Wrong image for pod: daemon-set-b8pzp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 30 14:37:56.025: INFO: Pod daemon-set-b8pzp is not available
Dec 30 14:37:56.797: INFO: Pod daemon-set-lnmp4 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9250, will wait for the garbage collector to delete the pods
Dec 30 14:37:56.881: INFO: Deleting DaemonSet.extensions daemon-set took: 8.175368ms
Dec 30 14:37:58.482: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.600802241s
Dec 30 14:38:16.623: INFO: Number of nodes with available pods: 0
Dec 30 14:38:16.623: INFO: Number of running nodes: 0, number of available pods: 0
Dec 30 14:38:16.630: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9250/daemonsets","resourceVersion":"18655329"},"items":null}

Dec 30 14:38:16.638: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9250/pods","resourceVersion":"18655329"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:38:16.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9250" for this suite.
Dec 30 14:38:24.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:38:24.819: INFO: namespace daemonsets-9250 deletion completed in 8.148126067s

• [SLOW TEST:72.469 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
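
The rollback at 14:37:50 is what kubectl rollout undo daemonset/daemon-set performs: restoring the previous pod template. Because a RollingUpdate DaemonSet only replaces pods whose template hash differs, pods that never ran the bad foo:non-existent image are left alone, which is the "without unnecessary restarts" assertion. Programmatically it reduces to an update, sketched here with the v1.15-era client signatures (no context argument):

// assumes: metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", cs *kubernetes.Clientset
ds, err := cs.AppsV1().DaemonSets("daemonsets-9250").Get("daemon-set", metav1.GetOptions{})
if err != nil {
    panic(err)
}
// Restore the pre-rollout image; only pods still running foo:non-existent
// get replaced.
ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.14-alpine"
if _, err := cs.AppsV1().DaemonSets("daemonsets-9250").Update(ds); err != nil {
    panic(err)
}
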
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:38:24.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 30 14:38:41.690: INFO: Successfully updated pod "annotationupdatec7860427-d414-43d0-9238-fb9ad78a9d4a"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:38:43.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7126" for this suite.
Dec 30 14:39:05.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:39:06.056: INFO: namespace projected-7126 deletion completed in 22.163610185s

• [SLOW TEST:41.236 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
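
Unlike env-based downward API, annotation values projected into a volume are live: the kubelet rewrites the file on its sync loop after the pod's metadata changes, which is what "Successfully updated pod" followed by the in-volume check verifies. A sketch of the projected downwardAPI source involved (volume name and path are illustrative):

// assumes: corev1 "k8s.io/api/core/v1"
vol := corev1.Volume{
    Name: "podinfo",
    VolumeSource: corev1.VolumeSource{
        Projected: &corev1.ProjectedVolumeSource{
            Sources: []corev1.VolumeProjection{{
                DownwardAPI: &corev1.DownwardAPIProjection{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "annotations", // rewritten when annotations change
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                    }},
                },
            }},
        },
    },
}
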
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:39:06.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-227
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-227
STEP: Deleting pre-stop pod
Dec 30 14:39:41.491: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:39:41.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-227" for this suite.
Dec 30 14:40:27.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:40:27.741: INFO: namespace prestop-227 deletion completed in 46.223307158s

• [SLOW TEST:81.684 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
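
The tester pod carries a preStop hook that phones home to the server pod before its container is killed; the {"prestop": 1} in the server state above is that call landing. A sketch of such a hook, again with the v1.15 corev1.Handler type and an illustrative endpoint:

// assumes: corev1 "k8s.io/api/core/v1"
lifecycle := &corev1.Lifecycle{
    PreStop: &corev1.Handler{
        Exec: &corev1.ExecAction{
            // Runs inside the container during graceful termination; the URL
            // below is a stand-in for the server pod's report endpoint.
            Command: []string{"wget", "-O-", "http://10.44.0.2:8080/write"},
        },
    },
}
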
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:40:27.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Dec 30 14:40:45.446: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Dec 30 14:41:00.655: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:41:00.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4232" for this suite.
Dec 30 14:41:06.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:41:06.861: INFO: namespace pods-4232 deletion completed in 6.195759739s

• [SLOW TEST:39.120 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
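
"Delete Grace Period" covers the window between the DELETE call and the kubelet confirming termination: the object gains a deletionTimestamp and deletionGracePeriodSeconds, and only disappears once the kubelet reports the containers gone (or the grace period lapses), which is the "termination notice" the test polls for above. Issuing such a delete with the v1.15-era client looks like this sketch (the pod name is hypothetical):

// assumes: metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", cs *kubernetes.Clientset
grace := int64(30)
if err := cs.CoreV1().Pods("pods-4232").Delete("pod-submit-remove", &metav1.DeleteOptions{
    GracePeriodSeconds: &grace, // 30s matches the pod-spec default
}); err != nil {
    panic(err)
}
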
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:41:06.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-1d170b1c-d79b-4cad-b4a8-1294eba4176d
STEP: Creating a pod to test consume secrets
Dec 30 14:41:07.045: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e" in namespace "projected-970" to be "success or failure"
Dec 30 14:41:07.162: INFO: Pod "pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e": Phase="Pending", Reason="", readiness=false. Elapsed: 117.689852ms
Dec 30 14:41:09.170: INFO: Pod "pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125211368s
Dec 30 14:41:11.181: INFO: Pod "pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136165148s
Dec 30 14:41:13.188: INFO: Pod "pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142789029s
Dec 30 14:41:15.201: INFO: Pod "pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156192558s
Dec 30 14:41:17.214: INFO: Pod "pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.168977233s
Dec 30 14:41:19.225: INFO: Pod "pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.180405387s
Dec 30 14:41:21.235: INFO: Pod "pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.190201923s
Dec 30 14:41:23.241: INFO: Pod "pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.195836312s
STEP: Saw pod success
Dec 30 14:41:23.241: INFO: Pod "pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e" satisfied condition "success or failure"
Dec 30 14:41:23.244: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e container projected-secret-volume-test: 
STEP: delete the pod
Dec 30 14:41:23.307: INFO: Waiting for pod pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e to disappear
Dec 30 14:41:23.429: INFO: Pod pod-projected-secrets-1f551813-6b72-48ee-83f6-f7f61b21de9e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:41:23.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-970" for this suite.
Dec 30 14:41:29.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:41:30.082: INFO: namespace projected-970 deletion completed in 6.642562841s

• [SLOW TEST:23.219 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:41:30.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 30 14:41:31.312: INFO: Waiting up to 5m0s for pod "pod-fe487b2e-afe2-4954-9195-957f3576289e" in namespace "emptydir-2129" to be "success or failure"
Dec 30 14:41:31.325: INFO: Pod "pod-fe487b2e-afe2-4954-9195-957f3576289e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.808043ms
Dec 30 14:41:33.396: INFO: Pod "pod-fe487b2e-afe2-4954-9195-957f3576289e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083479548s
Dec 30 14:41:35.406: INFO: Pod "pod-fe487b2e-afe2-4954-9195-957f3576289e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093167047s
Dec 30 14:41:37.417: INFO: Pod "pod-fe487b2e-afe2-4954-9195-957f3576289e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104768958s
Dec 30 14:41:39.517: INFO: Pod "pod-fe487b2e-afe2-4954-9195-957f3576289e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.204041025s
Dec 30 14:41:41.526: INFO: Pod "pod-fe487b2e-afe2-4954-9195-957f3576289e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.213175756s
Dec 30 14:41:43.539: INFO: Pod "pod-fe487b2e-afe2-4954-9195-957f3576289e": Phase="Running", Reason="", readiness=true. Elapsed: 12.226170431s
Dec 30 14:41:45.546: INFO: Pod "pod-fe487b2e-afe2-4954-9195-957f3576289e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.233298746s
STEP: Saw pod success
Dec 30 14:41:45.546: INFO: Pod "pod-fe487b2e-afe2-4954-9195-957f3576289e" satisfied condition "success or failure"
Dec 30 14:41:45.549: INFO: Trying to get logs from node iruya-node pod pod-fe487b2e-afe2-4954-9195-957f3576289e container test-container: 
STEP: delete the pod
Dec 30 14:41:45.743: INFO: Waiting for pod pod-fe487b2e-afe2-4954-9195-957f3576289e to disappear
Dec 30 14:41:45.760: INFO: Pod pod-fe487b2e-afe2-4954-9195-957f3576289e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:41:45.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2129" for this suite.
Dec 30 14:41:51.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:41:52.064: INFO: namespace emptydir-2129 deletion completed in 6.281224023s

• [SLOW TEST:21.982 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:41:52.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 30 14:41:52.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-4582'
Dec 30 14:41:52.437: INFO: stderr: ""
Dec 30 14:41:52.437: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 30 14:42:07.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-4582 -o json'
Dec 30 14:42:07.711: INFO: stderr: ""
Dec 30 14:42:07.712: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-30T14:41:52Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-4582\",\n        \"resourceVersion\": \"18655814\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-4582/pods/e2e-test-nginx-pod\",\n        \"uid\": \"11bd2c21-1b37-406a-8544-b5a42fb07dc2\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-9xhfc\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-9xhfc\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-9xhfc\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-30T14:41:52Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-30T14:42:04Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-30T14:42:04Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-30T14:41:52Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://333c1a870ef8e4f15be091208bdb20f815d4c52f6068b549b4a6c9e4440a7b1f\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-30T14:42:03Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-30T14:41:52Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 30 14:42:07.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4582'
Dec 30 14:42:08.499: INFO: stderr: ""
Dec 30 14:42:08.499: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Dec 30 14:42:08.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4582'
Dec 30 14:42:19.896: INFO: stderr: ""
Dec 30 14:42:19.897: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:42:19.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4582" for this suite.
Dec 30 14:42:25.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:42:26.143: INFO: namespace kubectl-4582 deletion completed in 6.211949221s

• [SLOW TEST:34.079 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
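The test above drives `kubectl replace` against a live pod: it dumps the pod as JSON, rewrites the container image, and feeds the result back on stdin. A minimal sketch of the same flow done by hand (the sed-based edit is illustrative, not the test's own code):

# create a pod, then swap its image in place with replace -f -
kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl get pod e2e-test-nginx-pod -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -
# confirm the image change took effect
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'

The container image is one of the few pod fields that may be mutated in place, which is why `kubectl replace` succeeds here without deleting and recreating the pod.
------------------------------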
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:42:26.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 30 14:42:26.325: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0" in namespace "downward-api-4528" to be "success or failure"
Dec 30 14:42:26.351: INFO: Pod "downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0": Phase="Pending", Reason="", readiness=false. Elapsed: 25.487517ms
Dec 30 14:42:28.383: INFO: Pod "downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057904498s
Dec 30 14:42:30.438: INFO: Pod "downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112790248s
Dec 30 14:42:32.451: INFO: Pod "downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125748613s
Dec 30 14:42:34.463: INFO: Pod "downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137808449s
Dec 30 14:42:36.484: INFO: Pod "downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.158074095s
Dec 30 14:42:39.944: INFO: Pod "downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.618149879s
Dec 30 14:42:41.962: INFO: Pod "downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.63688584s
Dec 30 14:42:43.980: INFO: Pod "downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.65457887s
STEP: Saw pod success
Dec 30 14:42:43.980: INFO: Pod "downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0" satisfied condition "success or failure"
Dec 30 14:42:43.987: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0 container client-container: 
STEP: delete the pod
Dec 30 14:42:44.239: INFO: Waiting for pod downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0 to disappear
Dec 30 14:42:44.251: INFO: Pod downwardapi-volume-b112cd8f-01e0-4755-909b-f779981d80a0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:42:44.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4528" for this suite.
Dec 30 14:42:50.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:42:50.957: INFO: namespace downward-api-4528 deletion completed in 6.683502605s

• [SLOW TEST:24.813 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
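The pod in this test mounts a downwardAPI volume that projects the container's own memory request into a file, then prints that file so the test can check it in the logs. A minimal sketch of such a manifest (names and the 32Mi request are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi
EOF

With a 1Mi divisor, the file (and therefore the container log) reads 32 for a 32Mi request.
------------------------------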
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:42:50.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-c96e196c-eb0f-4e0d-a6ac-1d7a662e264c
STEP: Creating a pod to test consume secrets
Dec 30 14:42:51.211: INFO: Waiting up to 5m0s for pod "pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e" in namespace "secrets-8555" to be "success or failure"
Dec 30 14:42:51.219: INFO: Pod "pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.000704ms
Dec 30 14:42:53.231: INFO: Pod "pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019687088s
Dec 30 14:42:55.244: INFO: Pod "pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033315871s
Dec 30 14:42:57.253: INFO: Pod "pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04177437s
Dec 30 14:42:59.262: INFO: Pod "pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050960246s
Dec 30 14:43:01.276: INFO: Pod "pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.065539418s
Dec 30 14:43:03.287: INFO: Pod "pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.075961944s
Dec 30 14:43:05.298: INFO: Pod "pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.087276243s
Dec 30 14:43:07.306: INFO: Pod "pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.095282745s
STEP: Saw pod success
Dec 30 14:43:07.306: INFO: Pod "pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e" satisfied condition "success or failure"
Dec 30 14:43:07.312: INFO: Trying to get logs from node iruya-node pod pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e container secret-volume-test: 
STEP: delete the pod
Dec 30 14:43:07.389: INFO: Waiting for pod pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e to disappear
Dec 30 14:43:07.410: INFO: Pod pod-secrets-840b51a1-c087-464d-b0c2-bafbec86756e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:43:07.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8555" for this suite.
Dec 30 14:43:13.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:43:13.602: INFO: namespace secrets-8555 deletion completed in 6.185568338s

• [SLOW TEST:22.643 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
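"With mappings" refers to the items list in a secret volume, which renames secret keys to arbitrary file paths inside the mount. A minimal sketch (secret name, key, and target path are illustrative):

kubectl create secret generic test-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
      items:
      - key: data-1
        path: new-path-data-1
EOF

The key data-1 shows up in the container as new-path-data-1 rather than under its literal key name.
------------------------------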
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:43:13.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 30 14:43:13.914: INFO: Waiting up to 5m0s for pod "downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e" in namespace "downward-api-8156" to be "success or failure"
Dec 30 14:43:13.921: INFO: Pod "downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.783002ms
Dec 30 14:43:15.928: INFO: Pod "downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013973567s
Dec 30 14:43:17.941: INFO: Pod "downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026894177s
Dec 30 14:43:19.947: INFO: Pod "downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033560368s
Dec 30 14:43:21.961: INFO: Pod "downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047320049s
Dec 30 14:43:23.974: INFO: Pod "downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060497997s
Dec 30 14:43:25.992: INFO: Pod "downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.078782986s
Dec 30 14:43:28.005: INFO: Pod "downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.09121603s
Dec 30 14:43:30.015: INFO: Pod "downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.101681906s
STEP: Saw pod success
Dec 30 14:43:30.016: INFO: Pod "downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e" satisfied condition "success or failure"
Dec 30 14:43:30.021: INFO: Trying to get logs from node iruya-node pod downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e container dapi-container: 
STEP: delete the pod
Dec 30 14:43:30.103: INFO: Waiting for pod downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e to disappear
Dec 30 14:43:30.112: INFO: Pod downward-api-7dcc1586-969a-46da-ac8e-910ca1ef868e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:43:30.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8156" for this suite.
Dec 30 14:43:36.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:43:36.414: INFO: namespace downward-api-8156 deletion completed in 6.210849941s

• [SLOW TEST:22.812 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
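Here the downward API is consumed through environment variables rather than a volume; the field under test is metadata.uid. A minimal sketch (pod and variable names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
kubectl logs dapi-env-demo    # prints POD_UID=<the pod's UID>
------------------------------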
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:43:36.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 30 14:43:36.629: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e" in namespace "projected-5062" to be "success or failure"
Dec 30 14:43:36.704: INFO: Pod "downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e": Phase="Pending", Reason="", readiness=false. Elapsed: 75.095519ms
Dec 30 14:43:38.712: INFO: Pod "downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082493008s
Dec 30 14:43:40.719: INFO: Pod "downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090243114s
Dec 30 14:43:42.873: INFO: Pod "downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.243753184s
Dec 30 14:43:44.890: INFO: Pod "downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.260597164s
Dec 30 14:43:46.898: INFO: Pod "downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.269131795s
Dec 30 14:43:48.921: INFO: Pod "downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.292312453s
Dec 30 14:43:50.929: INFO: Pod "downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.299442718s
Dec 30 14:43:52.937: INFO: Pod "downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.308115449s
STEP: Saw pod success
Dec 30 14:43:52.937: INFO: Pod "downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e" satisfied condition "success or failure"
Dec 30 14:43:52.940: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e container client-container: 
STEP: delete the pod
Dec 30 14:43:53.002: INFO: Waiting for pod downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e to disappear
Dec 30 14:43:53.206: INFO: Pod downwardapi-volume-60409bc7-1ef5-46ca-8bac-efa4fed2f62e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:43:53.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5062" for this suite.
Dec 30 14:43:59.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:43:59.616: INFO: namespace projected-5062 deletion completed in 6.39612545s

• [SLOW TEST:23.200 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
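This variant delivers downward API data through a projected volume and pins an explicit per-item file mode, which the test then verifies on disk. A minimal sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400   # the per-item mode this kind of test asserts
EOF
------------------------------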
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:43:59.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 14:44:35.868: INFO: Container started at 2019-12-30 14:44:12 +0000 UTC, pod became ready at 2019-12-30 14:44:34 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:44:35.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2861" for this suite.
Dec 30 14:44:58.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:44:58.210: INFO: namespace container-probe-2861 deletion completed in 22.332609949s

• [SLOW TEST:58.592 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
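The single INFO line above carries the whole assertion: the container started at 14:44:12 but the pod only became Ready at 14:44:34, because the readiness probe's initial delay had not yet elapsed, and restartCount stayed at 0 throughout. A minimal sketch of a pod with such a probe (the 20-second delay is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo
spec:
  containers:
  - name: probe-test
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["true"]
      initialDelaySeconds: 20
      periodSeconds: 5
EOF

Even though the probe command always succeeds, the kubelet will not run it before the initial delay, so the pod cannot be Ready earlier than that.
------------------------------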
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:44:58.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 14:44:58.440: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.242575ms)
Dec 30 14:44:58.519: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 79.069084ms)
Dec 30 14:44:58.532: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.442477ms)
Dec 30 14:44:58.545: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.115946ms)
Dec 30 14:44:58.560: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.070213ms)
Dec 30 14:44:58.578: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.993119ms)
Dec 30 14:44:58.597: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.435077ms)
Dec 30 14:44:58.616: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.925649ms)
Dec 30 14:44:58.649: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 32.57937ms)
Dec 30 14:44:58.665: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.606748ms)
Dec 30 14:44:58.674: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.474017ms)
Dec 30 14:44:58.685: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.805466ms)
Dec 30 14:44:58.691: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.428248ms)
Dec 30 14:44:58.696: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.059011ms)
Dec 30 14:44:58.701: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.173288ms)
Dec 30 14:44:58.706: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.040752ms)
Dec 30 14:44:58.714: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.975131ms)
Dec 30 14:44:58.747: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 32.827884ms)
Dec 30 14:44:58.760: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.016043ms)
Dec 30 14:44:58.780: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.486429ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:44:58.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9183" for this suite.
Dec 30 14:45:04.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:45:05.002: INFO: namespace proxy-9183 deletion completed in 6.216032148s

• [SLOW TEST:6.790 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
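Each of the twenty numbered requests above hits the node's proxy subresource, which relays to the kubelet's /logs endpoint (hence the directory listing beginning with alternatives.log). The same endpoint can be queried directly:

# through the apiserver, using kubectl's raw mode
kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/
# or through a local kubectl proxy
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/nodes/iruya-node/proxy/logs/
------------------------------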
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:45:05.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 14:45:05.198: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 30 14:45:10.212: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 30 14:45:20.231: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 30 14:45:38.403: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5504,SelfLink:/apis/apps/v1/namespaces/deployment-5504/deployments/test-cleanup-deployment,UID:f7431dde-dcbb-49a1-acc8-77ac18a9eea6,ResourceVersion:18656284,Generation:1,CreationTimestamp:2019-12-30 14:45:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-30 14:45:20 +0000 UTC 2019-12-30 14:45:20 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-30 14:45:36 +0000 UTC 2019-12-30 14:45:20 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 30 14:45:38.406: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5504,SelfLink:/apis/apps/v1/namespaces/deployment-5504/replicasets/test-cleanup-deployment-55bbcbc84c,UID:ceb33f61-6b97-4566-8bf6-c4e2efbe8547,ResourceVersion:18656273,Generation:1,CreationTimestamp:2019-12-30 14:45:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment f7431dde-dcbb-49a1-acc8-77ac18a9eea6 0xc00097ef37 0xc00097ef38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 30 14:45:38.410: INFO: Pod "test-cleanup-deployment-55bbcbc84c-ltpb6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-ltpb6,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5504,SelfLink:/api/v1/namespaces/deployment-5504/pods/test-cleanup-deployment-55bbcbc84c-ltpb6,UID:53cd2d40-e228-4341-8a8b-9f63bcbd586d,ResourceVersion:18656272,Generation:0,CreationTimestamp:2019-12-30 14:45:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c ceb33f61-6b97-4566-8bf6-c4e2efbe8547 0xc000a46e27 0xc000a46e28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rngph {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rngph,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-rngph true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a46eb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a46ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:45:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:45:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:45:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:45:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-30 14:45:20 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-30 14:45:34 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://9823bcdbbd1a14f26e2c983bce321b78b03f68fd939a51d22532c7be009eddf0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:45:38.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5504" for this suite.
Dec 30 14:45:44.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:45:44.631: INFO: namespace deployment-5504 deletion completed in 6.215592797s

• [SLOW TEST:39.629 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
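The dumped Deployment above shows RevisionHistoryLimit:*0, which is the setting under test: with a history limit of 0, a superseded ReplicaSet is deleted outright once the new one has fully progressed, instead of being kept around scaled to zero. A minimal sketch (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  revisionHistoryLimit: 0   # keep no superseded ReplicaSets
  replicas: 1
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF

After any subsequent template change (e.g. a new image), `kubectl get rs` should show only the current ReplicaSet.
------------------------------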
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:45:44.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:47:17.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1199" for this suite.
Dec 30 14:47:23.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:47:23.965: INFO: namespace container-runtime-1199 deletion completed in 6.232677514s

• [SLOW TEST:99.333 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
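The three container names encode the restart policies being exercised: rpa, rpof, and rpn plausibly stand for restartPolicy Always, OnFailure, and Never, and for each the test checks RestartCount, Phase, the Ready condition, and State against expectations. A minimal sketch of one cell of that matrix:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never        # compare with Always and OnFailure
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 0"]
EOF
kubectl get pod terminate-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'
# with restartPolicy Never and exit 0, expect: Succeeded 0
------------------------------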
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:47:23.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Dec 30 14:47:24.200: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:47:24.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2163" for this suite.
Dec 30 14:47:30.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:47:30.647: INFO: namespace kubectl-2163 deletion completed in 6.301581414s

• [SLOW TEST:6.682 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
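Passing --port 0 (here, -p 0) tells the proxy to bind an OS-assigned ephemeral port; the test then reads the chosen port from the startup message and curls /api/ through it. By hand (the port is a placeholder):

kubectl proxy --port=0 --disable-filter=true &
# stdout: Starting to serve on 127.0.0.1:<some-port>
curl http://127.0.0.1:<some-port>/api/

(--disable-filter turns off the proxy's request filter, as in the test invocation above; kubectl itself warns that this is unsafe outside test environments.)
------------------------------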
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:47:30.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-978
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 30 14:47:30.985: INFO: Found 0 stateful pods, waiting for 3
Dec 30 14:47:40.995: INFO: Found 1 stateful pods, waiting for 3
Dec 30 14:47:50.994: INFO: Found 2 stateful pods, waiting for 3
Dec 30 14:48:00.997: INFO: Found 2 stateful pods, waiting for 3
Dec 30 14:48:10.993: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:48:10.993: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:48:10.993: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 30 14:48:21.030: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:48:21.030: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:48:21.030: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 30 14:48:21.077: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 30 14:48:31.248: INFO: Updating stateful set ss2
Dec 30 14:48:31.267: INFO: Waiting for Pod statefulset-978/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:48:41.286: INFO: Waiting for Pod statefulset-978/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 30 14:48:51.661: INFO: Found 2 stateful pods, waiting for 3
Dec 30 14:49:02.367: INFO: Found 2 stateful pods, waiting for 3
Dec 30 14:49:11.679: INFO: Found 2 stateful pods, waiting for 3
Dec 30 14:49:21.674: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:49:21.674: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:49:21.674: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 30 14:49:31.677: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:49:31.678: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:49:31.678: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 30 14:49:31.749: INFO: Updating stateful set ss2
Dec 30 14:49:32.068: INFO: Waiting for Pod statefulset-978/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:49:42.185: INFO: Waiting for Pod statefulset-978/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:49:52.108: INFO: Updating stateful set ss2
Dec 30 14:49:52.255: INFO: Waiting for StatefulSet statefulset-978/ss2 to complete update
Dec 30 14:49:52.255: INFO: Waiting for Pod statefulset-978/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:50:02.601: INFO: Waiting for StatefulSet statefulset-978/ss2 to complete update
Dec 30 14:50:02.601: INFO: Waiting for Pod statefulset-978/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:50:12.268: INFO: Waiting for StatefulSet statefulset-978/ss2 to complete update
Dec 30 14:50:12.268: INFO: Waiting for Pod statefulset-978/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 30 14:50:22.286: INFO: Waiting for StatefulSet statefulset-978/ss2 to complete update
Dec 30 14:50:32.323: INFO: Waiting for StatefulSet statefulset-978/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 30 14:50:42.271: INFO: Deleting all statefulset in ns statefulset-978
Dec 30 14:50:42.276: INFO: Scaling statefulset ss2 to 0
Dec 30 14:51:32.307: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 14:51:32.312: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:51:33.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-978" for this suite.
Dec 30 14:51:41.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:51:42.101: INFO: namespace statefulset-978 deletion completed in 8.257942931s

• [SLOW TEST:251.454 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
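Both the canary and the phased roll-out above hinge on the RollingUpdate partition in the StatefulSet update strategy: pods with an ordinal >= partition move to the new revision, lower ordinals keep the old one, and a partition larger than the replica count updates nothing. A minimal sketch of the same sequence on the 3-replica set ss2 (assuming its container is named nginx):

# canary: only ordinal 2 (ss2-2) gets the new image
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
# phased roll-out: lower the partition step by step until every pod is updated
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":1}}}}'
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'

Deleted pods are recreated at whichever revision their ordinal's side of the partition dictates, which is the "Restoring Pods to the correct revision when they are deleted" step in the log.
------------------------------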
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:51:42.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Dec 30 14:51:42.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1847'
Dec 30 14:51:45.431: INFO: stderr: ""
Dec 30 14:51:45.431: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 30 14:51:45.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1847'
Dec 30 14:51:45.609: INFO: stderr: ""
Dec 30 14:51:45.609: INFO: stdout: "update-demo-nautilus-wbr8j "
STEP: Replicas for name=update-demo: expected=2 actual=1
Dec 30 14:51:50.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1847'
Dec 30 14:51:50.716: INFO: stderr: ""
Dec 30 14:51:50.716: INFO: stdout: "update-demo-nautilus-lc826 update-demo-nautilus-wbr8j "
Dec 30 14:51:50.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lc826 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1847'
Dec 30 14:51:50.879: INFO: stderr: ""
Dec 30 14:51:50.879: INFO: stdout: ""
Dec 30 14:51:50.879: INFO: update-demo-nautilus-lc826 is created but not running
Dec 30 14:51:55.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1847'
Dec 30 14:51:56.683: INFO: stderr: ""
Dec 30 14:51:56.683: INFO: stdout: "update-demo-nautilus-lc826 update-demo-nautilus-wbr8j "
Dec 30 14:51:56.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lc826 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1847'
Dec 30 14:51:57.431: INFO: stderr: ""
Dec 30 14:51:57.431: INFO: stdout: ""
Dec 30 14:51:57.431: INFO: update-demo-nautilus-lc826 is created but not running
Dec 30 14:52:02.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1847'
Dec 30 14:52:02.636: INFO: stderr: ""
Dec 30 14:52:02.637: INFO: stdout: "update-demo-nautilus-lc826 update-demo-nautilus-wbr8j "
Dec 30 14:52:02.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lc826 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1847'
Dec 30 14:52:02.732: INFO: stderr: ""
Dec 30 14:52:02.733: INFO: stdout: "true"
Dec 30 14:52:02.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lc826 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1847'
Dec 30 14:52:02.824: INFO: stderr: ""
Dec 30 14:52:02.824: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 14:52:02.824: INFO: validating pod update-demo-nautilus-lc826
Dec 30 14:52:02.832: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 14:52:02.832: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 30 14:52:02.832: INFO: update-demo-nautilus-lc826 is verified up and running
Dec 30 14:52:02.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wbr8j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1847'
Dec 30 14:52:02.939: INFO: stderr: ""
Dec 30 14:52:02.939: INFO: stdout: "true"
Dec 30 14:52:02.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wbr8j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1847'
Dec 30 14:52:03.050: INFO: stderr: ""
Dec 30 14:52:03.050: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 14:52:03.050: INFO: validating pod update-demo-nautilus-wbr8j
Dec 30 14:52:03.065: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 14:52:03.065: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 30 14:52:03.065: INFO: update-demo-nautilus-wbr8j is verified up and running
STEP: rolling-update to new replication controller
Dec 30 14:52:03.069: INFO: scanned /root for discovery docs: 
Dec 30 14:52:03.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1847'
Dec 30 14:52:48.324: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 30 14:52:48.324: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 30 14:52:48.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1847'
Dec 30 14:52:48.536: INFO: stderr: ""
Dec 30 14:52:48.536: INFO: stdout: "update-demo-kitten-v4jbs update-demo-kitten-xgqbf "
Dec 30 14:52:48.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-v4jbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1847'
Dec 30 14:52:48.622: INFO: stderr: ""
Dec 30 14:52:48.623: INFO: stdout: "true"
Dec 30 14:52:48.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-v4jbs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1847'
Dec 30 14:52:48.777: INFO: stderr: ""
Dec 30 14:52:48.778: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 30 14:52:48.778: INFO: validating pod update-demo-kitten-v4jbs
Dec 30 14:52:48.805: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 30 14:52:48.805: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 30 14:52:48.805: INFO: update-demo-kitten-v4jbs is verified up and running
Dec 30 14:52:48.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xgqbf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1847'
Dec 30 14:52:48.945: INFO: stderr: ""
Dec 30 14:52:48.945: INFO: stdout: "true"
Dec 30 14:52:48.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xgqbf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1847'
Dec 30 14:52:49.063: INFO: stderr: ""
Dec 30 14:52:49.063: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 30 14:52:49.063: INFO: validating pod update-demo-kitten-xgqbf
Dec 30 14:52:49.112: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 30 14:52:49.112: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 30 14:52:49.112: INFO: update-demo-kitten-xgqbf is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:52:49.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1847" for this suite.
Dec 30 14:53:19.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:53:19.340: INFO: namespace kubectl-1847 deletion completed in 30.221932545s

• [SLOW TEST:97.238 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
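`kubectl rolling-update` is the old client-side mechanism for ReplicationControllers: it creates the new RC (kitten), scales it up while scaling the old one (nautilus) down within the surge limits shown in stdout, then renames the new RC to the old name. It was already deprecated here and has since been removed from kubectl; a rough modern equivalent for a Deployment-managed workload (file and resource names are illustrative):

# old, as run by the test (RC manifest supplied via -f, here from a file):
kubectl rolling-update update-demo-nautilus --update-period=1s -f kitten-rc.yaml
# new: let the Deployment controller do the server-side rolling update
kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo
------------------------------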
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:53:19.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 30 14:53:19.612: INFO: Waiting up to 5m0s for pod "pod-ef58af05-14d0-48fe-8b50-3888d26ac733" in namespace "emptydir-3327" to be "success or failure"
Dec 30 14:53:19.628: INFO: Pod "pod-ef58af05-14d0-48fe-8b50-3888d26ac733": Phase="Pending", Reason="", readiness=false. Elapsed: 15.519864ms
Dec 30 14:53:21.634: INFO: Pod "pod-ef58af05-14d0-48fe-8b50-3888d26ac733": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021784907s
Dec 30 14:53:23.652: INFO: Pod "pod-ef58af05-14d0-48fe-8b50-3888d26ac733": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039114256s
Dec 30 14:53:25.658: INFO: Pod "pod-ef58af05-14d0-48fe-8b50-3888d26ac733": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04563235s
Dec 30 14:53:27.706: INFO: Pod "pod-ef58af05-14d0-48fe-8b50-3888d26ac733": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093560807s
Dec 30 14:53:29.727: INFO: Pod "pod-ef58af05-14d0-48fe-8b50-3888d26ac733": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114580404s
Dec 30 14:53:32.196: INFO: Pod "pod-ef58af05-14d0-48fe-8b50-3888d26ac733": Phase="Pending", Reason="", readiness=false. Elapsed: 12.583569107s
Dec 30 14:53:34.202: INFO: Pod "pod-ef58af05-14d0-48fe-8b50-3888d26ac733": Phase="Pending", Reason="", readiness=false. Elapsed: 14.589798316s
Dec 30 14:53:36.209: INFO: Pod "pod-ef58af05-14d0-48fe-8b50-3888d26ac733": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.596679181s
STEP: Saw pod success
Dec 30 14:53:36.209: INFO: Pod "pod-ef58af05-14d0-48fe-8b50-3888d26ac733" satisfied condition "success or failure"
Dec 30 14:53:36.212: INFO: Trying to get logs from node iruya-node pod pod-ef58af05-14d0-48fe-8b50-3888d26ac733 container test-container: 
STEP: delete the pod
Dec 30 14:53:36.361: INFO: Waiting for pod pod-ef58af05-14d0-48fe-8b50-3888d26ac733 to disappear
Dec 30 14:53:36.389: INFO: Pod pod-ef58af05-14d0-48fe-8b50-3888d26ac733 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:53:36.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3327" for this suite.
Dec 30 14:53:42.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 14:53:42.753: INFO: namespace emptydir-3327 deletion completed in 6.355225706s

• [SLOW TEST:23.413 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
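The repeated "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above are a phase poll: check the pod every couple of seconds until it reaches Succeeded, failing fast if it reaches Failed. A sketch of such a wait using apimachinery's wait helpers (waitForPodSuccess is a hypothetical name, not the framework's own helper):

// Package podwait: poll a pod until it terminates successfully.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccess polls every 2s, for up to 5m, until the pod's phase is
// Succeeded; a Failed phase aborts the wait immediately.
func waitForPodSuccess(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // condition "success or failure" satisfied
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		}
		return false, nil // still Pending or Running; keep polling
	})
}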
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 14:53:42.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8835
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-8835
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8835
Dec 30 14:53:43.019: INFO: Found 0 stateful pods, waiting for 1
Dec 30 14:53:53.032: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Dec 30 14:54:03.027: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 30 14:54:03.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 14:54:04.652: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 30 14:54:04.652: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 14:54:04.652: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 14:54:04.660: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 30 14:54:14.676: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
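Moving index.html out of the web root is how the test drives a stateful pod to Ready=false without restarting it: the pod stops serving the file its readiness check depends on, so readiness fails while the container keeps running (moving the file back later restores Ready=true). Assuming that check is an HTTP readiness probe against /index.html on port 80 — the probe itself is not shown in this log — it would look roughly like:

// Package probes: a readiness probe that fails once index.html disappears.
package probes

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// indexHTMLReadiness GETs /index.html on port 80; field names follow current
// client-go (older releases used Handler instead of ProbeHandler).
func indexHTMLReadiness() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/index.html",
				Port: intstr.FromInt(80),
			},
		},
		PeriodSeconds:    1,
		SuccessThreshold: 1,
		FailureThreshold: 1,
	}
}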
Dec 30 14:54:14.676: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 14:54:14.763: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 30 14:54:14.763: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  }]
Dec 30 14:54:14.763: INFO: 
Dec 30 14:54:14.763: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 30 14:54:16.061: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.933816177s
Dec 30 14:54:17.499: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.636190253s
Dec 30 14:54:19.445: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.197998773s
Dec 30 14:54:20.469: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.251532716s
Dec 30 14:54:21.545: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.227557625s
Dec 30 14:54:23.282: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.150981306s
Dec 30 14:54:25.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 414.715533ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8835
Dec 30 14:54:26.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 14:54:27.567: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 30 14:54:27.567: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 14:54:27.567: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 14:54:27.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 14:54:28.271: INFO: rc: 1
Dec 30 14:54:28.271: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001e83b60 exit status 1   true [0xc002a4c150 0xc002a4c180 0xc002a4c1e0] [0xc002a4c150 0xc002a4c180 0xc002a4c1e0] [0xc002a4c178 0xc002a4c1c0] [0xba6c50 0xba6c50] 0xc00208c000 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Dec 30 14:54:38.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 14:54:38.651: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 30 14:54:38.652: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 14:54:38.652: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 14:54:38.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 14:54:39.189: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 30 14:54:39.189: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 14:54:39.189: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 14:54:39.199: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:54:39.199: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 14:54:39.199: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 30 14:54:39.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 14:54:39.672: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 30 14:54:39.673: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 14:54:39.673: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 14:54:39.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 14:54:40.084: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 30 14:54:40.084: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 14:54:40.084: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 14:54:40.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 14:54:40.977: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 30 14:54:40.977: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 14:54:40.977: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 14:54:40.977: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 14:54:40.992: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 14:54:40.993: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 14:54:40.993: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 14:54:41.062: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 30 14:54:41.062: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  }]
Dec 30 14:54:41.062: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:41.062: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:41.062: INFO: 
Dec 30 14:54:41.062: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 30 14:54:43.690: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 30 14:54:43.690: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  }]
Dec 30 14:54:43.691: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:43.691: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:43.691: INFO: 
Dec 30 14:54:43.691: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 30 14:54:44.838: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 30 14:54:44.838: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  }]
Dec 30 14:54:44.838: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:44.838: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:44.838: INFO: 
Dec 30 14:54:44.838: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 30 14:54:47.987: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 30 14:54:47.987: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  }]
Dec 30 14:54:47.987: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:47.987: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:47.987: INFO: 
Dec 30 14:54:47.987: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 30 14:54:48.995: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 30 14:54:48.995: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  }]
Dec 30 14:54:48.996: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:48.996: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:48.996: INFO: 
Dec 30 14:54:48.996: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 30 14:54:50.004: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 30 14:54:50.004: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  }]
Dec 30 14:54:50.004: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:50.005: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:50.005: INFO: 
Dec 30 14:54:50.005: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 30 14:54:51.013: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 30 14:54:51.013: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:53:43 +0000 UTC  }]
Dec 30 14:54:51.013: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:51.013: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 14:54:14 +0000 UTC  }]
Dec 30 14:54:51.013: INFO: 
Dec 30 14:54:51.013: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8835
Dec 30 14:54:52.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 14:54:52.387: INFO: rc: 1
Dec 30 14:54:52.388: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002b621e0 exit status 1   true [0xc0016d6900 0xc0016d6948 0xc0016d6980] [0xc0016d6900 0xc0016d6948 0xc0016d6980] [0xc0016d6930 0xc0016d6970] [0xba6c50 0xba6c50] 0xc002b47bc0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Dec 30 14:55:02.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 14:55:02.513: INFO: rc: 1
Dec 30 14:55:02.514: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc003395fb0 exit status 1   true [0xc000011468 0xc000011500 0xc000011550] [0xc000011468 0xc000011500 0xc000011550] [0xc0000114e0 0xc000011540] [0xba6c50 0xba6c50] 0xc002754f60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 30 14:55:12 - 14:59:49: INFO: RunHostCmd retried every 10s with the same kubectl exec command against ss-0; every attempt returned rc: 1 with stderr 'Error from server (NotFound): pods "ss-0" not found'.
Dec 30 14:59:59.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8835 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 14:59:59.601: INFO: rc: 1
Dec 30 14:59:59.602: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Dec 30 14:59:59.602: INFO: Scaling statefulset ss to 0
Dec 30 14:59:59.632: INFO: Waiting for statefulset status.replicas updated to 0
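"Scaling statefulset ss to 0" can be expressed through the scale subresource; a sketch of one way to do it (not necessarily the helper the framework itself uses):

// Package ssscale: set a StatefulSet's replica count via the scale subresource.
package ssscale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleStatefulSetToZero reads the current Scale object and writes it back
// with zero replicas; the controller then deletes the remaining pods.
func scaleStatefulSetToZero(c kubernetes.Interface, ns, name string) error {
	s, err := c.AppsV1().StatefulSets(ns).GetScale(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	s.Spec.Replicas = 0
	_, err = c.AppsV1().StatefulSets(ns).UpdateScale(context.TODO(), name, s, metav1.UpdateOptions{})
	return err
}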
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 30 14:59:59.680: INFO: Deleting all statefulset in ns statefulset-8835
Dec 30 14:59:59.688: INFO: Scaling statefulset ss to 0
Dec 30 14:59:59.700: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 14:59:59.704: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 14:59:59.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8835" for this suite.
Dec 30 15:00:05.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:00:05.973: INFO: namespace statefulset-8835 deletion completed in 6.240768528s

• [SLOW TEST:383.216 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
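"Burst scaling" is the Parallel pod-management mode of a StatefulSet: pods are created and deleted all at once instead of the default ordered, wait-for-Ready sequence, which is why the unready pods above never blocked scale-up or scale-down. A sketch of the relevant spec fields (labels taken from the describe output later in this log; the pod template is elided):

// Package sets: skeleton of a StatefulSet that scales in bursts.
package sets

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func burstStatefulSet(replicas int32) *appsv1.StatefulSet {
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // headless service created in BeforeEach
			// Parallel turns off ordered startup/termination: this is what
			// lets scaling proceed even while some pods are unhealthy.
			PodManagementPolicy: appsv1.ParallelPodManagement,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"baz": "blah", "foo": "bar"},
			},
			// Template omitted; it must carry the same labels as the selector.
		},
	}
}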
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:00:05.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Dec 30 15:00:06.089: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix401161252/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:00:06.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4480" for this suite.
Dec 30 15:00:12.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:00:12.383: INFO: namespace kubectl-4480 deletion completed in 6.143456323s

• [SLOW TEST:6.410 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
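"Retrieving proxy /api/ output" amounts to an HTTP GET over the unix socket the proxy is listening on. A minimal Go client for that (the socket path is the one from the log; the hostname in the URL is ignored because the transport dials the socket directly):

// Query a kubectl proxy that listens on a unix socket instead of TCP.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	socket := "/tmp/kubectl-proxy-unix401161252/test"
	client := &http.Client{
		Transport: &http.Transport{
			// Dial the unix socket regardless of the host in the URL.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", socket)
			},
		},
	}
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s\n", body) // the API server's /api/ discovery document
}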
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:00:12.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-939fc75f-6d7b-408b-89dc-0f8252c7ee98
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-939fc75f-6d7b-408b-89dc-0f8252c7ee98
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:00:24.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9130" for this suite.
Dec 30 15:01:02.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:01:02.885: INFO: namespace projected-9130 deletion completed in 38.163083112s

• [SLOW TEST:50.502 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
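The volume under test here is a projected volume wrapping a ConfigMap: the kubelet rewrites the mounted files in place when the ConfigMap changes, which is the update the pod above waits to observe. A sketch of such a volume source in Go types:

// Package volumes: a projected volume that mirrors a ConfigMap's keys.
package volumes

import corev1 "k8s.io/api/core/v1"

func projectedConfigMapVolume(name, configMapName string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: configMapName,
						},
					},
				}},
			},
		},
	}
}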
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:01:02.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9451
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9451
STEP: Creating statefulset with conflicting port in namespace statefulset-9451
STEP: Waiting until pod test-pod will start running in namespace statefulset-9451
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9451
Dec 30 15:01:15.108: INFO: Observed stateful pod in namespace: statefulset-9451, name: ss-0, uid: 17ddab40-7c48-4750-9536-9fc390ebaed8, status phase: Pending. Waiting for statefulset controller to delete.
Dec 30 15:06:15.108: INFO: Pod ss-0 expected to be re-created at least once
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 30 15:06:15.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-9451'
Dec 30 15:06:18.689: INFO: stderr: ""
Dec 30 15:06:18.689: INFO: 
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-9451
Priority:       0
Node:           iruya-node/
Labels:         baz=blah
                controller-revision-hash=ss-6f98bdb9c4
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    
Status:         Pending
IP:             
Controlled By:  StatefulSet/ss
Containers:
  nginx:
    Image:        docker.io/library/nginx:1.14-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qz2bk (ro)
Volumes:
  default-token-qz2bk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qz2bk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age    From                 Message
  ----     ------            ----   ----                 -------
  Warning  PodFitsHostPorts  5m12s  kubelet, iruya-node  Predicate PodFitsHostPorts failed

Dec 30 15:06:18.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-9451 --tail=100'
Dec 30 15:06:18.866: INFO: rc: 1
Dec 30 15:06:18.866: INFO: 
Last 100 log lines of ss-0:

Dec 30 15:06:18.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-9451'
Dec 30 15:06:18.979: INFO: stderr: ""
Dec 30 15:06:18.979: INFO: 
Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-9451
Priority:     0
Node:         iruya-node/10.96.3.65
Start Time:   Mon, 30 Dec 2019 15:01:03 +0000
Labels:       
Annotations:  
Status:       Running
IP:           10.44.0.1
Containers:
  nginx:
    Container ID:   docker://f257e3d3f03315470505c685576e07fa49541fa31a8d3f231ea6273e2d3fe4d5
    Image:          docker.io/library/nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Mon, 30 Dec 2019 15:01:12 +0000
    Ready:          True
    Restart Count:  0
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qz2bk (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-qz2bk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qz2bk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age    From                 Message
  ----    ------   ----   ----                 -------
  Normal  Pulled   5m10s  kubelet, iruya-node  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
  Normal  Created  5m7s   kubelet, iruya-node  Created container nginx
  Normal  Started  5m6s   kubelet, iruya-node  Started container nginx

Dec 30 15:06:18.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-9451 --tail=100'
Dec 30 15:06:19.122: INFO: stderr: ""
Dec 30 15:06:19.122: INFO: stdout: ""
Dec 30 15:06:19.122: INFO: 
Last 100 log lines of test-pod:

Dec 30 15:06:19.122: INFO: Deleting all statefulset in ns statefulset-9451
Dec 30 15:06:19.126: INFO: Scaling statefulset ss to 0
Dec 30 15:06:29.190: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 15:06:29.201: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Collecting events from namespace "statefulset-9451".
STEP: Found 13 events.
Dec 30 15:06:29.237: INFO: At 2019-12-30 15:01:03 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Dec 30 15:06:29.237: INFO: At 2019-12-30 15:01:03 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Dec 30 15:06:29.237: INFO: At 2019-12-30 15:01:03 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-9451/ss is recreating failed Pod ss-0
Dec 30 15:06:29.237: INFO: At 2019-12-30 15:01:03 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 30 15:06:29.237: INFO: At 2019-12-30 15:01:03 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 30 15:06:29.237: INFO: At 2019-12-30 15:01:04 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 30 15:06:29.237: INFO: At 2019-12-30 15:01:05 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 30 15:06:29.237: INFO: At 2019-12-30 15:01:06 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 30 15:06:29.237: INFO: At 2019-12-30 15:01:06 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 30 15:06:29.237: INFO: At 2019-12-30 15:01:06 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 30 15:06:29.237: INFO: At 2019-12-30 15:01:08 +0000 UTC - event for test-pod: {kubelet iruya-node} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
Dec 30 15:06:29.237: INFO: At 2019-12-30 15:01:11 +0000 UTC - event for test-pod: {kubelet iruya-node} Created: Created container nginx
Dec 30 15:06:29.237: INFO: At 2019-12-30 15:01:12 +0000 UTC - event for test-pod: {kubelet iruya-node} Started: Started container nginx
Dec 30 15:06:29.244: INFO: POD       NODE        PHASE    GRACE  CONDITIONS
Dec 30 15:06:29.244: INFO: test-pod  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 15:01:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 15:01:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 15:01:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 15:01:03 +0000 UTC  }]
Dec 30 15:06:29.244: INFO: 
Dec 30 15:06:29.254: INFO: 
Logging node info for node iruya-node
Dec 30 15:06:29.259: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-node,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-node,UID:b2aa273d-23ea-4c86-9e2f-72569e3392bd,ResourceVersion:18658702,Generation:0,CreationTimestamp:2019-08-04 09:01:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-node,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-10-12 11:56:49 +0000 UTC 2019-10-12 11:56:49 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2019-12-30 15:06:24 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-12-30 15:06:24 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-12-30 15:06:24 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-12-30 15:06:24 +0000 UTC 2019-08-04 09:02:19 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.3.65} {Hostname iruya-node}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f573dcf04d6f4a87856a35d266a2fa7a,SystemUUID:F573DCF0-4D6F-4A87-856A-35D266A2FA7A,BootID:8baf4beb-8391-43e6-b17b-b1e184b5370a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 11443478} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} 
{[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest] 5496756} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e busybox:latest] 1219782} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Dec 30 15:06:29.259: INFO: 
Logging kubelet events for node iruya-node
Dec 30 15:06:29.265: INFO: 
Logging pods the kubelet thinks are on node iruya-node
Dec 30 15:06:29.287: INFO: weave-net-rlp57 started at 2019-10-12 11:56:39 +0000 UTC (0+2 container statuses recorded)
Dec 30 15:06:29.287: INFO: 	Container weave ready: true, restart count 0
Dec 30 15:06:29.287: INFO: 	Container weave-npc ready: true, restart count 0
Dec 30 15:06:29.287: INFO: test-pod started at 2019-12-30 15:01:03 +0000 UTC (0+1 container statuses recorded)
Dec 30 15:06:29.287: INFO: 	Container nginx ready: true, restart count 0
Dec 30 15:06:29.287: INFO: kube-proxy-976zl started at 2019-08-04 09:01:39 +0000 UTC (0+1 container statuses recorded)
Dec 30 15:06:29.287: INFO: 	Container kube-proxy ready: true, restart count 0
W1230 15:06:29.294307       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 30 15:06:29.372: INFO: 
Latency metrics for node iruya-node
Dec 30 15:06:29.372: INFO: 
Logging node info for node iruya-server-sfge57q7djm7
Dec 30 15:06:29.378: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-server-sfge57q7djm7,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-server-sfge57q7djm7,UID:67f2a658-4743-4118-95e7-463a23bcd212,ResourceVersion:18658640,Generation:0,CreationTimestamp:2019-08-04 08:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-server-sfge57q7djm7,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:53:00 +0000 UTC 2019-08-04 08:53:00 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2019-12-30 15:05:45 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-12-30 15:05:45 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-12-30 15:05:45 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-12-30 15:05:45 +0000 UTC 2019-08-04 08:53:09 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.2.216} {Hostname iruya-server-sfge57q7djm7}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78bacef342604a51913cae58dd95802b,SystemUUID:78BACEF3-4260-4A51-913C-AE58DD95802B,BootID:db143d3a-01b3-4483-b23e-e72adff2b28d,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:304a1c38707834062ee87df62ef329d52a8b9a3e70459565d0a396479073f54c k8s.gcr.io/kube-apiserver:v1.15.1] 206827454} {[k8s.gcr.io/kube-controller-manager@sha256:9abae95e428e228fe8f6d1630d55e79e018037460f3731312805c0f37471e4bf k8s.gcr.io/kube-controller-manager:v1.15.1] 158722622} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[k8s.gcr.io/kube-scheduler@sha256:d0ee18a9593013fbc44b1920e4930f29b664b59a3958749763cb33b57e0e8956 k8s.gcr.io/kube-scheduler:v1.15.1] 81107582} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} 
{[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Dec 30 15:06:29.378: INFO: 
Logging kubelet events for node iruya-server-sfge57q7djm7
Dec 30 15:06:29.382: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7
Dec 30 15:06:29.397: INFO: etcd-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:38 +0000 UTC (0+1 container statuses recorded)
Dec 30 15:06:29.397: INFO: 	Container etcd ready: true, restart count 0
Dec 30 15:06:29.397: INFO: weave-net-bzl4d started at 2019-08-04 08:52:37 +0000 UTC (0+2 container statuses recorded)
Dec 30 15:06:29.397: INFO: 	Container weave ready: true, restart count 0
Dec 30 15:06:29.397: INFO: 	Container weave-npc ready: true, restart count 0
Dec 30 15:06:29.397: INFO: coredns-5c98db65d4-bm4gs started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Dec 30 15:06:29.397: INFO: 	Container coredns ready: true, restart count 0
Dec 30 15:06:29.397: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:42 +0000 UTC (0+1 container statuses recorded)
Dec 30 15:06:29.397: INFO: 	Container kube-controller-manager ready: true, restart count 14
Dec 30 15:06:29.397: INFO: kube-proxy-58v95 started at 2019-08-04 08:52:37 +0000 UTC (0+1 container statuses recorded)
Dec 30 15:06:29.397: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 30 15:06:29.397: INFO: kube-apiserver-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:39 +0000 UTC (0+1 container statuses recorded)
Dec 30 15:06:29.397: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 30 15:06:29.397: INFO: kube-scheduler-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:43 +0000 UTC (0+1 container statuses recorded)
Dec 30 15:06:29.397: INFO: 	Container kube-scheduler ready: true, restart count 10
Dec 30 15:06:29.397: INFO: coredns-5c98db65d4-xx8w8 started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Dec 30 15:06:29.397: INFO: 	Container coredns ready: true, restart count 0
W1230 15:06:29.415071       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 30 15:06:29.467: INFO: 
Latency metrics for node iruya-server-sfge57q7djm7
Dec 30 15:06:29.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9451" for this suite.
Dec 30 15:06:51.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:06:51.638: INFO: namespace statefulset-9451 deletion completed in 22.165071593s

• Failure [348.753 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

    Dec 30 15:06:15.108: Pod ss-0 expected to be re-created at least once

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769
------------------------------
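This failure is a host-port collision by design: the test pins test-pod to a node with host port 21017, then creates a StatefulSet whose pod template requests the same host port, so ss-0 is rejected by the PodFitsHostPorts predicate (the repeated events above) and the StatefulSet controller is expected to delete and recreate it. The run failed because no recreation of ss-0 was observed within the 5m window. A minimal Go sketch of the conflicting container spec, with illustrative values:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Both the standalone test pod and the StatefulSet's pod template ask
    // for the same host port, so only one of them can run on a given node.
    conflicting := corev1.Container{
        Name:  "nginx",
        Image: "docker.io/library/nginx:1.14-alpine",
        Ports: []corev1.ContainerPort{{
            ContainerPort: 21017,
            HostPort:      21017, // a second pod on the node fails PodFitsHostPorts
        }},
    }
    fmt.Printf("%+v\n", conflicting.Ports[0])
}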
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:06:51.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:07:24.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8215" for this suite.
Dec 30 15:07:30.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:07:30.417: INFO: namespace namespaces-8215 deletion completed in 6.262481634s
STEP: Destroying namespace "nsdeletetest-4421" for this suite.
Dec 30 15:07:30.420: INFO: Namespace nsdeletetest-4421 was already deleted
STEP: Destroying namespace "nsdeletetest-9349" for this suite.
Dec 30 15:07:36.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:07:36.653: INFO: namespace nsdeletetest-9349 deletion completed in 6.232999816s

• [SLOW TEST:45.014 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
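Namespace deletion is asynchronous: the namespace enters Terminating while its pods are garbage-collected, and only then does the object itself disappear, which is why the test waits before recreating it. A rough client-go sketch of delete-then-poll, assuming the context-free client signatures of this v1.15 era and an illustrative namespace name:

package main

import (
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    if err := cs.CoreV1().Namespaces().Delete("nsdeletetest-demo", &metav1.DeleteOptions{}); err != nil {
        panic(err)
    }
    // Deletion only marks the namespace Terminating; poll until Get
    // reports NotFound, meaning the contents are fully removed.
    err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
        _, err := cs.CoreV1().Namespaces().Get("nsdeletetest-demo", metav1.GetOptions{})
        if errors.IsNotFound(err) {
            return true, nil
        }
        return false, err
    })
    if err != nil {
        panic(err)
    }
    fmt.Println("namespace fully removed")
}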
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:07:36.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-dacb18c9-49b0-47fb-995e-dfd02071a7a1
STEP: Creating a pod to test consume configMaps
Dec 30 15:07:36.775: INFO: Waiting up to 5m0s for pod "pod-configmaps-2874c0f2-7373-4326-a31c-23e719579331" in namespace "configmap-570" to be "success or failure"
Dec 30 15:07:36.779: INFO: Pod "pod-configmaps-2874c0f2-7373-4326-a31c-23e719579331": Phase="Pending", Reason="", readiness=false. Elapsed: 3.828927ms
Dec 30 15:07:38.790: INFO: Pod "pod-configmaps-2874c0f2-7373-4326-a31c-23e719579331": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014597997s
Dec 30 15:07:40.797: INFO: Pod "pod-configmaps-2874c0f2-7373-4326-a31c-23e719579331": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021951579s
Dec 30 15:07:42.806: INFO: Pod "pod-configmaps-2874c0f2-7373-4326-a31c-23e719579331": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030357968s
Dec 30 15:07:44.817: INFO: Pod "pod-configmaps-2874c0f2-7373-4326-a31c-23e719579331": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04183854s
Dec 30 15:07:46.831: INFO: Pod "pod-configmaps-2874c0f2-7373-4326-a31c-23e719579331": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.0561611s
STEP: Saw pod success
Dec 30 15:07:46.832: INFO: Pod "pod-configmaps-2874c0f2-7373-4326-a31c-23e719579331" satisfied condition "success or failure"
Dec 30 15:07:46.835: INFO: Trying to get logs from node iruya-node pod pod-configmaps-2874c0f2-7373-4326-a31c-23e719579331 container configmap-volume-test: 
STEP: delete the pod
Dec 30 15:07:47.632: INFO: Waiting for pod pod-configmaps-2874c0f2-7373-4326-a31c-23e719579331 to disappear
Dec 30 15:07:47.645: INFO: Pod pod-configmaps-2874c0f2-7373-4326-a31c-23e719579331 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:07:47.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-570" for this suite.
Dec 30 15:07:53.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:07:53.902: INFO: namespace configmap-570 deletion completed in 6.244136792s

• [SLOW TEST:17.249 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
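The test above consumes a ConfigMap as a volume with a key-to-path mapping while running as a non-root user. A sketch of the relevant pod spec pieces; the UID, names, and paths are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    nonRoot := int64(1000) // arbitrary non-root UID for illustration
    spec := corev1.PodSpec{
        SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
        Volumes: []corev1.Volume{{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
                    // Remap the ConfigMap key to a custom file path inside the mount.
                    Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                },
            },
        }},
        Containers: []corev1.Container{{
            Name:    "configmap-volume-test",
            Image:   "busybox:1.29",
            Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
            VolumeMounts: []corev1.VolumeMount{{
                Name:      "configmap-volume",
                MountPath: "/etc/configmap-volume",
            }},
        }},
    }
    fmt.Printf("%+v\n", spec.Volumes[0])
}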
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:07:53.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-e046e15e-c284-4827-82e9-3e8bd34e5d57
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:07:54.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3039" for this suite.
Dec 30 15:08:00.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:08:00.233: INFO: namespace secrets-3039 deletion completed in 6.160423973s

• [SLOW TEST:6.329 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
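Here the apiserver's validation is the subject: a Secret whose data map contains an empty key is rejected at create time, so the test only needs the Create call to fail. A rough client-go sketch (v1.15-era signatures; names and namespace illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    bad := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-demo"},
        Data: map[string][]byte{
            "": []byte("value-1"), // empty key: rejected by apiserver validation
        },
    }
    _, err = cs.CoreV1().Secrets("default").Create(bad)
    fmt.Println("create error (expected):", err)
}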
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:08:00.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 30 15:08:00.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2820'
Dec 30 15:08:00.488: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 30 15:08:00.488: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 30 15:08:00.589: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-7rh7k]
Dec 30 15:08:00.589: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-7rh7k" in namespace "kubectl-2820" to be "running and ready"
Dec 30 15:08:00.593: INFO: Pod "e2e-test-nginx-rc-7rh7k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063532ms
Dec 30 15:08:02.605: INFO: Pod "e2e-test-nginx-rc-7rh7k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016081188s
Dec 30 15:08:04.622: INFO: Pod "e2e-test-nginx-rc-7rh7k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032756733s
Dec 30 15:08:06.648: INFO: Pod "e2e-test-nginx-rc-7rh7k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058894913s
Dec 30 15:08:08.656: INFO: Pod "e2e-test-nginx-rc-7rh7k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066810611s
Dec 30 15:08:10.682: INFO: Pod "e2e-test-nginx-rc-7rh7k": Phase="Running", Reason="", readiness=true. Elapsed: 10.093451667s
Dec 30 15:08:10.682: INFO: Pod "e2e-test-nginx-rc-7rh7k" satisfied condition "running and ready"
Dec 30 15:08:10.682: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-7rh7k]
Dec 30 15:08:10.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-2820'
Dec 30 15:08:10.932: INFO: stderr: ""
Dec 30 15:08:10.932: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Dec 30 15:08:10.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2820'
Dec 30 15:08:11.063: INFO: stderr: ""
Dec 30 15:08:11.063: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:08:11.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2820" for this suite.
Dec 30 15:08:33.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:08:33.296: INFO: namespace kubectl-2820 deletion completed in 22.229505507s

• [SLOW TEST:33.063 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
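As the stderr warning notes, --generator=run/v1 is deprecated; it expands to a bare ReplicationController. Roughly the object that command creates, sketched with the k8s.io/api types (the run label convention and exact defaults are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(1)
    labels := map[string]string{"run": "e2e-test-nginx-rc"}
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc", Labels: labels},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &replicas,
            Selector: labels, // RC selectors are plain label maps
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "e2e-test-nginx-rc",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
    fmt.Printf("%s: %d replica(s) of %s\n", rc.Name, *rc.Spec.Replicas, rc.Spec.Template.Spec.Containers[0].Image)
}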
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:08:33.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-406d94ed-b3d4-43a7-987c-b528052e8599 in namespace container-probe-922
Dec 30 15:08:43.450: INFO: Started pod busybox-406d94ed-b3d4-43a7-987c-b528052e8599 in namespace container-probe-922
STEP: checking the pod's current state and verifying that restartCount is present
Dec 30 15:08:43.460: INFO: Initial restart count of pod busybox-406d94ed-b3d4-43a7-987c-b528052e8599 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:12:45.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-922" for this suite.
Dec 30 15:12:51.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:12:51.297: INFO: namespace container-probe-922 deletion completed in 6.190712289s

• [SLOW TEST:258.001 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
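The probe here execs cat /tmp/health inside the container; while the file exists the probe succeeds, so restartCount stays 0 for the whole observation window (about four minutes above). A sketch of such a container in the v1.15 types, where the probe handler field is still named Handler; the image, command, and thresholds are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:  "busybox",
        Image: "busybox:1.29",
        // Create the health file and keep the container alive.
        Command: []string{"/bin/sh", "-c", "echo ok > /tmp/health; sleep 600"},
        LivenessProbe: &corev1.Probe{
            Handler: corev1.Handler{
                Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
            },
            InitialDelaySeconds: 15,
            FailureThreshold:    1,
        },
    }
    fmt.Printf("%+v\n", c.LivenessProbe)
}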
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:12:51.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:13:01.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1101" for this suite.
Dec 30 15:13:47.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:13:47.696: INFO: namespace kubelet-test-1101 deletion completed in 46.169550752s

• [SLOW TEST:56.397 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
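This test runs a busybox command in a pod and asserts its output can be read back through the log endpoint. A rough client-go sketch of fetching those logs (v1.15-era Stream without a context argument; pod name and namespace illustrative):

package main

import (
    "fmt"
    "io/ioutil"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Stream the container's stdout back through the apiserver, which is
    // what the test asserts on after the busybox command has run.
    req := cs.CoreV1().Pods("default").GetLogs("busybox-demo", &corev1.PodLogOptions{})
    stream, err := req.Stream()
    if err != nil {
        panic(err)
    }
    defer stream.Close()

    out, _ := ioutil.ReadAll(stream)
    fmt.Printf("%s", out)
}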
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:13:47.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 30 15:13:47.860: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1723,SelfLink:/api/v1/namespaces/watch-1723/configmaps/e2e-watch-test-label-changed,UID:5b29f5fc-f2ab-4ca7-b9ac-c7807c13102d,ResourceVersion:18659477,Generation:0,CreationTimestamp:2019-12-30 15:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 30 15:13:47.861: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1723,SelfLink:/api/v1/namespaces/watch-1723/configmaps/e2e-watch-test-label-changed,UID:5b29f5fc-f2ab-4ca7-b9ac-c7807c13102d,ResourceVersion:18659478,Generation:0,CreationTimestamp:2019-12-30 15:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 30 15:13:47.861: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1723,SelfLink:/api/v1/namespaces/watch-1723/configmaps/e2e-watch-test-label-changed,UID:5b29f5fc-f2ab-4ca7-b9ac-c7807c13102d,ResourceVersion:18659479,Generation:0,CreationTimestamp:2019-12-30 15:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 30 15:13:57.957: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1723,SelfLink:/api/v1/namespaces/watch-1723/configmaps/e2e-watch-test-label-changed,UID:5b29f5fc-f2ab-4ca7-b9ac-c7807c13102d,ResourceVersion:18659494,Generation:0,CreationTimestamp:2019-12-30 15:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 30 15:13:57.957: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1723,SelfLink:/api/v1/namespaces/watch-1723/configmaps/e2e-watch-test-label-changed,UID:5b29f5fc-f2ab-4ca7-b9ac-c7807c13102d,ResourceVersion:18659495,Generation:0,CreationTimestamp:2019-12-30 15:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 30 15:13:57.957: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1723,SelfLink:/api/v1/namespaces/watch-1723/configmaps/e2e-watch-test-label-changed,UID:5b29f5fc-f2ab-4ca7-b9ac-c7807c13102d,ResourceVersion:18659496,Generation:0,CreationTimestamp:2019-12-30 15:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:13:57.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1723" for this suite.
Dec 30 15:14:04.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:14:04.170: INFO: namespace watch-1723 deletion completed in 6.204259633s

• [SLOW TEST:16.473 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
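The watch is opened with a label selector, so relabeling the ConfigMap away from the selector surfaces as a DELETED event and restoring the label surfaces as ADDED, exactly the sequence logged above. A rough client-go sketch (v1.15-era context-free Watch; kubeconfig path and namespace illustrative):

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Only configmaps carrying this label produce events on the watch;
    // objects that stop matching appear as DELETED, objects that start
    // matching again appear as ADDED.
    w, err := cs.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
        LabelSelector: "watch-this-configmap=label-changed-and-restored",
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()

    for ev := range w.ResultChan() {
        fmt.Printf("Got: %s %T\n", ev.Type, ev.Object)
    }
}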
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:14:04.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 30 15:14:04.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3132'
Dec 30 15:14:04.651: INFO: stderr: ""
Dec 30 15:14:04.651: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 30 15:14:05.665: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 15:14:05.665: INFO: Found 0 / 1
Dec 30 15:14:06.670: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 15:14:06.670: INFO: Found 0 / 1
Dec 30 15:14:07.659: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 15:14:07.659: INFO: Found 0 / 1
Dec 30 15:14:08.667: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 15:14:08.667: INFO: Found 0 / 1
Dec 30 15:14:09.661: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 15:14:09.661: INFO: Found 0 / 1
Dec 30 15:14:10.665: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 15:14:10.665: INFO: Found 0 / 1
Dec 30 15:14:11.675: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 15:14:11.675: INFO: Found 0 / 1
Dec 30 15:14:12.659: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 15:14:12.659: INFO: Found 0 / 1
Dec 30 15:14:13.664: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 15:14:13.664: INFO: Found 1 / 1
Dec 30 15:14:13.664: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 30 15:14:13.669: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 15:14:13.669: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 30 15:14:13.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-gfrz9 --namespace=kubectl-3132 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 30 15:14:13.870: INFO: stderr: ""
Dec 30 15:14:13.870: INFO: stdout: "pod/redis-master-gfrz9 patched\n"
STEP: checking annotations
Dec 30 15:14:13.888: INFO: Selector matched 1 pods for map[app:redis]
Dec 30 15:14:13.888: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:14:13.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3132" for this suite.
Dec 30 15:14:35.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:14:36.051: INFO: namespace kubectl-3132 deletion completed in 22.157986416s

• [SLOW TEST:31.881 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
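The kubectl patch above is a strategic-merge patch that adds a single annotation. The same call through client-go (v1.15-era Patch signature; pod name and namespace illustrative):

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Strategic merge: only the listed annotation is added; everything
    // else on the pod is left untouched.
    patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
    pod, err := cs.CoreV1().Pods("default").Patch("redis-master-demo", types.StrategicMergePatchType, patch)
    if err != nil {
        panic(err)
    }
    fmt.Println("annotations:", pod.Annotations)
}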
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:14:36.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 30 15:14:46.808: INFO: Successfully updated pod "labelsupdate7b9562e7-f540-4759-9211-b484b9ed774a"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:14:49.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3715" for this suite.
Dec 30 15:15:11.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:15:11.394: INFO: namespace projected-3715 deletion completed in 22.23129134s

• [SLOW TEST:35.340 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
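What this spec checks is that a projected downwardAPI volume is re-rendered by the kubelet after the pod's labels change. A minimal sketch of such a pod, with illustrative names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo        # illustrative name
  labels:
    stage: one
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# modify a label; the mounted file is rewritten shortly afterwards
kubectl label pod labelsupdate-demo stage=two --overwrite
------------------------------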
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:15:11.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:15:16.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5694" for this suite.
Dec 30 15:15:23.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:15:23.313: INFO: namespace watch-5694 deletion completed in 6.352027444s

• [SLOW TEST:11.918 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
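The ordering guarantee exercised here comes from resourceVersions: watches started from the same resourceVersion must deliver the same events in the same order. A sketch of opening a watch at an explicit resourceVersion through the raw API (the namespace and version number are illustrative; a version that has already been compacted returns 410 Gone):

kubectl get --raw \
  "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=12345"

Two clients running this with identical parameters should observe an identical event stream, which is what the test asserts across its concurrent watchers.
------------------------------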
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:15:23.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 15:15:23.462: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 30 15:15:23.484: INFO: Number of nodes with available pods: 0
Dec 30 15:15:23.484: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 30 15:15:23.593: INFO: Number of nodes with available pods: 0
Dec 30 15:15:23.594: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:24.610: INFO: Number of nodes with available pods: 0
Dec 30 15:15:24.610: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:25.602: INFO: Number of nodes with available pods: 0
Dec 30 15:15:25.603: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:26.610: INFO: Number of nodes with available pods: 0
Dec 30 15:15:26.610: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:27.608: INFO: Number of nodes with available pods: 0
Dec 30 15:15:27.608: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:29.182: INFO: Number of nodes with available pods: 0
Dec 30 15:15:29.182: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:29.603: INFO: Number of nodes with available pods: 0
Dec 30 15:15:29.603: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:30.601: INFO: Number of nodes with available pods: 0
Dec 30 15:15:30.601: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:31.605: INFO: Number of nodes with available pods: 0
Dec 30 15:15:31.605: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:32.617: INFO: Number of nodes with available pods: 0
Dec 30 15:15:32.617: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:33.612: INFO: Number of nodes with available pods: 1
Dec 30 15:15:33.613: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 30 15:15:33.687: INFO: Number of nodes with available pods: 1
Dec 30 15:15:33.687: INFO: Number of running nodes: 0, number of available pods: 1
Dec 30 15:15:34.694: INFO: Number of nodes with available pods: 0
Dec 30 15:15:34.694: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 30 15:15:34.716: INFO: Number of nodes with available pods: 0
Dec 30 15:15:34.716: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:35.725: INFO: Number of nodes with available pods: 0
Dec 30 15:15:35.725: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:36.728: INFO: Number of nodes with available pods: 0
Dec 30 15:15:36.728: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:37.727: INFO: Number of nodes with available pods: 0
Dec 30 15:15:37.727: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:38.728: INFO: Number of nodes with available pods: 0
Dec 30 15:15:38.728: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:39.729: INFO: Number of nodes with available pods: 0
Dec 30 15:15:39.729: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:40.731: INFO: Number of nodes with available pods: 0
Dec 30 15:15:40.731: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:41.727: INFO: Number of nodes with available pods: 0
Dec 30 15:15:41.727: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:42.724: INFO: Number of nodes with available pods: 0
Dec 30 15:15:42.724: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:43.728: INFO: Number of nodes with available pods: 0
Dec 30 15:15:43.728: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:44.723: INFO: Number of nodes with available pods: 0
Dec 30 15:15:44.723: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:45.725: INFO: Number of nodes with available pods: 0
Dec 30 15:15:45.725: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:46.725: INFO: Number of nodes with available pods: 0
Dec 30 15:15:46.725: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:47.729: INFO: Number of nodes with available pods: 0
Dec 30 15:15:47.729: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:48.729: INFO: Number of nodes with available pods: 0
Dec 30 15:15:48.729: INFO: Node iruya-node is running more than one daemon pod
Dec 30 15:15:49.725: INFO: Number of nodes with available pods: 1
Dec 30 15:15:49.725: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1418, will wait for the garbage collector to delete the pods
Dec 30 15:15:49.815: INFO: Deleting DaemonSet.extensions daemon-set took: 27.823943ms
Dec 30 15:15:50.116: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.617359ms
Dec 30 15:16:06.635: INFO: Number of nodes with available pods: 0
Dec 30 15:16:06.636: INFO: Number of running nodes: 0, number of available pods: 0
Dec 30 15:16:06.641: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1418/daemonsets","resourceVersion":"18659899"},"items":null}

Dec 30 15:16:06.644: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1418/pods","resourceVersion":"18659899"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:16:06.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1418" for this suite.
Dec 30 15:16:12.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:16:12.955: INFO: namespace daemonsets-1418 deletion completed in 6.26075649s

• [SLOW TEST:49.642 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
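The blue/green choreography above is driven purely by node labels: a DaemonSet whose pod template sets a nodeSelector schedules a pod onto exactly the nodes carrying the matching label, and removes it when the label changes. A sketch with an illustrative label key, assuming a DaemonSet that selects color=blue:

kubectl label node iruya-node color=blue               # daemon pod appears on the node
kubectl label node iruya-node color=green --overwrite  # pod is removed; none scheduled
kubectl label node iruya-node color-                   # clean up the label
------------------------------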
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:16:12.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 15:16:13.145: INFO: Creating deployment "test-recreate-deployment"
Dec 30 15:16:13.170: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 30 15:16:13.375: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Dec 30 15:16:15.404: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 30 15:16:15.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 15:16:17.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 15:16:19.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 15:16:21.432: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713315773, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 30 15:16:23.425: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 30 15:16:23.440: INFO: Updating deployment test-recreate-deployment
Dec 30 15:16:23.440: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 30 15:16:24.160: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4997,SelfLink:/apis/apps/v1/namespaces/deployment-4997/deployments/test-recreate-deployment,UID:4e0d9afb-ac29-4866-9bdc-0d5fd8e80cb8,ResourceVersion:18659984,Generation:2,CreationTimestamp:2019-12-30 15:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-30 15:16:24 +0000 UTC 2019-12-30 15:16:24 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-30 15:16:24 +0000 UTC 2019-12-30 15:16:13 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 30 15:16:24.222: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4997,SelfLink:/apis/apps/v1/namespaces/deployment-4997/replicasets/test-recreate-deployment-5c8c9cc69d,UID:27ffcec3-c4e3-4df5-a7e2-9975eab51249,ResourceVersion:18659983,Generation:1,CreationTimestamp:2019-12-30 15:16:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4e0d9afb-ac29-4866-9bdc-0d5fd8e80cb8 0xc0036d6c87 0xc0036d6c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 30 15:16:24.222: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 30 15:16:24.222: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4997,SelfLink:/apis/apps/v1/namespaces/deployment-4997/replicasets/test-recreate-deployment-6df85df6b9,UID:ffba7a53-d302-40ba-bbf5-084f213a5579,ResourceVersion:18659972,Generation:2,CreationTimestamp:2019-12-30 15:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4e0d9afb-ac29-4866-9bdc-0d5fd8e80cb8 0xc0036d6d57 0xc0036d6d58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 30 15:16:24.228: INFO: Pod "test-recreate-deployment-5c8c9cc69d-8dgmg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-8dgmg,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4997,SelfLink:/api/v1/namespaces/deployment-4997/pods/test-recreate-deployment-5c8c9cc69d-8dgmg,UID:e7241e20-d4f0-4993-bb75-90fa1a755594,ResourceVersion:18659981,Generation:0,CreationTimestamp:2019-12-30 15:16:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 27ffcec3-c4e3-4df5-a7e2-9975eab51249 0xc0034ed967 0xc0034ed968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-n6x9l {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6x9l,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6x9l true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0034ed9e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0034eda00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-30 15:16:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:16:24.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4997" for this suite.
Dec 30 15:16:30.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:16:30.439: INFO: namespace deployment-4997 deletion completed in 6.20721163s

• [SLOW TEST:17.484 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
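With strategy type Recreate, the controller scales the old ReplicaSet to zero before the new ReplicaSet creates any pods, which is exactly what the watch above verifies. A minimal sketch (the deployment name, labels, and images mirror this run; the real spec swaps the whole container, while this sketch only swaps the image):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate        # no rollingUpdate block is permitted with this type
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# trigger the second rollout seen in the log
kubectl set image deployment/test-recreate-deployment redis=docker.io/library/nginx:1.14-alpine
------------------------------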
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:16:30.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-07d36497-c045-4f18-a7b7-c8aad60c50d0
STEP: Creating a pod to test consume configMaps
Dec 30 15:16:30.622: INFO: Waiting up to 5m0s for pod "pod-configmaps-737efdaa-4a98-4ad8-8fc7-7f889af32339" in namespace "configmap-5440" to be "success or failure"
Dec 30 15:16:30.719: INFO: Pod "pod-configmaps-737efdaa-4a98-4ad8-8fc7-7f889af32339": Phase="Pending", Reason="", readiness=false. Elapsed: 96.065878ms
Dec 30 15:16:32.724: INFO: Pod "pod-configmaps-737efdaa-4a98-4ad8-8fc7-7f889af32339": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101852794s
Dec 30 15:16:34.735: INFO: Pod "pod-configmaps-737efdaa-4a98-4ad8-8fc7-7f889af32339": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112808511s
Dec 30 15:16:36.752: INFO: Pod "pod-configmaps-737efdaa-4a98-4ad8-8fc7-7f889af32339": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129222768s
Dec 30 15:16:38.765: INFO: Pod "pod-configmaps-737efdaa-4a98-4ad8-8fc7-7f889af32339": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142484722s
Dec 30 15:16:41.314: INFO: Pod "pod-configmaps-737efdaa-4a98-4ad8-8fc7-7f889af32339": Phase="Pending", Reason="", readiness=false. Elapsed: 10.691006487s
Dec 30 15:16:43.323: INFO: Pod "pod-configmaps-737efdaa-4a98-4ad8-8fc7-7f889af32339": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.700237837s
STEP: Saw pod success
Dec 30 15:16:43.323: INFO: Pod "pod-configmaps-737efdaa-4a98-4ad8-8fc7-7f889af32339" satisfied condition "success or failure"
Dec 30 15:16:43.330: INFO: Trying to get logs from node iruya-node pod pod-configmaps-737efdaa-4a98-4ad8-8fc7-7f889af32339 container configmap-volume-test: 
STEP: delete the pod
Dec 30 15:16:43.502: INFO: Waiting for pod pod-configmaps-737efdaa-4a98-4ad8-8fc7-7f889af32339 to disappear
Dec 30 15:16:43.510: INFO: Pod pod-configmaps-737efdaa-4a98-4ad8-8fc7-7f889af32339 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:16:43.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5440" for this suite.
Dec 30 15:16:49.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:16:49.693: INFO: namespace configmap-5440 deletion completed in 6.175086603s

• [SLOW TEST:19.253 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
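"Mappings and Item mode" means the configMap volume lists explicit items, each remapping a key to a path and carrying its own file mode. A sketch with illustrative key, path, and mode values:

kubectl create configmap cm-mode-demo --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cm-mode-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -lR /etc/cm && cat /etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cm
  volumes:
  - name: cfg
    configMap:
      name: cm-mode-demo
      items:
      - key: data-1
        path: path/to/data-2   # the key is surfaced under this path
        mode: 0400             # per-item mode; overrides any defaultMode
EOF
------------------------------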
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:16:49.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-bc43175c-0e38-4895-bca5-73052d82b04c
STEP: Creating a pod to test consume configMaps
Dec 30 15:16:49.837: INFO: Waiting up to 5m0s for pod "pod-configmaps-7eec7b38-372b-4916-b8b3-b92f2d70457e" in namespace "configmap-1265" to be "success or failure"
Dec 30 15:16:49.851: INFO: Pod "pod-configmaps-7eec7b38-372b-4916-b8b3-b92f2d70457e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.809685ms
Dec 30 15:16:51.885: INFO: Pod "pod-configmaps-7eec7b38-372b-4916-b8b3-b92f2d70457e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046939718s
Dec 30 15:16:53.906: INFO: Pod "pod-configmaps-7eec7b38-372b-4916-b8b3-b92f2d70457e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068204182s
Dec 30 15:16:55.917: INFO: Pod "pod-configmaps-7eec7b38-372b-4916-b8b3-b92f2d70457e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079526071s
Dec 30 15:16:57.924: INFO: Pod "pod-configmaps-7eec7b38-372b-4916-b8b3-b92f2d70457e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086601346s
Dec 30 15:16:59.932: INFO: Pod "pod-configmaps-7eec7b38-372b-4916-b8b3-b92f2d70457e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094633177s
STEP: Saw pod success
Dec 30 15:16:59.932: INFO: Pod "pod-configmaps-7eec7b38-372b-4916-b8b3-b92f2d70457e" satisfied condition "success or failure"
Dec 30 15:16:59.935: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7eec7b38-372b-4916-b8b3-b92f2d70457e container configmap-volume-test: 
STEP: delete the pod
Dec 30 15:17:00.012: INFO: Waiting for pod pod-configmaps-7eec7b38-372b-4916-b8b3-b92f2d70457e to disappear
Dec 30 15:17:00.021: INFO: Pod pod-configmaps-7eec7b38-372b-4916-b8b3-b92f2d70457e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:17:00.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1265" for this suite.
Dec 30 15:17:06.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:17:06.405: INFO: namespace configmap-1265 deletion completed in 6.184397427s

• [SLOW TEST:16.712 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
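"As non-root" means the consuming container runs under a non-zero UID and must still be able to read the mounted keys. A sketch of the security context involved (the UID and names are illustrative):

kubectl create configmap cm-nonroot-demo --from-literal=key=value
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot-demo-pod
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # any non-zero UID
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "id && cat /etc/cm/key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cm
  volumes:
  - name: cfg
    configMap:
      name: cm-nonroot-demo
EOF
------------------------------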
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:17:06.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 30 15:17:06.551: INFO: Waiting up to 5m0s for pod "pod-325a678b-4a8a-42b4-ac0b-d2d22a7e56ff" in namespace "emptydir-9459" to be "success or failure"
Dec 30 15:17:06.560: INFO: Pod "pod-325a678b-4a8a-42b4-ac0b-d2d22a7e56ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177569ms
Dec 30 15:17:08.570: INFO: Pod "pod-325a678b-4a8a-42b4-ac0b-d2d22a7e56ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018731472s
Dec 30 15:17:10.609: INFO: Pod "pod-325a678b-4a8a-42b4-ac0b-d2d22a7e56ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058024907s
Dec 30 15:17:12.758: INFO: Pod "pod-325a678b-4a8a-42b4-ac0b-d2d22a7e56ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206546385s
Dec 30 15:17:14.770: INFO: Pod "pod-325a678b-4a8a-42b4-ac0b-d2d22a7e56ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.219034336s
Dec 30 15:17:16.777: INFO: Pod "pod-325a678b-4a8a-42b4-ac0b-d2d22a7e56ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.225800474s
STEP: Saw pod success
Dec 30 15:17:16.777: INFO: Pod "pod-325a678b-4a8a-42b4-ac0b-d2d22a7e56ff" satisfied condition "success or failure"
Dec 30 15:17:16.781: INFO: Trying to get logs from node iruya-node pod pod-325a678b-4a8a-42b4-ac0b-d2d22a7e56ff container test-container: 
STEP: delete the pod
Dec 30 15:17:16.914: INFO: Waiting for pod pod-325a678b-4a8a-42b4-ac0b-d2d22a7e56ff to disappear
Dec 30 15:17:16.929: INFO: Pod pod-325a678b-4a8a-42b4-ac0b-d2d22a7e56ff no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:17:16.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9459" for this suite.
Dec 30 15:17:22.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:17:23.131: INFO: namespace emptydir-9459 deletion completed in 6.192490618s

• [SLOW TEST:16.725 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
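The (non-root,0644,default) triple in the spec name encodes: run as a non-root UID, expect mode 0644 on the file written into the volume, and use the default (node-disk-backed) emptyDir medium. A sketch under those assumptions:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/test/f && stat -c '%a' /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}               # default medium: backed by node disk
EOF

With the usual 022 umask the created file reports 644.
------------------------------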
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:17:23.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 30 15:17:23.298: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25a80935-b436-41a6-948f-248d0e8da259" in namespace "downward-api-4825" to be "success or failure"
Dec 30 15:17:23.303: INFO: Pod "downwardapi-volume-25a80935-b436-41a6-948f-248d0e8da259": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07914ms
Dec 30 15:17:25.312: INFO: Pod "downwardapi-volume-25a80935-b436-41a6-948f-248d0e8da259": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013114166s
Dec 30 15:17:27.322: INFO: Pod "downwardapi-volume-25a80935-b436-41a6-948f-248d0e8da259": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023104382s
Dec 30 15:17:29.331: INFO: Pod "downwardapi-volume-25a80935-b436-41a6-948f-248d0e8da259": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032766729s
Dec 30 15:17:31.343: INFO: Pod "downwardapi-volume-25a80935-b436-41a6-948f-248d0e8da259": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044940322s
Dec 30 15:17:33.351: INFO: Pod "downwardapi-volume-25a80935-b436-41a6-948f-248d0e8da259": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.052954829s
STEP: Saw pod success
Dec 30 15:17:33.352: INFO: Pod "downwardapi-volume-25a80935-b436-41a6-948f-248d0e8da259" satisfied condition "success or failure"
Dec 30 15:17:33.355: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-25a80935-b436-41a6-948f-248d0e8da259 container client-container: 
STEP: delete the pod
Dec 30 15:17:33.476: INFO: Waiting for pod downwardapi-volume-25a80935-b436-41a6-948f-248d0e8da259 to disappear
Dec 30 15:17:33.480: INFO: Pod downwardapi-volume-25a80935-b436-41a6-948f-248d0e8da259 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:17:33.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4825" for this suite.
Dec 30 15:17:39.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:17:39.694: INFO: namespace downward-api-4825 deletion completed in 6.208580715s

• [SLOW TEST:16.563 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
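DefaultMode applies one file mode to every item in a downwardAPI volume that does not set its own mode. A sketch (mode and path are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400        # applied to every item lacking an explicit mode
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
------------------------------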
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:17:39.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-755.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-755.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-755.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-755.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 30 15:17:51.887: INFO: File wheezy_udp@dns-test-service-3.dns-755.svc.cluster.local from pod  dns-755/dns-test-e8b83748-b7a9-48e4-94e8-c4ae1b0fef46 contains '' instead of 'foo.example.com.'
Dec 30 15:17:51.896: INFO: File jessie_udp@dns-test-service-3.dns-755.svc.cluster.local from pod  dns-755/dns-test-e8b83748-b7a9-48e4-94e8-c4ae1b0fef46 contains '' instead of 'foo.example.com.'
Dec 30 15:17:51.896: INFO: Lookups using dns-755/dns-test-e8b83748-b7a9-48e4-94e8-c4ae1b0fef46 failed for: [wheezy_udp@dns-test-service-3.dns-755.svc.cluster.local jessie_udp@dns-test-service-3.dns-755.svc.cluster.local]

Dec 30 15:17:56.935: INFO: DNS probes using dns-test-e8b83748-b7a9-48e4-94e8-c4ae1b0fef46 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-755.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-755.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-755.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-755.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 30 15:18:13.253: INFO: File wheezy_udp@dns-test-service-3.dns-755.svc.cluster.local from pod  dns-755/dns-test-5af7d7d3-7b58-454b-ade7-c7c95c2b32f8 contains '' instead of 'bar.example.com.'
Dec 30 15:18:13.259: INFO: File jessie_udp@dns-test-service-3.dns-755.svc.cluster.local from pod  dns-755/dns-test-5af7d7d3-7b58-454b-ade7-c7c95c2b32f8 contains '' instead of 'bar.example.com.'
Dec 30 15:18:13.259: INFO: Lookups using dns-755/dns-test-5af7d7d3-7b58-454b-ade7-c7c95c2b32f8 failed for: [wheezy_udp@dns-test-service-3.dns-755.svc.cluster.local jessie_udp@dns-test-service-3.dns-755.svc.cluster.local]

Dec 30 15:18:18.315: INFO: File wheezy_udp@dns-test-service-3.dns-755.svc.cluster.local from pod  dns-755/dns-test-5af7d7d3-7b58-454b-ade7-c7c95c2b32f8 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 30 15:18:18.324: INFO: File jessie_udp@dns-test-service-3.dns-755.svc.cluster.local from pod  dns-755/dns-test-5af7d7d3-7b58-454b-ade7-c7c95c2b32f8 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 30 15:18:18.324: INFO: Lookups using dns-755/dns-test-5af7d7d3-7b58-454b-ade7-c7c95c2b32f8 failed for: [wheezy_udp@dns-test-service-3.dns-755.svc.cluster.local jessie_udp@dns-test-service-3.dns-755.svc.cluster.local]

Dec 30 15:18:23.277: INFO: File wheezy_udp@dns-test-service-3.dns-755.svc.cluster.local from pod  dns-755/dns-test-5af7d7d3-7b58-454b-ade7-c7c95c2b32f8 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 30 15:18:23.286: INFO: File jessie_udp@dns-test-service-3.dns-755.svc.cluster.local from pod  dns-755/dns-test-5af7d7d3-7b58-454b-ade7-c7c95c2b32f8 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 30 15:18:23.286: INFO: Lookups using dns-755/dns-test-5af7d7d3-7b58-454b-ade7-c7c95c2b32f8 failed for: [wheezy_udp@dns-test-service-3.dns-755.svc.cluster.local jessie_udp@dns-test-service-3.dns-755.svc.cluster.local]

Dec 30 15:18:28.315: INFO: DNS probes using dns-test-5af7d7d3-7b58-454b-ade7-c7c95c2b32f8 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-755.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-755.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-755.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-755.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 30 15:18:44.799: INFO: File wheezy_udp@dns-test-service-3.dns-755.svc.cluster.local from pod  dns-755/dns-test-e66d969e-5e1a-419d-bcd1-7d09290f6d03 contains '' instead of '10.97.97.247'
Dec 30 15:18:44.808: INFO: File jessie_udp@dns-test-service-3.dns-755.svc.cluster.local from pod  dns-755/dns-test-e66d969e-5e1a-419d-bcd1-7d09290f6d03 contains '' instead of '10.97.97.247'
Dec 30 15:18:44.808: INFO: Lookups using dns-755/dns-test-e66d969e-5e1a-419d-bcd1-7d09290f6d03 failed for: [wheezy_udp@dns-test-service-3.dns-755.svc.cluster.local jessie_udp@dns-test-service-3.dns-755.svc.cluster.local]

Dec 30 15:18:49.878: INFO: DNS probes using dns-test-e66d969e-5e1a-419d-bcd1-7d09290f6d03 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:18:50.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-755" for this suite.
Dec 30 15:18:58.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:18:58.362: INFO: namespace dns-755 deletion completed in 8.272426998s

• [SLOW TEST:78.667 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
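An ExternalName service publishes a CNAME rather than a ClusterIP, which is why the probes above dig for CNAME records and, once the service is converted to type ClusterIP, switch to A records. A sketch of the first two states (the service name and targets mirror this run; run the dig from a pod that has it installed, such as the wheezy/jessie probe images used above):

kubectl create service externalname dns-test-service-3 --external-name foo.example.com
# from inside a pod: dig +short dns-test-service-3.<namespace>.svc.cluster.local CNAME
#   -> foo.example.com.
kubectl patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'
# the same dig now returns bar.example.com.
------------------------------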
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:18:58.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-00325699-b7b3-4df3-bf13-6554cabb100e
STEP: Creating secret with name secret-projected-all-test-volume-bbd0b70e-2ffd-47d0-b7cb-0cfc29dfd9d1
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 30 15:18:58.487: INFO: Waiting up to 5m0s for pod "projected-volume-55d7dbb1-4c94-42b0-b8ae-4df8649428c2" in namespace "projected-1815" to be "success or failure"
Dec 30 15:18:58.501: INFO: Pod "projected-volume-55d7dbb1-4c94-42b0-b8ae-4df8649428c2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.886156ms
Dec 30 15:19:00.525: INFO: Pod "projected-volume-55d7dbb1-4c94-42b0-b8ae-4df8649428c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038052576s
Dec 30 15:19:02.539: INFO: Pod "projected-volume-55d7dbb1-4c94-42b0-b8ae-4df8649428c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051725877s
Dec 30 15:19:04.551: INFO: Pod "projected-volume-55d7dbb1-4c94-42b0-b8ae-4df8649428c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063632689s
Dec 30 15:19:06.563: INFO: Pod "projected-volume-55d7dbb1-4c94-42b0-b8ae-4df8649428c2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076121872s
Dec 30 15:19:08.588: INFO: Pod "projected-volume-55d7dbb1-4c94-42b0-b8ae-4df8649428c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100997933s
STEP: Saw pod success
Dec 30 15:19:08.589: INFO: Pod "projected-volume-55d7dbb1-4c94-42b0-b8ae-4df8649428c2" satisfied condition "success or failure"
Dec 30 15:19:08.608: INFO: Trying to get logs from node iruya-node pod projected-volume-55d7dbb1-4c94-42b0-b8ae-4df8649428c2 container projected-all-volume-test: 
STEP: delete the pod
Dec 30 15:19:08.803: INFO: Waiting for pod projected-volume-55d7dbb1-4c94-42b0-b8ae-4df8649428c2 to disappear
Dec 30 15:19:08.858: INFO: Pod projected-volume-55d7dbb1-4c94-42b0-b8ae-4df8649428c2 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:19:08.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1815" for this suite.
Dec 30 15:19:14.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:19:15.046: INFO: namespace projected-1815 deletion completed in 6.170103705s

• [SLOW TEST:16.684 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
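A "combined" projected volume merges several sources, here a ConfigMap, a Secret, and the downward API, into a single mount. A sketch of the volume shape with illustrative names:

kubectl create configmap projected-cm-demo --from-literal=configmap-data=from-cm
kubectl create secret generic projected-secret-demo --from-literal=secret-data=from-secret
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "ls -R /all && cat /all/podname"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
      - secret:
          name: projected-secret-demo
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
------------------------------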
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:19:15.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 30 15:19:15.197: INFO: Waiting up to 5m0s for pod "pod-9e9b1897-612b-46c2-933d-717e5e48806d" in namespace "emptydir-3033" to be "success or failure"
Dec 30 15:19:15.210: INFO: Pod "pod-9e9b1897-612b-46c2-933d-717e5e48806d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.296143ms
Dec 30 15:19:17.221: INFO: Pod "pod-9e9b1897-612b-46c2-933d-717e5e48806d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023810951s
Dec 30 15:19:19.236: INFO: Pod "pod-9e9b1897-612b-46c2-933d-717e5e48806d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038330148s
Dec 30 15:19:21.257: INFO: Pod "pod-9e9b1897-612b-46c2-933d-717e5e48806d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059127056s
Dec 30 15:19:23.265: INFO: Pod "pod-9e9b1897-612b-46c2-933d-717e5e48806d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067868614s
Dec 30 15:19:25.274: INFO: Pod "pod-9e9b1897-612b-46c2-933d-717e5e48806d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07617616s
STEP: Saw pod success
Dec 30 15:19:25.274: INFO: Pod "pod-9e9b1897-612b-46c2-933d-717e5e48806d" satisfied condition "success or failure"
Dec 30 15:19:25.277: INFO: Trying to get logs from node iruya-node pod pod-9e9b1897-612b-46c2-933d-717e5e48806d container test-container: 
STEP: delete the pod
Dec 30 15:19:25.340: INFO: Waiting for pod pod-9e9b1897-612b-46c2-933d-717e5e48806d to disappear
Dec 30 15:19:25.453: INFO: Pod pod-9e9b1897-612b-46c2-933d-717e5e48806d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:19:25.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3033" for this suite.
Dec 30 15:19:31.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:19:31.686: INFO: namespace emptydir-3033 deletion completed in 6.223244573s

• [SLOW TEST:16.640 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
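The tmpfs variant differs from the default-medium spec above only in the volume's medium field, which backs the emptyDir with RAM instead of node disk (usage then counts against the pod's memory). Sketch:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /mnt/test"]   # should show a tmpfs mount
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs; the default "" uses node disk
EOF
------------------------------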
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:19:31.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-667.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-667.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 30 15:19:45.963: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-667/dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407: the server could not find the requested resource (get pods dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407)
Dec 30 15:19:45.980: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-667/dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407: the server could not find the requested resource (get pods dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407)
Dec 30 15:19:45.993: INFO: Unable to read wheezy_udp@PodARecord from pod dns-667/dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407: the server could not find the requested resource (get pods dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407)
Dec 30 15:19:46.008: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-667/dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407: the server could not find the requested resource (get pods dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407)
Dec 30 15:19:46.018: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-667/dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407: the server could not find the requested resource (get pods dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407)
Dec 30 15:19:46.025: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-667/dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407: the server could not find the requested resource (get pods dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407)
Dec 30 15:19:46.042: INFO: Unable to read jessie_udp@PodARecord from pod dns-667/dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407: the server could not find the requested resource (get pods dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407)
Dec 30 15:19:46.047: INFO: Unable to read jessie_tcp@PodARecord from pod dns-667/dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407: the server could not find the requested resource (get pods dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407)
Dec 30 15:19:46.047: INFO: Lookups using dns-667/dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 30 15:19:51.119: INFO: DNS probes using dns-667/dns-test-cf63942e-708a-4e6e-bc8b-114db34f1407 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:19:51.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-667" for this suite.
Dec 30 15:19:57.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:19:57.429: INFO: namespace dns-667 deletion completed in 6.178875646s

• [SLOW TEST:25.742 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
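
The two long one-liners above are the entire probe: each prober pod loops dig queries over UDP and TCP and drops an OK marker file under /results on success. The first round of "Unable to read ..." messages at 15:19:45 just means the markers were not there yet on the first poll; the retry at 15:19:51 finds them. A small sketch of how one such shell fragment is assembled (function name and layout are illustrative; note that in the pod spec the "$" characters are doubled to "$$" so they survive Kubernetes' variable-expansion escaping):

package main

import "fmt"

// digProbe returns the shell fragment a DNS prober runs for a single name:
// query over UDP and over TCP, and write an OK marker on each success.
func digProbe(name, resultsDir, label string) string {
    udp := fmt.Sprintf(`check="$(dig +notcp +noall +answer +search %s A)" && test -n "$check" && echo OK > %s/%s_udp@%s`, name, resultsDir, label, name)
    tcp := fmt.Sprintf(`check="$(dig +tcp +noall +answer +search %s A)" && test -n "$check" && echo OK > %s/%s_tcp@%s`, name, resultsDir, label, name)
    return udp + ";" + tcp
}

func main() {
    fmt.Println(digProbe("kubernetes.default.svc.cluster.local", "/results", "wheezy"))
}
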
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:19:57.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 30 15:20:19.676: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:19.682: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:21.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:21.695: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:23.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:23.693: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:25.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:25.693: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:27.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:27.692: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:29.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:29.717: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:31.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:31.690: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:33.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:33.725: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:35.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:35.694: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:37.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:37.693: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:39.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:39.697: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:41.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:41.694: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:43.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:43.692: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:45.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:45.692: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 30 15:20:47.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 30 15:20:47.694: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:20:47.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9256" for this suite.
Dec 30 15:21:09.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:21:09.888: INFO: namespace container-lifecycle-hook-9256 deletion completed in 22.15326934s

• [SLOW TEST:72.459 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
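
The pod under test wires a preStop exec hook into its only container; the roughly 28 seconds between "delete the pod with lifecycle hook" and the pod's disappearance above covers running that hook plus the pod's graceful-termination window. A minimal sketch of the wiring with the core/v1 types of this vintage (v1.Handler was later renamed LifecycleHandler); the image and hook command are illustrative:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithPreStopExecHook builds a pod whose container runs an exec preStop
// hook when the pod is deleted; the hook's command here is a stand-in.
func podWithPreStopExecHook() *v1.Pod {
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:  "pod-with-prestop-exec-hook",
                Image: "busybox", // stand-in image
                Lifecycle: &v1.Lifecycle{
                    PreStop: &v1.Handler{ // v1.LifecycleHandler in newer API versions
                        Exec: &v1.ExecAction{
                            Command: []string{"sh", "-c", "echo prestop > /tmp/prestop"},
                        },
                    },
                },
            }},
        },
    }
}

func main() { fmt.Println(podWithPreStopExecHook().Name) }
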
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:21:09.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-471f2004-332e-4e84-8674-4f2a8a51278c
STEP: Creating a pod to test consume secrets
Dec 30 15:21:10.039: INFO: Waiting up to 5m0s for pod "pod-secrets-c7648823-72bb-4604-97d0-f855f755b6ef" in namespace "secrets-7403" to be "success or failure"
Dec 30 15:21:10.075: INFO: Pod "pod-secrets-c7648823-72bb-4604-97d0-f855f755b6ef": Phase="Pending", Reason="", readiness=false. Elapsed: 36.254685ms
Dec 30 15:21:12.088: INFO: Pod "pod-secrets-c7648823-72bb-4604-97d0-f855f755b6ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049416628s
Dec 30 15:21:14.093: INFO: Pod "pod-secrets-c7648823-72bb-4604-97d0-f855f755b6ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054204052s
Dec 30 15:21:16.102: INFO: Pod "pod-secrets-c7648823-72bb-4604-97d0-f855f755b6ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062946555s
Dec 30 15:21:18.108: INFO: Pod "pod-secrets-c7648823-72bb-4604-97d0-f855f755b6ef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069344459s
Dec 30 15:21:20.118: INFO: Pod "pod-secrets-c7648823-72bb-4604-97d0-f855f755b6ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079232992s
STEP: Saw pod success
Dec 30 15:21:20.118: INFO: Pod "pod-secrets-c7648823-72bb-4604-97d0-f855f755b6ef" satisfied condition "success or failure"
Dec 30 15:21:20.121: INFO: Trying to get logs from node iruya-node pod pod-secrets-c7648823-72bb-4604-97d0-f855f755b6ef container secret-volume-test: 
STEP: delete the pod
Dec 30 15:21:20.164: INFO: Waiting for pod pod-secrets-c7648823-72bb-4604-97d0-f855f755b6ef to disappear
Dec 30 15:21:20.168: INFO: Pod pod-secrets-c7648823-72bb-4604-97d0-f855f755b6ef no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:21:20.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7403" for this suite.
Dec 30 15:21:26.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:21:26.351: INFO: namespace secrets-7403 deletion completed in 6.152460764s

• [SLOW TEST:16.463 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
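
The "multiple volumes" shape here is simply the same secret mounted twice under different volume names and paths; the consumer container then checks that both mounts resolve to the same data. A minimal sketch — secret name aside, the image, command, and mount paths are illustrative:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretPod mounts the same secret at two paths, the shape this spec exercises.
func secretPod(secretName string) *v1.Pod {
    vol := func(name string) v1.Volume {
        return v1.Volume{
            Name: name,
            VolumeSource: v1.VolumeSource{
                Secret: &v1.SecretVolumeSource{SecretName: secretName},
            },
        }
    }
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-multi"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Volumes:       []v1.Volume{vol("secret-volume-1"), vol("secret-volume-2")},
            Containers: []v1.Container{{
                Name:    "secret-volume-test",
                Image:   "busybox", // stand-in image
                Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
                VolumeMounts: []v1.VolumeMount{
                    {Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
                    {Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
                },
            }},
        },
    }
}

func main() { fmt.Println(secretPod("secret-test").Name) }
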
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:21:26.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 30 15:21:26.449: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05caa3f3-b790-4ca3-af2a-52fba4321e5a" in namespace "downward-api-4590" to be "success or failure"
Dec 30 15:21:26.544: INFO: Pod "downwardapi-volume-05caa3f3-b790-4ca3-af2a-52fba4321e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 93.976564ms
Dec 30 15:21:28.557: INFO: Pod "downwardapi-volume-05caa3f3-b790-4ca3-af2a-52fba4321e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10709989s
Dec 30 15:21:30.573: INFO: Pod "downwardapi-volume-05caa3f3-b790-4ca3-af2a-52fba4321e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123299798s
Dec 30 15:21:32.588: INFO: Pod "downwardapi-volume-05caa3f3-b790-4ca3-af2a-52fba4321e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138926369s
Dec 30 15:21:34.610: INFO: Pod "downwardapi-volume-05caa3f3-b790-4ca3-af2a-52fba4321e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160475066s
Dec 30 15:21:36.620: INFO: Pod "downwardapi-volume-05caa3f3-b790-4ca3-af2a-52fba4321e5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.170636314s
STEP: Saw pod success
Dec 30 15:21:36.620: INFO: Pod "downwardapi-volume-05caa3f3-b790-4ca3-af2a-52fba4321e5a" satisfied condition "success or failure"
Dec 30 15:21:36.629: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-05caa3f3-b790-4ca3-af2a-52fba4321e5a container client-container: 
STEP: delete the pod
Dec 30 15:21:36.681: INFO: Waiting for pod downwardapi-volume-05caa3f3-b790-4ca3-af2a-52fba4321e5a to disappear
Dec 30 15:21:36.689: INFO: Pod downwardapi-volume-05caa3f3-b790-4ca3-af2a-52fba4321e5a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:21:36.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4590" for this suite.
Dec 30 15:21:42.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:21:42.817: INFO: namespace downward-api-4590 deletion completed in 6.120750496s

• [SLOW TEST:16.465 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
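
This spec projects the container's effective CPU limit into a downward API volume file while deliberately setting no CPU limit on the container, so what lands in the file is the node-allocatable fallback being asserted. A minimal sketch (image and command are illustrative; note that resourceFieldRef items in a volume must name the container explicitly):

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod projects the container's effective CPU limit into a file.
// No CPU limit is set, so the kubelet reports node allocatable instead.
func downwardAPIPod() *v1.Pod {
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-cpu-limit"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Volumes: []v1.Volume{{
                Name: "podinfo",
                VolumeSource: v1.VolumeSource{
                    DownwardAPI: &v1.DownwardAPIVolumeSource{
                        Items: []v1.DownwardAPIVolumeFile{{
                            Path: "cpu_limit",
                            ResourceFieldRef: &v1.ResourceFieldSelector{
                                ContainerName: "client-container", // required for volume items
                                Resource:      "limits.cpu",
                            },
                        }},
                    },
                },
            }},
            Containers: []v1.Container{{
                Name:         "client-container",
                Image:        "busybox", // stand-in image
                Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
}

func main() { fmt.Println(downwardAPIPod().Name) }
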
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:21:42.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9165
I1230 15:21:42.909125       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9165, replica count: 1
I1230 15:21:43.959851       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 15:21:44.960314       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 15:21:45.960842       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 15:21:46.961168       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 15:21:47.961399       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 15:21:48.961733       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 15:21:49.962199       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1230 15:21:50.962497       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 30 15:21:51.222: INFO: Created: latency-svc-k5fsc
Dec 30 15:21:51.251: INFO: Got endpoints: latency-svc-k5fsc [188.970849ms]
Dec 30 15:21:51.403: INFO: Created: latency-svc-lcc68
Dec 30 15:21:51.411: INFO: Got endpoints: latency-svc-lcc68 [159.169552ms]
Dec 30 15:21:51.558: INFO: Created: latency-svc-89w76
Dec 30 15:21:51.577: INFO: Got endpoints: latency-svc-89w76 [324.283148ms]
Dec 30 15:21:51.611: INFO: Created: latency-svc-htwq4
Dec 30 15:21:51.623: INFO: Got endpoints: latency-svc-htwq4 [369.913164ms]
Dec 30 15:21:51.655: INFO: Created: latency-svc-mj24d
Dec 30 15:21:51.735: INFO: Got endpoints: latency-svc-mj24d [482.139088ms]
Dec 30 15:21:51.782: INFO: Created: latency-svc-vqdks
Dec 30 15:21:51.827: INFO: Got endpoints: latency-svc-vqdks [573.969312ms]
Dec 30 15:21:51.830: INFO: Created: latency-svc-ldd75
Dec 30 15:21:51.940: INFO: Got endpoints: latency-svc-ldd75 [687.359168ms]
Dec 30 15:21:51.971: INFO: Created: latency-svc-xnc7r
Dec 30 15:21:51.991: INFO: Got endpoints: latency-svc-xnc7r [737.777576ms]
Dec 30 15:21:52.078: INFO: Created: latency-svc-kbspq
Dec 30 15:21:52.173: INFO: Got endpoints: latency-svc-kbspq [920.016589ms]
Dec 30 15:21:52.213: INFO: Created: latency-svc-j6l8s
Dec 30 15:21:52.239: INFO: Got endpoints: latency-svc-j6l8s [987.192859ms]
Dec 30 15:21:52.401: INFO: Created: latency-svc-4qngv
Dec 30 15:21:52.403: INFO: Got endpoints: latency-svc-4qngv [1.14994885s]
Dec 30 15:21:52.445: INFO: Created: latency-svc-lbggp
Dec 30 15:21:52.453: INFO: Got endpoints: latency-svc-lbggp [1.199466636s]
Dec 30 15:21:52.483: INFO: Created: latency-svc-2pfz7
Dec 30 15:21:52.560: INFO: Got endpoints: latency-svc-2pfz7 [1.306412485s]
Dec 30 15:21:52.605: INFO: Created: latency-svc-tmxkg
Dec 30 15:21:52.645: INFO: Got endpoints: latency-svc-tmxkg [1.391438487s]
Dec 30 15:21:52.783: INFO: Created: latency-svc-w2mjt
Dec 30 15:21:52.825: INFO: Got endpoints: latency-svc-w2mjt [1.571171049s]
Dec 30 15:21:52.925: INFO: Created: latency-svc-9s25x
Dec 30 15:21:52.928: INFO: Got endpoints: latency-svc-9s25x [1.674536966s]
Dec 30 15:21:52.974: INFO: Created: latency-svc-6nvz6
Dec 30 15:21:52.979: INFO: Got endpoints: latency-svc-6nvz6 [1.567531969s]
Dec 30 15:21:53.012: INFO: Created: latency-svc-vmhk5
Dec 30 15:21:53.129: INFO: Got endpoints: latency-svc-vmhk5 [1.552157214s]
Dec 30 15:21:53.176: INFO: Created: latency-svc-jvqrl
Dec 30 15:21:53.183: INFO: Got endpoints: latency-svc-jvqrl [1.559450997s]
Dec 30 15:21:53.218: INFO: Created: latency-svc-krkhq
Dec 30 15:21:53.225: INFO: Got endpoints: latency-svc-krkhq [1.490337541s]
Dec 30 15:21:53.332: INFO: Created: latency-svc-gv7hl
Dec 30 15:21:53.337: INFO: Got endpoints: latency-svc-gv7hl [1.510236379s]
Dec 30 15:21:53.370: INFO: Created: latency-svc-s7ntv
Dec 30 15:21:53.384: INFO: Got endpoints: latency-svc-s7ntv [1.444230229s]
Dec 30 15:21:53.413: INFO: Created: latency-svc-cpbvr
Dec 30 15:21:53.427: INFO: Got endpoints: latency-svc-cpbvr [1.435999751s]
Dec 30 15:21:53.560: INFO: Created: latency-svc-bfpg4
Dec 30 15:21:53.573: INFO: Got endpoints: latency-svc-bfpg4 [1.399773995s]
Dec 30 15:21:53.612: INFO: Created: latency-svc-smqdq
Dec 30 15:21:53.625: INFO: Got endpoints: latency-svc-smqdq [1.385499065s]
Dec 30 15:21:53.719: INFO: Created: latency-svc-88ww9
Dec 30 15:21:53.727: INFO: Got endpoints: latency-svc-88ww9 [1.324012428s]
Dec 30 15:21:53.799: INFO: Created: latency-svc-q2slf
Dec 30 15:21:53.939: INFO: Got endpoints: latency-svc-q2slf [1.485810782s]
Dec 30 15:21:53.976: INFO: Created: latency-svc-mffxc
Dec 30 15:21:53.990: INFO: Got endpoints: latency-svc-mffxc [1.429339387s]
Dec 30 15:21:54.043: INFO: Created: latency-svc-2zht2
Dec 30 15:21:54.152: INFO: Got endpoints: latency-svc-2zht2 [1.507103708s]
Dec 30 15:21:54.160: INFO: Created: latency-svc-brg25
Dec 30 15:21:54.166: INFO: Got endpoints: latency-svc-brg25 [1.340795468s]
Dec 30 15:21:54.293: INFO: Created: latency-svc-d5m96
Dec 30 15:21:54.319: INFO: Got endpoints: latency-svc-d5m96 [1.391459296s]
Dec 30 15:21:54.357: INFO: Created: latency-svc-lxm4x
Dec 30 15:21:54.362: INFO: Got endpoints: latency-svc-lxm4x [1.382839418s]
Dec 30 15:21:54.450: INFO: Created: latency-svc-2d796
Dec 30 15:21:54.492: INFO: Got endpoints: latency-svc-2d796 [1.362478096s]
Dec 30 15:21:54.534: INFO: Created: latency-svc-4qkjg
Dec 30 15:21:54.674: INFO: Got endpoints: latency-svc-4qkjg [1.490885145s]
Dec 30 15:21:54.703: INFO: Created: latency-svc-ckpgf
Dec 30 15:21:54.764: INFO: Got endpoints: latency-svc-ckpgf [1.538176194s]
Dec 30 15:21:54.780: INFO: Created: latency-svc-5qdpm
Dec 30 15:21:54.780: INFO: Got endpoints: latency-svc-5qdpm [1.442948578s]
Dec 30 15:21:54.948: INFO: Created: latency-svc-fvbcr
Dec 30 15:21:55.069: INFO: Got endpoints: latency-svc-fvbcr [1.684811087s]
Dec 30 15:21:55.079: INFO: Created: latency-svc-l4nz7
Dec 30 15:21:55.083: INFO: Got endpoints: latency-svc-l4nz7 [1.655597711s]
Dec 30 15:21:55.139: INFO: Created: latency-svc-tvxxq
Dec 30 15:21:55.145: INFO: Got endpoints: latency-svc-tvxxq [1.571804217s]
Dec 30 15:21:55.277: INFO: Created: latency-svc-rj98g
Dec 30 15:21:55.283: INFO: Got endpoints: latency-svc-rj98g [1.657674671s]
Dec 30 15:21:55.323: INFO: Created: latency-svc-5mmgb
Dec 30 15:21:55.336: INFO: Got endpoints: latency-svc-5mmgb [1.609025055s]
Dec 30 15:21:55.440: INFO: Created: latency-svc-4845r
Dec 30 15:21:55.482: INFO: Got endpoints: latency-svc-4845r [1.542288836s]
Dec 30 15:21:55.485: INFO: Created: latency-svc-974jl
Dec 30 15:21:55.490: INFO: Got endpoints: latency-svc-974jl [1.500604777s]
Dec 30 15:21:55.557: INFO: Created: latency-svc-hm8hk
Dec 30 15:21:55.576: INFO: Got endpoints: latency-svc-hm8hk [1.423280201s]
Dec 30 15:21:55.613: INFO: Created: latency-svc-762fp
Dec 30 15:21:55.617: INFO: Got endpoints: latency-svc-762fp [1.45137914s]
Dec 30 15:21:55.724: INFO: Created: latency-svc-bn2tv
Dec 30 15:21:55.728: INFO: Got endpoints: latency-svc-bn2tv [1.408870865s]
Dec 30 15:21:55.803: INFO: Created: latency-svc-fsnrv
Dec 30 15:21:55.803: INFO: Got endpoints: latency-svc-fsnrv [1.441407875s]
Dec 30 15:21:55.934: INFO: Created: latency-svc-pjxzk
Dec 30 15:21:55.942: INFO: Got endpoints: latency-svc-pjxzk [1.449550122s]
Dec 30 15:21:55.983: INFO: Created: latency-svc-nzjq2
Dec 30 15:21:55.987: INFO: Got endpoints: latency-svc-nzjq2 [1.312826683s]
Dec 30 15:21:56.163: INFO: Created: latency-svc-6hg65
Dec 30 15:21:56.173: INFO: Got endpoints: latency-svc-6hg65 [1.408658205s]
Dec 30 15:21:56.225: INFO: Created: latency-svc-7pw5v
Dec 30 15:21:56.262: INFO: Got endpoints: latency-svc-7pw5v [1.481655472s]
Dec 30 15:21:56.274: INFO: Created: latency-svc-9dhxf
Dec 30 15:21:56.377: INFO: Got endpoints: latency-svc-9dhxf [1.307952522s]
Dec 30 15:21:56.410: INFO: Created: latency-svc-l7wf8
Dec 30 15:21:56.412: INFO: Got endpoints: latency-svc-l7wf8 [1.32912532s]
Dec 30 15:21:56.444: INFO: Created: latency-svc-4cfb9
Dec 30 15:21:56.453: INFO: Got endpoints: latency-svc-4cfb9 [1.307516163s]
Dec 30 15:21:56.562: INFO: Created: latency-svc-slnct
Dec 30 15:21:56.573: INFO: Got endpoints: latency-svc-slnct [1.290038051s]
Dec 30 15:21:56.607: INFO: Created: latency-svc-lfq8f
Dec 30 15:21:56.620: INFO: Got endpoints: latency-svc-lfq8f [1.283365769s]
Dec 30 15:21:56.654: INFO: Created: latency-svc-nmx7s
Dec 30 15:21:56.737: INFO: Got endpoints: latency-svc-nmx7s [1.255469434s]
Dec 30 15:21:56.766: INFO: Created: latency-svc-lp4v5
Dec 30 15:21:56.786: INFO: Got endpoints: latency-svc-lp4v5 [1.2952558s]
Dec 30 15:21:56.835: INFO: Created: latency-svc-c6stm
Dec 30 15:21:56.948: INFO: Got endpoints: latency-svc-c6stm [1.37224859s]
Dec 30 15:21:56.973: INFO: Created: latency-svc-jkvm4
Dec 30 15:21:57.011: INFO: Created: latency-svc-hcxwq
Dec 30 15:21:57.011: INFO: Got endpoints: latency-svc-jkvm4 [1.393543337s]
Dec 30 15:21:57.093: INFO: Got endpoints: latency-svc-hcxwq [1.364931276s]
Dec 30 15:21:57.116: INFO: Created: latency-svc-jhnqf
Dec 30 15:21:57.120: INFO: Got endpoints: latency-svc-jhnqf [1.316015868s]
Dec 30 15:21:57.141: INFO: Created: latency-svc-tddff
Dec 30 15:21:57.179: INFO: Got endpoints: latency-svc-tddff [1.236997525s]
Dec 30 15:21:57.180: INFO: Created: latency-svc-z2cbw
Dec 30 15:21:57.254: INFO: Got endpoints: latency-svc-z2cbw [1.266439812s]
Dec 30 15:21:57.292: INFO: Created: latency-svc-ztsq6
Dec 30 15:21:57.306: INFO: Got endpoints: latency-svc-ztsq6 [1.133758767s]
Dec 30 15:21:57.355: INFO: Created: latency-svc-4qvl8
Dec 30 15:21:57.443: INFO: Got endpoints: latency-svc-4qvl8 [1.18011923s]
Dec 30 15:21:57.484: INFO: Created: latency-svc-lgsz2
Dec 30 15:21:57.513: INFO: Got endpoints: latency-svc-lgsz2 [1.135291567s]
Dec 30 15:21:57.699: INFO: Created: latency-svc-bd7zg
Dec 30 15:21:57.725: INFO: Got endpoints: latency-svc-bd7zg [1.312695721s]
Dec 30 15:21:57.752: INFO: Created: latency-svc-wltjh
Dec 30 15:21:57.887: INFO: Got endpoints: latency-svc-wltjh [1.434028099s]
Dec 30 15:21:57.891: INFO: Created: latency-svc-q2gzw
Dec 30 15:21:57.933: INFO: Got endpoints: latency-svc-q2gzw [1.35954072s]
Dec 30 15:21:57.938: INFO: Created: latency-svc-psfvw
Dec 30 15:21:57.945: INFO: Got endpoints: latency-svc-psfvw [1.325134248s]
Dec 30 15:21:58.065: INFO: Created: latency-svc-ncm9l
Dec 30 15:21:58.069: INFO: Got endpoints: latency-svc-ncm9l [1.331230761s]
Dec 30 15:21:58.131: INFO: Created: latency-svc-d8v4h
Dec 30 15:21:58.142: INFO: Got endpoints: latency-svc-d8v4h [1.35580215s]
Dec 30 15:21:58.273: INFO: Created: latency-svc-9bcbj
Dec 30 15:21:58.289: INFO: Got endpoints: latency-svc-9bcbj [1.340444667s]
Dec 30 15:21:58.343: INFO: Created: latency-svc-4xtkq
Dec 30 15:21:58.351: INFO: Got endpoints: latency-svc-4xtkq [1.340043793s]
Dec 30 15:21:58.451: INFO: Created: latency-svc-8zv2d
Dec 30 15:21:58.474: INFO: Got endpoints: latency-svc-8zv2d [1.380818089s]
Dec 30 15:21:59.158: INFO: Created: latency-svc-rc75l
Dec 30 15:21:59.162: INFO: Got endpoints: latency-svc-rc75l [2.042590243s]
Dec 30 15:21:59.206: INFO: Created: latency-svc-cpfjm
Dec 30 15:21:59.212: INFO: Got endpoints: latency-svc-cpfjm [2.032258825s]
Dec 30 15:21:59.336: INFO: Created: latency-svc-57tz6
Dec 30 15:21:59.347: INFO: Got endpoints: latency-svc-57tz6 [2.09320802s]
Dec 30 15:21:59.398: INFO: Created: latency-svc-tz45p
Dec 30 15:21:59.472: INFO: Got endpoints: latency-svc-tz45p [2.164942425s]
Dec 30 15:21:59.648: INFO: Created: latency-svc-mwhqf
Dec 30 15:21:59.669: INFO: Got endpoints: latency-svc-mwhqf [2.226549859s]
Dec 30 15:21:59.701: INFO: Created: latency-svc-p8l6z
Dec 30 15:21:59.720: INFO: Got endpoints: latency-svc-p8l6z [2.206428209s]
Dec 30 15:21:59.839: INFO: Created: latency-svc-z9l4w
Dec 30 15:21:59.842: INFO: Got endpoints: latency-svc-z9l4w [2.116933539s]
Dec 30 15:21:59.876: INFO: Created: latency-svc-2cdln
Dec 30 15:21:59.989: INFO: Got endpoints: latency-svc-2cdln [2.102384552s]
Dec 30 15:22:00.000: INFO: Created: latency-svc-ppzbh
Dec 30 15:22:00.014: INFO: Got endpoints: latency-svc-ppzbh [2.080627535s]
Dec 30 15:22:00.051: INFO: Created: latency-svc-7zdhb
Dec 30 15:22:00.083: INFO: Got endpoints: latency-svc-7zdhb [2.138169478s]
Dec 30 15:22:00.223: INFO: Created: latency-svc-jmc45
Dec 30 15:22:00.230: INFO: Got endpoints: latency-svc-jmc45 [2.160004888s]
Dec 30 15:22:00.279: INFO: Created: latency-svc-l4z4x
Dec 30 15:22:00.297: INFO: Got endpoints: latency-svc-l4z4x [2.155609067s]
Dec 30 15:22:00.319: INFO: Created: latency-svc-xqm7w
Dec 30 15:22:00.431: INFO: Got endpoints: latency-svc-xqm7w [2.141355952s]
Dec 30 15:22:00.449: INFO: Created: latency-svc-s8znn
Dec 30 15:22:00.474: INFO: Got endpoints: latency-svc-s8znn [2.122256664s]
Dec 30 15:22:00.485: INFO: Created: latency-svc-mxfmq
Dec 30 15:22:00.487: INFO: Got endpoints: latency-svc-mxfmq [2.012100189s]
Dec 30 15:22:00.535: INFO: Created: latency-svc-j4ccl
Dec 30 15:22:00.665: INFO: Got endpoints: latency-svc-j4ccl [1.502004583s]
Dec 30 15:22:00.679: INFO: Created: latency-svc-f5vh5
Dec 30 15:22:00.682: INFO: Got endpoints: latency-svc-f5vh5 [1.470196613s]
Dec 30 15:22:00.738: INFO: Created: latency-svc-c8sxr
Dec 30 15:22:00.741: INFO: Got endpoints: latency-svc-c8sxr [1.393922499s]
Dec 30 15:22:00.945: INFO: Created: latency-svc-plrrc
Dec 30 15:22:00.960: INFO: Got endpoints: latency-svc-plrrc [1.488215907s]
Dec 30 15:22:01.158: INFO: Created: latency-svc-h4xjk
Dec 30 15:22:01.172: INFO: Got endpoints: latency-svc-h4xjk [1.502044606s]
Dec 30 15:22:01.220: INFO: Created: latency-svc-7xnf9
Dec 30 15:22:01.226: INFO: Got endpoints: latency-svc-7xnf9 [1.505969484s]
Dec 30 15:22:01.336: INFO: Created: latency-svc-ddqxb
Dec 30 15:22:01.387: INFO: Created: latency-svc-x6242
Dec 30 15:22:01.387: INFO: Got endpoints: latency-svc-ddqxb [1.54549409s]
Dec 30 15:22:01.417: INFO: Got endpoints: latency-svc-x6242 [1.427294316s]
Dec 30 15:22:01.559: INFO: Created: latency-svc-r5h4b
Dec 30 15:22:01.565: INFO: Got endpoints: latency-svc-r5h4b [1.550771541s]
Dec 30 15:22:01.598: INFO: Created: latency-svc-5mvv6
Dec 30 15:22:01.609: INFO: Got endpoints: latency-svc-5mvv6 [1.525196422s]
Dec 30 15:22:01.642: INFO: Created: latency-svc-bxcdq
Dec 30 15:22:01.650: INFO: Got endpoints: latency-svc-bxcdq [1.420534534s]
Dec 30 15:22:01.735: INFO: Created: latency-svc-fpzh2
Dec 30 15:22:01.743: INFO: Got endpoints: latency-svc-fpzh2 [1.444605064s]
Dec 30 15:22:01.808: INFO: Created: latency-svc-852pk
Dec 30 15:22:01.826: INFO: Got endpoints: latency-svc-852pk [1.395179248s]
Dec 30 15:22:01.955: INFO: Created: latency-svc-n6rzr
Dec 30 15:22:01.966: INFO: Got endpoints: latency-svc-n6rzr [1.492379879s]
Dec 30 15:22:02.023: INFO: Created: latency-svc-9lx4r
Dec 30 15:22:02.034: INFO: Got endpoints: latency-svc-9lx4r [1.546646052s]
Dec 30 15:22:02.127: INFO: Created: latency-svc-47ltx
Dec 30 15:22:02.152: INFO: Got endpoints: latency-svc-47ltx [1.486300076s]
Dec 30 15:22:02.187: INFO: Created: latency-svc-dmm6n
Dec 30 15:22:02.205: INFO: Got endpoints: latency-svc-dmm6n [1.523119877s]
Dec 30 15:22:02.343: INFO: Created: latency-svc-59mqt
Dec 30 15:22:02.349: INFO: Got endpoints: latency-svc-59mqt [1.608144685s]
Dec 30 15:22:02.404: INFO: Created: latency-svc-4cjmc
Dec 30 15:22:02.405: INFO: Got endpoints: latency-svc-4cjmc [1.444267513s]
Dec 30 15:22:02.432: INFO: Created: latency-svc-vf6zk
Dec 30 15:22:02.438: INFO: Got endpoints: latency-svc-vf6zk [1.265773288s]
Dec 30 15:22:02.586: INFO: Created: latency-svc-chl8j
Dec 30 15:22:02.620: INFO: Got endpoints: latency-svc-chl8j [1.3934849s]
Dec 30 15:22:02.640: INFO: Created: latency-svc-vss2b
Dec 30 15:22:02.655: INFO: Got endpoints: latency-svc-vss2b [1.267303638s]
Dec 30 15:22:02.762: INFO: Created: latency-svc-h2wsp
Dec 30 15:22:02.781: INFO: Got endpoints: latency-svc-h2wsp [1.363450779s]
Dec 30 15:22:02.951: INFO: Created: latency-svc-n5g6g
Dec 30 15:22:02.983: INFO: Got endpoints: latency-svc-n5g6g [1.417273755s]
Dec 30 15:22:03.031: INFO: Created: latency-svc-bbtdf
Dec 30 15:22:03.182: INFO: Got endpoints: latency-svc-bbtdf [1.572601068s]
Dec 30 15:22:03.198: INFO: Created: latency-svc-74c9r
Dec 30 15:22:03.209: INFO: Got endpoints: latency-svc-74c9r [1.558490056s]
Dec 30 15:22:03.282: INFO: Created: latency-svc-8gp8m
Dec 30 15:22:03.367: INFO: Got endpoints: latency-svc-8gp8m [1.624670431s]
Dec 30 15:22:03.387: INFO: Created: latency-svc-dhknc
Dec 30 15:22:03.409: INFO: Got endpoints: latency-svc-dhknc [1.582245632s]
Dec 30 15:22:03.438: INFO: Created: latency-svc-29kd8
Dec 30 15:22:03.443: INFO: Got endpoints: latency-svc-29kd8 [1.476370774s]
Dec 30 15:22:03.584: INFO: Created: latency-svc-ntwwf
Dec 30 15:22:03.595: INFO: Got endpoints: latency-svc-ntwwf [1.560820989s]
Dec 30 15:22:03.674: INFO: Created: latency-svc-tzbst
Dec 30 15:22:03.675: INFO: Got endpoints: latency-svc-tzbst [1.523072589s]
Dec 30 15:22:03.783: INFO: Created: latency-svc-qkx9q
Dec 30 15:22:03.824: INFO: Got endpoints: latency-svc-qkx9q [1.618636435s]
Dec 30 15:22:04.134: INFO: Created: latency-svc-wplgr
Dec 30 15:22:04.277: INFO: Got endpoints: latency-svc-wplgr [1.927095772s]
Dec 30 15:22:04.297: INFO: Created: latency-svc-7fzh5
Dec 30 15:22:04.306: INFO: Got endpoints: latency-svc-7fzh5 [1.901422263s]
Dec 30 15:22:04.486: INFO: Created: latency-svc-xvmwv
Dec 30 15:22:04.552: INFO: Got endpoints: latency-svc-xvmwv [2.11471358s]
Dec 30 15:22:04.559: INFO: Created: latency-svc-n7lrv
Dec 30 15:22:04.576: INFO: Got endpoints: latency-svc-n7lrv [1.9549279s]
Dec 30 15:22:04.683: INFO: Created: latency-svc-qvwpc
Dec 30 15:22:04.692: INFO: Got endpoints: latency-svc-qvwpc [2.036482708s]
Dec 30 15:22:04.736: INFO: Created: latency-svc-jjmbx
Dec 30 15:22:04.864: INFO: Created: latency-svc-jxq4z
Dec 30 15:22:04.867: INFO: Got endpoints: latency-svc-jjmbx [2.085287537s]
Dec 30 15:22:04.888: INFO: Got endpoints: latency-svc-jxq4z [1.90529446s]
Dec 30 15:22:04.936: INFO: Created: latency-svc-g9b2d
Dec 30 15:22:04.952: INFO: Got endpoints: latency-svc-g9b2d [1.769661292s]
Dec 30 15:22:05.164: INFO: Created: latency-svc-7gqwh
Dec 30 15:22:05.217: INFO: Got endpoints: latency-svc-7gqwh [2.008127163s]
Dec 30 15:22:05.433: INFO: Created: latency-svc-ff8kl
Dec 30 15:22:05.480: INFO: Got endpoints: latency-svc-ff8kl [2.112387746s]
Dec 30 15:22:05.582: INFO: Created: latency-svc-zpgl9
Dec 30 15:22:05.592: INFO: Got endpoints: latency-svc-zpgl9 [2.183183198s]
Dec 30 15:22:05.659: INFO: Created: latency-svc-nb6fk
Dec 30 15:22:05.784: INFO: Created: latency-svc-ht6dl
Dec 30 15:22:05.784: INFO: Got endpoints: latency-svc-nb6fk [2.341449045s]
Dec 30 15:22:05.792: INFO: Got endpoints: latency-svc-ht6dl [2.196338854s]
Dec 30 15:22:05.854: INFO: Created: latency-svc-tlnmk
Dec 30 15:22:06.094: INFO: Got endpoints: latency-svc-tlnmk [2.419167394s]
Dec 30 15:22:06.162: INFO: Created: latency-svc-95kjz
Dec 30 15:22:06.174: INFO: Got endpoints: latency-svc-95kjz [2.349476428s]
Dec 30 15:22:06.368: INFO: Created: latency-svc-v8l8n
Dec 30 15:22:06.373: INFO: Got endpoints: latency-svc-v8l8n [2.096731824s]
Dec 30 15:22:06.436: INFO: Created: latency-svc-fv7h6
Dec 30 15:22:06.452: INFO: Got endpoints: latency-svc-fv7h6 [2.145807017s]
Dec 30 15:22:06.608: INFO: Created: latency-svc-dnlpn
Dec 30 15:22:06.666: INFO: Got endpoints: latency-svc-dnlpn [2.113639025s]
Dec 30 15:22:06.692: INFO: Created: latency-svc-6njr5
Dec 30 15:22:06.810: INFO: Got endpoints: latency-svc-6njr5 [2.234126563s]
Dec 30 15:22:06.847: INFO: Created: latency-svc-dlwrk
Dec 30 15:22:07.067: INFO: Created: latency-svc-cqnkk
Dec 30 15:22:07.068: INFO: Got endpoints: latency-svc-dlwrk [2.376218124s]
Dec 30 15:22:07.146: INFO: Got endpoints: latency-svc-cqnkk [2.279148566s]
Dec 30 15:22:07.160: INFO: Created: latency-svc-f62c7
Dec 30 15:22:07.377: INFO: Got endpoints: latency-svc-f62c7 [2.488462672s]
Dec 30 15:22:07.428: INFO: Created: latency-svc-h5hkd
Dec 30 15:22:07.434: INFO: Got endpoints: latency-svc-h5hkd [2.481916639s]
Dec 30 15:22:07.467: INFO: Created: latency-svc-x9dlr
Dec 30 15:22:07.555: INFO: Got endpoints: latency-svc-x9dlr [2.337584162s]
Dec 30 15:22:07.585: INFO: Created: latency-svc-4kw2k
Dec 30 15:22:07.609: INFO: Got endpoints: latency-svc-4kw2k [2.128292432s]
Dec 30 15:22:07.656: INFO: Created: latency-svc-6sqj2
Dec 30 15:22:07.757: INFO: Got endpoints: latency-svc-6sqj2 [2.165061545s]
Dec 30 15:22:07.761: INFO: Created: latency-svc-j8422
Dec 30 15:22:07.765: INFO: Got endpoints: latency-svc-j8422 [1.980742078s]
Dec 30 15:22:07.794: INFO: Created: latency-svc-smplt
Dec 30 15:22:07.798: INFO: Got endpoints: latency-svc-smplt [2.006396662s]
Dec 30 15:22:07.837: INFO: Created: latency-svc-r9cb8
Dec 30 15:22:07.840: INFO: Got endpoints: latency-svc-r9cb8 [1.745851024s]
Dec 30 15:22:07.931: INFO: Created: latency-svc-9pqgd
Dec 30 15:22:07.939: INFO: Got endpoints: latency-svc-9pqgd [1.765226502s]
Dec 30 15:22:07.963: INFO: Created: latency-svc-zkxlk
Dec 30 15:22:07.980: INFO: Got endpoints: latency-svc-zkxlk [1.606176497s]
Dec 30 15:22:08.140: INFO: Created: latency-svc-2g6gg
Dec 30 15:22:08.144: INFO: Got endpoints: latency-svc-2g6gg [1.691923612s]
Dec 30 15:22:08.183: INFO: Created: latency-svc-fmhc9
Dec 30 15:22:08.188: INFO: Got endpoints: latency-svc-fmhc9 [1.52098335s]
Dec 30 15:22:08.318: INFO: Created: latency-svc-4xmwz
Dec 30 15:22:08.350: INFO: Got endpoints: latency-svc-4xmwz [1.540007235s]
Dec 30 15:22:08.390: INFO: Created: latency-svc-q6tln
Dec 30 15:22:08.392: INFO: Got endpoints: latency-svc-q6tln [1.323388666s]
Dec 30 15:22:08.549: INFO: Created: latency-svc-48m4g
Dec 30 15:22:08.568: INFO: Got endpoints: latency-svc-48m4g [1.420854977s]
Dec 30 15:22:08.607: INFO: Created: latency-svc-s4mmz
Dec 30 15:22:08.622: INFO: Got endpoints: latency-svc-s4mmz [1.244996166s]
Dec 30 15:22:08.707: INFO: Created: latency-svc-2llvk
Dec 30 15:22:08.717: INFO: Got endpoints: latency-svc-2llvk [1.28294213s]
Dec 30 15:22:08.784: INFO: Created: latency-svc-bqgdv
Dec 30 15:22:08.876: INFO: Got endpoints: latency-svc-bqgdv [1.320258677s]
Dec 30 15:22:08.908: INFO: Created: latency-svc-kpwd6
Dec 30 15:22:08.914: INFO: Got endpoints: latency-svc-kpwd6 [1.305030973s]
Dec 30 15:22:09.115: INFO: Created: latency-svc-6ltgs
Dec 30 15:22:09.121: INFO: Got endpoints: latency-svc-6ltgs [1.363701479s]
Dec 30 15:22:09.165: INFO: Created: latency-svc-h99nm
Dec 30 15:22:09.181: INFO: Got endpoints: latency-svc-h99nm [1.415545845s]
Dec 30 15:22:09.298: INFO: Created: latency-svc-ttw28
Dec 30 15:22:09.310: INFO: Got endpoints: latency-svc-ttw28 [1.511448263s]
Dec 30 15:22:09.372: INFO: Created: latency-svc-nqd4f
Dec 30 15:22:09.390: INFO: Got endpoints: latency-svc-nqd4f [1.550229465s]
Dec 30 15:22:09.556: INFO: Created: latency-svc-zs9bf
Dec 30 15:22:09.817: INFO: Created: latency-svc-v5mv7
Dec 30 15:22:09.835: INFO: Got endpoints: latency-svc-zs9bf [1.894991381s]
Dec 30 15:22:09.852: INFO: Got endpoints: latency-svc-v5mv7 [1.87193343s]
Dec 30 15:22:09.908: INFO: Created: latency-svc-74hs4
Dec 30 15:22:10.071: INFO: Got endpoints: latency-svc-74hs4 [1.926635206s]
Dec 30 15:22:10.109: INFO: Created: latency-svc-76565
Dec 30 15:22:10.130: INFO: Got endpoints: latency-svc-76565 [1.942125166s]
Dec 30 15:22:10.264: INFO: Created: latency-svc-l5lgr
Dec 30 15:22:10.276: INFO: Got endpoints: latency-svc-l5lgr [1.925112469s]
Dec 30 15:22:10.312: INFO: Created: latency-svc-k9vk5
Dec 30 15:22:10.316: INFO: Got endpoints: latency-svc-k9vk5 [1.923808269s]
Dec 30 15:22:10.488: INFO: Created: latency-svc-bbtsv
Dec 30 15:22:10.500: INFO: Got endpoints: latency-svc-bbtsv [1.932610494s]
Dec 30 15:22:10.544: INFO: Created: latency-svc-h2lhf
Dec 30 15:22:10.558: INFO: Got endpoints: latency-svc-h2lhf [1.935938594s]
Dec 30 15:22:10.692: INFO: Created: latency-svc-cjks7
Dec 30 15:22:10.696: INFO: Got endpoints: latency-svc-cjks7 [1.979514163s]
Dec 30 15:22:10.716: INFO: Created: latency-svc-7gq5d
Dec 30 15:22:10.724: INFO: Got endpoints: latency-svc-7gq5d [1.84752828s]
Dec 30 15:22:10.758: INFO: Created: latency-svc-5h2z7
Dec 30 15:22:10.893: INFO: Got endpoints: latency-svc-5h2z7 [1.978798218s]
Dec 30 15:22:10.911: INFO: Created: latency-svc-j8ks6
Dec 30 15:22:10.918: INFO: Got endpoints: latency-svc-j8ks6 [1.796860955s]
Dec 30 15:22:10.954: INFO: Created: latency-svc-qfldg
Dec 30 15:22:10.961: INFO: Got endpoints: latency-svc-qfldg [1.780009703s]
Dec 30 15:22:11.115: INFO: Created: latency-svc-xxv5r
Dec 30 15:22:11.117: INFO: Got endpoints: latency-svc-xxv5r [1.806861546s]
Dec 30 15:22:11.162: INFO: Created: latency-svc-ccz9k
Dec 30 15:22:11.167: INFO: Got endpoints: latency-svc-ccz9k [1.776656925s]
Dec 30 15:22:11.211: INFO: Created: latency-svc-nwq4s
Dec 30 15:22:11.297: INFO: Got endpoints: latency-svc-nwq4s [1.461982867s]
Dec 30 15:22:11.330: INFO: Created: latency-svc-f4zrg
Dec 30 15:22:11.346: INFO: Got endpoints: latency-svc-f4zrg [1.493774377s]
Dec 30 15:22:11.458: INFO: Created: latency-svc-xsrzr
Dec 30 15:22:11.487: INFO: Got endpoints: latency-svc-xsrzr [1.415515428s]
Dec 30 15:22:11.503: INFO: Created: latency-svc-5zmmn
Dec 30 15:22:11.508: INFO: Got endpoints: latency-svc-5zmmn [1.377655966s]
Dec 30 15:22:11.627: INFO: Created: latency-svc-qp9pb
Dec 30 15:22:11.684: INFO: Got endpoints: latency-svc-qp9pb [1.407816871s]
Dec 30 15:22:11.686: INFO: Created: latency-svc-pq8bc
Dec 30 15:22:11.692: INFO: Got endpoints: latency-svc-pq8bc [1.376567756s]
Dec 30 15:22:11.726: INFO: Created: latency-svc-lll6g
Dec 30 15:22:11.786: INFO: Got endpoints: latency-svc-lll6g [1.285577152s]
Dec 30 15:22:11.809: INFO: Created: latency-svc-mx9zs
Dec 30 15:22:11.827: INFO: Got endpoints: latency-svc-mx9zs [1.268612404s]
Dec 30 15:22:11.863: INFO: Created: latency-svc-67xmk
Dec 30 15:22:11.977: INFO: Got endpoints: latency-svc-67xmk [1.280572011s]
Dec 30 15:22:11.996: INFO: Created: latency-svc-dfsmh
Dec 30 15:22:12.006: INFO: Got endpoints: latency-svc-dfsmh [1.282118064s]
Dec 30 15:22:12.064: INFO: Created: latency-svc-vchms
Dec 30 15:22:12.172: INFO: Got endpoints: latency-svc-vchms [1.278911784s]
Dec 30 15:22:12.195: INFO: Created: latency-svc-d4nwv
Dec 30 15:22:12.195: INFO: Got endpoints: latency-svc-d4nwv [1.277132138s]
Dec 30 15:22:12.241: INFO: Created: latency-svc-9ndx5
Dec 30 15:22:12.380: INFO: Got endpoints: latency-svc-9ndx5 [1.418921209s]
Dec 30 15:22:12.420: INFO: Created: latency-svc-9fgxl
Dec 30 15:22:12.451: INFO: Got endpoints: latency-svc-9fgxl [1.334317404s]
Dec 30 15:22:12.556: INFO: Created: latency-svc-zvrxc
Dec 30 15:22:12.617: INFO: Got endpoints: latency-svc-zvrxc [1.449013227s]
Dec 30 15:22:12.627: INFO: Created: latency-svc-j946w
Dec 30 15:22:12.716: INFO: Got endpoints: latency-svc-j946w [1.41858571s]
Dec 30 15:22:12.735: INFO: Created: latency-svc-p94cm
Dec 30 15:22:12.739: INFO: Got endpoints: latency-svc-p94cm [1.392552124s]
Dec 30 15:22:12.789: INFO: Created: latency-svc-fzmx8
Dec 30 15:22:12.892: INFO: Created: latency-svc-tpvk9
Dec 30 15:22:12.892: INFO: Got endpoints: latency-svc-fzmx8 [1.405262937s]
Dec 30 15:22:12.910: INFO: Got endpoints: latency-svc-tpvk9 [1.401651967s]
Dec 30 15:22:12.911: INFO: Latencies: [159.169552ms 324.283148ms 369.913164ms 482.139088ms 573.969312ms 687.359168ms 737.777576ms 920.016589ms 987.192859ms 1.133758767s 1.135291567s 1.14994885s 1.18011923s 1.199466636s 1.236997525s 1.244996166s 1.255469434s 1.265773288s 1.266439812s 1.267303638s 1.268612404s 1.277132138s 1.278911784s 1.280572011s 1.282118064s 1.28294213s 1.283365769s 1.285577152s 1.290038051s 1.2952558s 1.305030973s 1.306412485s 1.307516163s 1.307952522s 1.312695721s 1.312826683s 1.316015868s 1.320258677s 1.323388666s 1.324012428s 1.325134248s 1.32912532s 1.331230761s 1.334317404s 1.340043793s 1.340444667s 1.340795468s 1.35580215s 1.35954072s 1.362478096s 1.363450779s 1.363701479s 1.364931276s 1.37224859s 1.376567756s 1.377655966s 1.380818089s 1.382839418s 1.385499065s 1.391438487s 1.391459296s 1.392552124s 1.3934849s 1.393543337s 1.393922499s 1.395179248s 1.399773995s 1.401651967s 1.405262937s 1.407816871s 1.408658205s 1.408870865s 1.415515428s 1.415545845s 1.417273755s 1.41858571s 1.418921209s 1.420534534s 1.420854977s 1.423280201s 1.427294316s 1.429339387s 1.434028099s 1.435999751s 1.441407875s 1.442948578s 1.444230229s 1.444267513s 1.444605064s 1.449013227s 1.449550122s 1.45137914s 1.461982867s 1.470196613s 1.476370774s 1.481655472s 1.485810782s 1.486300076s 1.488215907s 1.490337541s 1.490885145s 1.492379879s 1.493774377s 1.500604777s 1.502004583s 1.502044606s 1.505969484s 1.507103708s 1.510236379s 1.511448263s 1.52098335s 1.523072589s 1.523119877s 1.525196422s 1.538176194s 1.540007235s 1.542288836s 1.54549409s 1.546646052s 1.550229465s 1.550771541s 1.552157214s 1.558490056s 1.559450997s 1.560820989s 1.567531969s 1.571171049s 1.571804217s 1.572601068s 1.582245632s 1.606176497s 1.608144685s 1.609025055s 1.618636435s 1.624670431s 1.655597711s 1.657674671s 1.674536966s 1.684811087s 1.691923612s 1.745851024s 1.765226502s 1.769661292s 1.776656925s 1.780009703s 1.796860955s 1.806861546s 1.84752828s 1.87193343s 1.894991381s 1.901422263s 1.90529446s 1.923808269s 1.925112469s 1.926635206s 1.927095772s 1.932610494s 1.935938594s 1.942125166s 1.9549279s 1.978798218s 1.979514163s 1.980742078s 2.006396662s 2.008127163s 2.012100189s 2.032258825s 2.036482708s 2.042590243s 2.080627535s 2.085287537s 2.09320802s 2.096731824s 2.102384552s 2.112387746s 2.113639025s 2.11471358s 2.116933539s 2.122256664s 2.128292432s 2.138169478s 2.141355952s 2.145807017s 2.155609067s 2.160004888s 2.164942425s 2.165061545s 2.183183198s 2.196338854s 2.206428209s 2.226549859s 2.234126563s 2.279148566s 2.337584162s 2.341449045s 2.349476428s 2.376218124s 2.419167394s 2.481916639s 2.488462672s]
Dec 30 15:22:12.911: INFO: 50 %ile: 1.490885145s
Dec 30 15:22:12.911: INFO: 90 %ile: 2.138169478s
Dec 30 15:22:12.911: INFO: 99 %ile: 2.481916639s
Dec 30 15:22:12.911: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:22:12.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9165" for this suite.
Dec 30 15:22:48.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:22:49.033: INFO: namespace svc-latency-9165 deletion completed in 36.111857501s

• [SLOW TEST:66.216 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
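
The summary above boils 200 endpoint-propagation samples down to 50th/90th/99th percentiles. A plain nearest-rank sketch of that reduction — the framework's exact index rule may differ slightly, and the sample values below are just a few of the observed durations, in nanoseconds:

package main

import (
    "fmt"
    "sort"
    "time"
)

// percentile returns a nearest-rank q-th percentile of a latency sample.
// It sorts a copy so the caller's slice is left untouched.
func percentile(latencies []time.Duration, q int) time.Duration {
    sorted := append([]time.Duration(nil), latencies...)
    sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
    idx := q * len(sorted) / 100
    if idx >= len(sorted) {
        idx = len(sorted) - 1
    }
    return sorted[idx]
}

func main() {
    samples := []time.Duration{ // a handful of the observed values above
        159169552, 1490885145, 2138169478, 2481916639, 2488462672,
    }
    for _, q := range []int{50, 90, 99} {
        fmt.Printf("%d %%ile: %v\n", q, percentile(samples, q))
    }
}
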
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:22:49.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-908844be-f60d-4c47-ac1d-37f3fac31a63 in namespace container-probe-6940
Dec 30 15:22:59.197: INFO: Started pod liveness-908844be-f60d-4c47-ac1d-37f3fac31a63 in namespace container-probe-6940
STEP: checking the pod's current state and verifying that restartCount is present
Dec 30 15:22:59.201: INFO: Initial restart count of pod liveness-908844be-f60d-4c47-ac1d-37f3fac31a63 is 0
Dec 30 15:23:21.327: INFO: Restart count of pod container-probe-6940/liveness-908844be-f60d-4c47-ac1d-37f3fac31a63 is now 1 (22.126289612s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:23:21.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6940" for this suite.
Dec 30 15:23:27.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:23:28.000: INFO: namespace container-probe-6940 deletion completed in 6.580538769s

• [SLOW TEST:38.967 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
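
The probed pod carries an HTTP GET liveness probe against /healthz; once the server starts failing it, the kubelet restarts the container, which is the restartCount 0 -> 1 transition logged at 15:23:21. A minimal sketch of the probe wiring (v1.Handler is embedded in Probe at this API vintage, later renamed ProbeHandler; the image, args, port, and thresholds are illustrative):

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// livenessPod attaches an HTTP liveness probe; a failing /healthz
// causes the kubelet to restart the container.
func livenessPod() *v1.Pod {
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:  "liveness",
                Image: "k8s.gcr.io/liveness", // stand-in for the suite's image
                Args:  []string{"/server"},
                LivenessProbe: &v1.Probe{
                    Handler: v1.Handler{ // v1.ProbeHandler in newer API versions
                        HTTPGet: &v1.HTTPGetAction{
                            Path: "/healthz",
                            Port: intstr.FromInt(8080),
                        },
                    },
                    InitialDelaySeconds: 15,
                    FailureThreshold:    1,
                },
            }},
        },
    }
}

func main() { fmt.Println(livenessPod().Name) }
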
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:23:28.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8494
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8494
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8494
Dec 30 15:23:28.153: INFO: Found 0 stateful pods, waiting for 1
Dec 30 15:23:38.162: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 30 15:23:38.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8494 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 15:23:40.952: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 30 15:23:40.952: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 15:23:40.952: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 15:23:40.969: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 30 15:23:50.975: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 15:23:50.975: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 15:23:51.101: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998423s
Dec 30 15:23:52.157: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.883082768s
Dec 30 15:23:53.165: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.827669937s
Dec 30 15:23:54.178: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.819859245s
Dec 30 15:23:55.190: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.807306359s
Dec 30 15:23:56.203: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.794408517s
Dec 30 15:23:57.220: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.781912281s
Dec 30 15:23:58.231: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.764437261s
Dec 30 15:23:59.240: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.753939111s
Dec 30 15:24:00.251: INFO: Verifying statefulset ss doesn't scale past 1 for another 744.821554ms
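
The countdown above is just a timed poll asserting the StatefulSet never reports more than one replica while ss-0 is unready. A rough client-go sketch of that check, using the pre-context Get signature matching this release line — the helper name is illustrative, and the kubeconfig path is the run's own:

package main

import (
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// confirmNoScale polls the StatefulSet for the given window and fails if it
// ever reports more than maxReplicas, mirroring the countdown above.
func confirmNoScale(cs kubernetes.Interface, ns, name string, maxReplicas int32, window time.Duration) error {
    deadline := time.Now().Add(window)
    for time.Now().Before(deadline) {
        ss, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if ss.Status.Replicas > maxReplicas {
            return fmt.Errorf("statefulset %s scaled past %d: %d", name, maxReplicas, ss.Status.Replicas)
        }
        time.Sleep(time.Second)
    }
    return nil
}

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(config)
    if err := confirmNoScale(cs, "statefulset-8494", "ss", 1, 10*time.Second); err != nil {
        panic(err)
    }
}
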
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8494
Dec 30 15:24:01.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 15:24:01.970: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 30 15:24:01.971: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 15:24:01.971: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 15:24:01.989: INFO: Found 1 stateful pods, waiting for 3
Dec 30 15:24:11.999: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 15:24:11.999: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 15:24:11.999: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 30 15:24:22.000: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 15:24:22.000: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 30 15:24:22.000: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 30 15:24:22.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8494 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 15:24:23.029: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 30 15:24:23.029: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 15:24:23.029: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 15:24:23.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8494 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 15:24:23.541: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 30 15:24:23.541: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 15:24:23.541: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 15:24:23.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8494 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 30 15:24:24.369: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 30 15:24:24.370: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 30 15:24:24.370: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 30 15:24:24.370: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 15:24:24.394: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 30 15:24:34.413: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 15:24:34.413: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 15:24:34.413: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 30 15:24:34.432: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999998956s
Dec 30 15:24:35.442: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994952894s
Dec 30 15:24:36.452: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985186793s
Dec 30 15:24:37.461: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.975367031s
Dec 30 15:24:38.479: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.966142s
Dec 30 15:24:39.492: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.9478961s
Dec 30 15:24:40.508: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.935322904s
Dec 30 15:24:41.524: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.919200571s
Dec 30 15:24:42.543: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.902473992s
Dec 30 15:24:43.576: INFO: Verifying statefulset ss doesn't scale past 3 for another 884.082944ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8494
Dec 30 15:24:44.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 15:24:45.199: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 30 15:24:45.199: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 15:24:45.199: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 15:24:45.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8494 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 15:24:45.480: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 30 15:24:45.480: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 15:24:45.480: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 15:24:45.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8494 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 30 15:24:46.254: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 30 15:24:46.254: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 30 15:24:46.254: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 30 15:24:46.254: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
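The reverse-order check can also be watched interactively; a sketch reusing this run's names (the namespace disappears once the suite tears down):

# Scale to zero and watch terminations; with OrderedReady pod management
# the ordinals should disappear highest-first (ss-2, then ss-1, then ss-0).
kubectl --kubeconfig=/root/.kube/config scale statefulset ss \
  --replicas=0 --namespace=statefulset-8494
kubectl --kubeconfig=/root/.kube/config get pods \
  --namespace=statefulset-8494 --watch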
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 30 15:25:26.283: INFO: Deleting all statefulset in ns statefulset-8494
Dec 30 15:25:26.288: INFO: Scaling statefulset ss to 0
Dec 30 15:25:26.297: INFO: Waiting for statefulset status.replicas updated to 0
Dec 30 15:25:26.299: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:25:26.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8494" for this suite.
Dec 30 15:25:32.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:25:32.478: INFO: namespace statefulset-8494 deletion completed in 6.122459133s

• [SLOW TEST:124.477 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
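The ordering and halting behavior exercised by this spec follows from two parts of the StatefulSet spec: podManagementPolicy (OrderedReady by default) and the readiness probe that the mv of index.html deliberately breaks. A minimal sketch of such a StatefulSet; the names, labels, and probe here are illustrative, not the e2e framework's actual manifest:

cat <<'EOF' | kubectl apply --namespace=default -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: ss-svc
  replicas: 1
  podManagementPolicy: OrderedReady   # default; scale one ordinal at a time
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        readinessProbe:               # moving index.html away makes this fail
          httpGet:
            path: /index.html
            port: 80
EOF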
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:25:32.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-5570875c-45e9-469f-8445-27912fa7a3f8
STEP: Creating a pod to test consume configMaps
Dec 30 15:25:32.594: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-326c9798-0e2b-4b77-aefa-6ea7126349d7" in namespace "projected-3894" to be "success or failure"
Dec 30 15:25:32.621: INFO: Pod "pod-projected-configmaps-326c9798-0e2b-4b77-aefa-6ea7126349d7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.775517ms
Dec 30 15:25:34.637: INFO: Pod "pod-projected-configmaps-326c9798-0e2b-4b77-aefa-6ea7126349d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042773088s
Dec 30 15:25:36.643: INFO: Pod "pod-projected-configmaps-326c9798-0e2b-4b77-aefa-6ea7126349d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049294383s
Dec 30 15:25:38.657: INFO: Pod "pod-projected-configmaps-326c9798-0e2b-4b77-aefa-6ea7126349d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06309797s
Dec 30 15:25:40.674: INFO: Pod "pod-projected-configmaps-326c9798-0e2b-4b77-aefa-6ea7126349d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079501323s
STEP: Saw pod success
Dec 30 15:25:40.674: INFO: Pod "pod-projected-configmaps-326c9798-0e2b-4b77-aefa-6ea7126349d7" satisfied condition "success or failure"
Dec 30 15:25:40.680: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-326c9798-0e2b-4b77-aefa-6ea7126349d7 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 30 15:25:40.840: INFO: Waiting for pod pod-projected-configmaps-326c9798-0e2b-4b77-aefa-6ea7126349d7 to disappear
Dec 30 15:25:40.849: INFO: Pod pod-projected-configmaps-326c9798-0e2b-4b77-aefa-6ea7126349d7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:25:40.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3894" for this suite.
Dec 30 15:25:46.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:25:47.086: INFO: namespace projected-3894 deletion completed in 6.226115761s

• [SLOW TEST:14.608 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
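The spec above mounts a ConfigMap through a projected volume and remaps its key to a different file name inside the container. A minimal reproduction; all names here (cm-demo, projected-demo, the key and path) are hypothetical, not the test's generated ones:

kubectl create configmap cm-demo --from-literal=data-1=value-1 --namespace=default
cat <<'EOF' | kubectl apply --namespace=default -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/remapped-key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-demo
          items:
          - key: data-1              # ConfigMap key...
            path: remapped-key       # ...exposed under a different file name
EOF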
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:25:47.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 30 15:25:47.267: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c6936746-4c09-4c70-9c61-ea5369660fd0", Controller:(*bool)(0xc00211cb52), BlockOwnerDeletion:(*bool)(0xc00211cb53)}}
Dec 30 15:25:47.335: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"4d7bb976-0152-4eb9-9cf3-1dacebc977f2", Controller:(*bool)(0xc0030be192), BlockOwnerDeletion:(*bool)(0xc0030be193)}}
Dec 30 15:25:47.399: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"3b04b6ae-9afa-41f9-97d7-1bbb4c0f7180", Controller:(*bool)(0xc002877fc2), BlockOwnerDeletion:(*bool)(0xc002877fc3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:25:52.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-773" for this suite.
Dec 30 15:25:58.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:25:58.684: INFO: namespace gc-773 deletion completed in 6.239134533s

• [SLOW TEST:11.596 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
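Per the OwnerReferences dumped above, the three pods form a cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), and the garbage collector still reclaims all of them instead of deadlocking. While such pods exist, their owner links can be inspected from the CLI; a sketch reusing this run's names:

# Print each pod together with the name of its first owner reference.
for p in pod1 pod2 pod3; do
  kubectl --kubeconfig=/root/.kube/config get pod "$p" --namespace=gc-773 \
    -o jsonpath='{.metadata.name} owned-by {.metadata.ownerReferences[0].name}{"\n"}'
done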
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:25:58.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-91993de3-3753-44c9-9144-07b15c49bc98
STEP: Creating a pod to test consume configMaps
Dec 30 15:25:58.831: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-16a5a100-8c40-42f2-8135-6eb758dc5e83" in namespace "projected-6829" to be "success or failure"
Dec 30 15:25:58.839: INFO: Pod "pod-projected-configmaps-16a5a100-8c40-42f2-8135-6eb758dc5e83": Phase="Pending", Reason="", readiness=false. Elapsed: 7.875143ms
Dec 30 15:26:00.848: INFO: Pod "pod-projected-configmaps-16a5a100-8c40-42f2-8135-6eb758dc5e83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016617319s
Dec 30 15:26:02.868: INFO: Pod "pod-projected-configmaps-16a5a100-8c40-42f2-8135-6eb758dc5e83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036283877s
Dec 30 15:26:04.876: INFO: Pod "pod-projected-configmaps-16a5a100-8c40-42f2-8135-6eb758dc5e83": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044367934s
Dec 30 15:26:06.890: INFO: Pod "pod-projected-configmaps-16a5a100-8c40-42f2-8135-6eb758dc5e83": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058389919s
Dec 30 15:26:08.901: INFO: Pod "pod-projected-configmaps-16a5a100-8c40-42f2-8135-6eb758dc5e83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069724781s
STEP: Saw pod success
Dec 30 15:26:08.901: INFO: Pod "pod-projected-configmaps-16a5a100-8c40-42f2-8135-6eb758dc5e83" satisfied condition "success or failure"
Dec 30 15:26:08.905: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-16a5a100-8c40-42f2-8135-6eb758dc5e83 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 30 15:26:09.062: INFO: Waiting for pod pod-projected-configmaps-16a5a100-8c40-42f2-8135-6eb758dc5e83 to disappear
Dec 30 15:26:09.073: INFO: Pod pod-projected-configmaps-16a5a100-8c40-42f2-8135-6eb758dc5e83 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:26:09.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6829" for this suite.
Dec 30 15:26:15.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:26:15.245: INFO: namespace projected-6829 deletion completed in 6.156791896s

• [SLOW TEST:16.560 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
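This spec differs from the earlier projected-ConfigMap test by consuming the same ConfigMap from two volumes in one pod. A minimal reproduction; all names are hypothetical:

kubectl create configmap cm-demo --from-literal=data-1=value-1 --namespace=default
cat <<'EOF' | kubectl apply --namespace=default -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "cat /etc/cfg-one/data-1 /etc/cfg-two/data-1"]
    volumeMounts:
    - name: cfg-one
      mountPath: /etc/cfg-one
    - name: cfg-two
      mountPath: /etc/cfg-two
  volumes:
  - name: cfg-one                    # same ConfigMap...
    projected:
      sources:
      - configMap:
          name: cm-demo
  - name: cfg-two                    # ...mounted a second time
    projected:
      sources:
      - configMap:
          name: cm-demo
EOF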
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:26:15.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 30 15:26:15.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5204'
Dec 30 15:26:15.868: INFO: stderr: ""
Dec 30 15:26:15.868: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 30 15:26:15.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5204'
Dec 30 15:26:16.153: INFO: stderr: ""
Dec 30 15:26:16.154: INFO: stdout: "update-demo-nautilus-89psv update-demo-nautilus-wnqmz "
Dec 30 15:26:16.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89psv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:16.328: INFO: stderr: ""
Dec 30 15:26:16.328: INFO: stdout: ""
Dec 30 15:26:16.328: INFO: update-demo-nautilus-89psv is created but not running
Dec 30 15:26:21.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5204'
Dec 30 15:26:23.174: INFO: stderr: ""
Dec 30 15:26:23.174: INFO: stdout: "update-demo-nautilus-89psv update-demo-nautilus-wnqmz "
Dec 30 15:26:23.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89psv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:23.688: INFO: stderr: ""
Dec 30 15:26:23.688: INFO: stdout: ""
Dec 30 15:26:23.688: INFO: update-demo-nautilus-89psv is created but not running
Dec 30 15:26:28.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5204'
Dec 30 15:26:28.838: INFO: stderr: ""
Dec 30 15:26:28.838: INFO: stdout: "update-demo-nautilus-89psv update-demo-nautilus-wnqmz "
Dec 30 15:26:28.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89psv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:28.993: INFO: stderr: ""
Dec 30 15:26:28.993: INFO: stdout: "true"
Dec 30 15:26:28.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89psv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:29.129: INFO: stderr: ""
Dec 30 15:26:29.129: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 15:26:29.129: INFO: validating pod update-demo-nautilus-89psv
Dec 30 15:26:29.143: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 15:26:29.143: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 30 15:26:29.143: INFO: update-demo-nautilus-89psv is verified up and running
Dec 30 15:26:29.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wnqmz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:29.214: INFO: stderr: ""
Dec 30 15:26:29.214: INFO: stdout: "true"
Dec 30 15:26:29.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wnqmz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:29.289: INFO: stderr: ""
Dec 30 15:26:29.289: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 15:26:29.289: INFO: validating pod update-demo-nautilus-wnqmz
Dec 30 15:26:29.296: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 15:26:29.296: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 30 15:26:29.296: INFO: update-demo-nautilus-wnqmz is verified up and running
STEP: scaling down the replication controller
Dec 30 15:26:29.300: INFO: scanned /root for discovery docs: 
Dec 30 15:26:29.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5204'
Dec 30 15:26:30.461: INFO: stderr: ""
Dec 30 15:26:30.462: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 30 15:26:30.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5204'
Dec 30 15:26:30.636: INFO: stderr: ""
Dec 30 15:26:30.636: INFO: stdout: "update-demo-nautilus-89psv update-demo-nautilus-wnqmz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 30 15:26:35.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5204'
Dec 30 15:26:35.811: INFO: stderr: ""
Dec 30 15:26:35.811: INFO: stdout: "update-demo-nautilus-89psv "
Dec 30 15:26:35.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89psv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:35.960: INFO: stderr: ""
Dec 30 15:26:35.960: INFO: stdout: "true"
Dec 30 15:26:35.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89psv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:36.074: INFO: stderr: ""
Dec 30 15:26:36.074: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 15:26:36.074: INFO: validating pod update-demo-nautilus-89psv
Dec 30 15:26:36.085: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 15:26:36.085: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 30 15:26:36.085: INFO: update-demo-nautilus-89psv is verified up and running
STEP: scaling up the replication controller
Dec 30 15:26:36.088: INFO: scanned /root for discovery docs: 
Dec 30 15:26:36.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5204'
Dec 30 15:26:37.435: INFO: stderr: ""
Dec 30 15:26:37.435: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 30 15:26:37.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5204'
Dec 30 15:26:37.556: INFO: stderr: ""
Dec 30 15:26:37.556: INFO: stdout: "update-demo-nautilus-89psv update-demo-nautilus-dxxzh "
Dec 30 15:26:37.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89psv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:37.674: INFO: stderr: ""
Dec 30 15:26:37.674: INFO: stdout: "true"
Dec 30 15:26:37.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89psv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:37.809: INFO: stderr: ""
Dec 30 15:26:37.809: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 15:26:37.809: INFO: validating pod update-demo-nautilus-89psv
Dec 30 15:26:37.822: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 15:26:37.822: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 30 15:26:37.822: INFO: update-demo-nautilus-89psv is verified up and running
Dec 30 15:26:37.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dxxzh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:37.963: INFO: stderr: ""
Dec 30 15:26:37.963: INFO: stdout: ""
Dec 30 15:26:37.963: INFO: update-demo-nautilus-dxxzh is created but not running
Dec 30 15:26:42.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5204'
Dec 30 15:26:43.119: INFO: stderr: ""
Dec 30 15:26:43.119: INFO: stdout: "update-demo-nautilus-89psv update-demo-nautilus-dxxzh "
Dec 30 15:26:43.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89psv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:43.320: INFO: stderr: ""
Dec 30 15:26:43.320: INFO: stdout: "true"
Dec 30 15:26:43.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89psv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:43.413: INFO: stderr: ""
Dec 30 15:26:43.414: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 15:26:43.414: INFO: validating pod update-demo-nautilus-89psv
Dec 30 15:26:43.422: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 15:26:43.423: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 30 15:26:43.423: INFO: update-demo-nautilus-89psv is verified up and running
Dec 30 15:26:43.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dxxzh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:43.495: INFO: stderr: ""
Dec 30 15:26:43.495: INFO: stdout: "true"
Dec 30 15:26:43.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dxxzh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5204'
Dec 30 15:26:43.634: INFO: stderr: ""
Dec 30 15:26:43.634: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 30 15:26:43.634: INFO: validating pod update-demo-nautilus-dxxzh
Dec 30 15:26:43.642: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 30 15:26:43.643: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 30 15:26:43.643: INFO: update-demo-nautilus-dxxzh is verified up and running
STEP: using delete to clean up resources
Dec 30 15:26:43.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5204'
Dec 30 15:26:43.729: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 30 15:26:43.730: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 30 15:26:43.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5204'
Dec 30 15:26:43.887: INFO: stderr: "No resources found.\n"
Dec 30 15:26:43.887: INFO: stdout: ""
Dec 30 15:26:43.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5204 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 30 15:26:44.157: INFO: stderr: ""
Dec 30 15:26:44.157: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:26:44.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5204" for this suite.
Dec 30 15:27:06.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:27:06.493: INFO: namespace kubectl-5204 deletion completed in 22.300829788s

• [SLOW TEST:51.246 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
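The scale-down and scale-up above go through kubectl scale against the replication controller, with readiness polled via a go-template query. A sketch of the same operations, plus a jsonpath rendering of the readiness check (the jsonpath form is an assumed equivalent, not what the test runs):

# Scale the replication controller, as the test does at 15:26:29 and 15:26:36.
kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus \
  --replicas=1 --timeout=5m --namespace=kubectl-5204

# Print each pod with its first container's running start time;
# an empty value after "running=" means the container is not running yet.
kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo \
  --namespace=kubectl-5204 \
  -o jsonpath='{range .items[*]}{.metadata.name} running={.status.containerStatuses[0].state.running.startedAt}{"\n"}{end}'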
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 30 15:27:06.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 30 15:27:06.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-6545'
Dec 30 15:27:06.816: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 30 15:27:06.816: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Dec 30 15:27:08.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6545'
Dec 30 15:27:09.050: INFO: stderr: ""
Dec 30 15:27:09.050: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 30 15:27:09.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6545" for this suite.
Dec 30 15:27:15.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 30 15:27:15.187: INFO: namespace kubectl-6545 deletion completed in 6.122387868s

• [SLOW TEST:8.693 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
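The stderr at 15:27:06 already flags the --generator=deployment/apps.v1 form as deprecated. A non-deprecated way to create and then remove the same deployment, reusing this run's names:

# Non-deprecated equivalent of the generator-based kubectl run above:
kubectl --kubeconfig=/root/.kube/config create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6545

# And the cleanup the test performs in its AfterEach:
kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment \
  --namespace=kubectl-6545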
S
Dec 30 15:27:15.187: INFO: Running AfterSuite actions on all nodes
Dec 30 15:27:15.188: INFO: Running AfterSuite actions on node 1
Dec 30 15:27:15.188: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 9063.111 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (9063.49s)
FAIL