I1224 12:56:09.136623 8 e2e.go:243] Starting e2e run "6726ca72-d209-4ad3-becc-472ec83926f1" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577192167 - Will randomize all specs
Will run 215 of 4412 specs
Dec 24 12:56:09.510: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 12:56:09.514: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 24 12:56:09.547: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 24 12:56:09.589: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 24 12:56:09.589: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 24 12:56:09.589: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 24 12:56:09.602: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 24 12:56:09.602: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 24 12:56:09.602: INFO: e2e test version: v1.15.7
Dec 24 12:56:09.604: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 12:56:09.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Dec 24 12:56:09.694: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 24 12:56:09.810: INFO: Waiting up to 5m0s for pod "downward-api-0ac7c475-0e50-4016-abf2-b157e5c09f22" in namespace "downward-api-257" to be "success or failure"
Dec 24 12:56:09.840: INFO: Pod "downward-api-0ac7c475-0e50-4016-abf2-b157e5c09f22": Phase="Pending", Reason="", readiness=false. Elapsed: 30.036351ms
Dec 24 12:56:11.885: INFO: Pod "downward-api-0ac7c475-0e50-4016-abf2-b157e5c09f22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075104531s
Dec 24 12:56:13.939: INFO: Pod "downward-api-0ac7c475-0e50-4016-abf2-b157e5c09f22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129060971s
Dec 24 12:56:15.978: INFO: Pod "downward-api-0ac7c475-0e50-4016-abf2-b157e5c09f22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168131039s
Dec 24 12:56:17.995: INFO: Pod "downward-api-0ac7c475-0e50-4016-abf2-b157e5c09f22": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18505916s
Dec 24 12:56:20.008: INFO: Pod "downward-api-0ac7c475-0e50-4016-abf2-b157e5c09f22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.198433407s
STEP: Saw pod success
Dec 24 12:56:20.008: INFO: Pod "downward-api-0ac7c475-0e50-4016-abf2-b157e5c09f22" satisfied condition "success or failure"
Dec 24 12:56:20.012: INFO: Trying to get logs from node iruya-node pod downward-api-0ac7c475-0e50-4016-abf2-b157e5c09f22 container dapi-container:
STEP: delete the pod
Dec 24 12:56:20.098: INFO: Waiting for pod downward-api-0ac7c475-0e50-4016-abf2-b157e5c09f22 to disappear
Dec 24 12:56:20.147: INFO: Pod downward-api-0ac7c475-0e50-4016-abf2-b157e5c09f22 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 12:56:20.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-257" for this suite.
Dec 24 12:56:26.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:56:26.327: INFO: namespace downward-api-257 deletion completed in 6.16852743s
• [SLOW TEST:16.723 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
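For reference, the pod that a downward-api run like the one above exercises boils down to a manifest of roughly the following shape; the names, image, and resource values here are illustrative assumptions, not the test's exact spec (requests.cpu/requests.memory are wired up the same way as the limits shown):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.31
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
          divisor: 1m       # report millicores; default divisor rounds up to whole cores
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.memory
EOF

Once the pod succeeds, kubectl logs downward-api-demo should show CPU_LIMIT=500 and MEMORY_LIMIT=67108864 (memory defaults to bytes).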
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 12:56:26.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-017e7d55-02fa-4221-8c25-063f9c6ccb0d
STEP: Creating a pod to test consume secrets
Dec 24 12:56:26.504: INFO: Waiting up to 5m0s for pod "pod-secrets-ee6fb7bf-c6e0-4a1c-9818-9dc0f8cb9411" in namespace "secrets-5100" to be "success or failure"
Dec 24 12:56:26.559: INFO: Pod "pod-secrets-ee6fb7bf-c6e0-4a1c-9818-9dc0f8cb9411": Phase="Pending", Reason="", readiness=false. Elapsed: 54.952616ms
Dec 24 12:56:28.578: INFO: Pod "pod-secrets-ee6fb7bf-c6e0-4a1c-9818-9dc0f8cb9411": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07412216s
Dec 24 12:56:30.602: INFO: Pod "pod-secrets-ee6fb7bf-c6e0-4a1c-9818-9dc0f8cb9411": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097431069s
Dec 24 12:56:32.612: INFO: Pod "pod-secrets-ee6fb7bf-c6e0-4a1c-9818-9dc0f8cb9411": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107853274s
Dec 24 12:56:34.627: INFO: Pod "pod-secrets-ee6fb7bf-c6e0-4a1c-9818-9dc0f8cb9411": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122245435s
Dec 24 12:56:36.640: INFO: Pod "pod-secrets-ee6fb7bf-c6e0-4a1c-9818-9dc0f8cb9411": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.1359322s
STEP: Saw pod success
Dec 24 12:56:36.640: INFO: Pod "pod-secrets-ee6fb7bf-c6e0-4a1c-9818-9dc0f8cb9411" satisfied condition "success or failure"
Dec 24 12:56:36.646: INFO: Trying to get logs from node iruya-node pod pod-secrets-ee6fb7bf-c6e0-4a1c-9818-9dc0f8cb9411 container secret-volume-test:
STEP: delete the pod
Dec 24 12:56:36.894: INFO: Waiting for pod pod-secrets-ee6fb7bf-c6e0-4a1c-9818-9dc0f8cb9411 to disappear
Dec 24 12:56:36.912: INFO: Pod pod-secrets-ee6fb7bf-c6e0-4a1c-9818-9dc0f8cb9411 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 12:56:36.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5100" for this suite.
Dec 24 12:56:43.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 12:56:43.245: INFO: namespace secrets-5100 deletion completed in 6.28218238s
• [SLOW TEST:16.917 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 12:56:43.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 24 12:56:43.359: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 12:57:06.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3528" for this suite.
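The submit/watch/remove flow the Pods test drives through the API can be reproduced by hand along these lines; the pod name and label are illustrative, and the backgrounded watch is a loose stand-in for the test's API watch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-demo   # illustrative name
  labels:
    test: submit-remove
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
EOF
# watch for the ADDED/MODIFIED/DELETED events the test asserts on
kubectl get pods -l test=submit-remove --watch &
kubectl delete pod pod-submit-remove-demo --grace-period=30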
Dec 24 12:57:12.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 12:57:12.702: INFO: namespace pods-3528 deletion completed in 6.134504302s • [SLOW TEST:29.456 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 12:57:12.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-aa5d2fe3-4376-4749-a120-08ff3291b91d STEP: Creating a pod to test consume secrets Dec 24 12:57:12.896: INFO: Waiting up to 5m0s for pod "pod-secrets-2a4d0c85-0f73-4554-90bb-2aefafdc7806" in namespace "secrets-5487" to be "success or failure" Dec 24 12:57:12.913: INFO: Pod "pod-secrets-2a4d0c85-0f73-4554-90bb-2aefafdc7806": Phase="Pending", Reason="", readiness=false. Elapsed: 16.670676ms Dec 24 12:57:14.927: INFO: Pod "pod-secrets-2a4d0c85-0f73-4554-90bb-2aefafdc7806": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030050473s Dec 24 12:57:16.946: INFO: Pod "pod-secrets-2a4d0c85-0f73-4554-90bb-2aefafdc7806": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049173785s Dec 24 12:57:18.954: INFO: Pod "pod-secrets-2a4d0c85-0f73-4554-90bb-2aefafdc7806": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057367574s Dec 24 12:57:20.972: INFO: Pod "pod-secrets-2a4d0c85-0f73-4554-90bb-2aefafdc7806": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075878423s Dec 24 12:57:23.194: INFO: Pod "pod-secrets-2a4d0c85-0f73-4554-90bb-2aefafdc7806": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.2970075s STEP: Saw pod success Dec 24 12:57:23.194: INFO: Pod "pod-secrets-2a4d0c85-0f73-4554-90bb-2aefafdc7806" satisfied condition "success or failure" Dec 24 12:57:23.201: INFO: Trying to get logs from node iruya-node pod pod-secrets-2a4d0c85-0f73-4554-90bb-2aefafdc7806 container secret-env-test: STEP: delete the pod Dec 24 12:57:23.288: INFO: Waiting for pod pod-secrets-2a4d0c85-0f73-4554-90bb-2aefafdc7806 to disappear Dec 24 12:57:23.294: INFO: Pod pod-secrets-2a4d0c85-0f73-4554-90bb-2aefafdc7806 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 12:57:23.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5487" for this suite. 
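The secrets-as-env-vars flow above is: create a secret, then a pod that maps one of its keys into an environment variable via secretKeyRef. A minimal sketch (names, key, and image are assumptions; the test generates its own):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.31
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: data-1
EOF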
Dec 24 12:57:29.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 12:57:29.530: INFO: namespace secrets-5487 deletion completed in 6.23107607s • [SLOW TEST:16.828 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 12:57:29.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-015d060b-30f0-4c6e-8b38-45f0eb74e89e STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-015d060b-30f0-4c6e-8b38-45f0eb74e89e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 12:59:01.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2498" for this suite. 
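What the projected-configMap test above verifies, in manifest form: mount a configMap through a projected volume, change the configMap, and wait for the kubelet to sync the new value into the mounted file. A sketch with illustrative names and values:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo   # illustrative name
spec:
  containers:
  - name: watcher
    image: busybox:1.31
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
kubectl patch configmap demo-config -p '{"data":{"data-1":"value-2"}}'

The file under /etc/projected eventually reflects value-2; as the long elapsed time above suggests, the kubelet's sync can take on the order of a minute.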
Dec 24 12:59:25.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 12:59:25.754: INFO: namespace projected-2498 deletion completed in 24.189256871s • [SLOW TEST:116.224 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 12:59:25.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 12:59:33.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-830" for this suite. 
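The kubelet test above schedules a busybox pod with hostAliases and asserts that the entries land in the pod's /etc/hosts. A pod of that shape (name, IP, and hostnames are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo   # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox:1.31
    command: ["cat", "/etc/hosts"]
EOF
kubectl logs hostaliases-demo   # should include a "127.0.0.1 foo.local bar.local" entry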
Dec 24 13:00:20.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:00:20.148: INFO: namespace kubelet-test-830 deletion completed in 46.145263555s • [SLOW TEST:54.393 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:00:20.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-11abcb15-30ac-4e23-a416-e390250cd670 STEP: Creating a pod to test consume secrets Dec 24 13:00:20.246: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-92a00f4d-64fd-47b6-8237-6f9c313c2654" in namespace "projected-8037" to be "success or failure" Dec 24 13:00:20.375: INFO: Pod "pod-projected-secrets-92a00f4d-64fd-47b6-8237-6f9c313c2654": Phase="Pending", Reason="", readiness=false. Elapsed: 128.635642ms Dec 24 13:00:22.385: INFO: Pod "pod-projected-secrets-92a00f4d-64fd-47b6-8237-6f9c313c2654": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13839164s Dec 24 13:00:24.395: INFO: Pod "pod-projected-secrets-92a00f4d-64fd-47b6-8237-6f9c313c2654": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148871952s Dec 24 13:00:26.416: INFO: Pod "pod-projected-secrets-92a00f4d-64fd-47b6-8237-6f9c313c2654": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169265674s Dec 24 13:00:28.430: INFO: Pod "pod-projected-secrets-92a00f4d-64fd-47b6-8237-6f9c313c2654": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183924693s Dec 24 13:00:30.441: INFO: Pod "pod-projected-secrets-92a00f4d-64fd-47b6-8237-6f9c313c2654": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194250134s Dec 24 13:00:32.453: INFO: Pod "pod-projected-secrets-92a00f4d-64fd-47b6-8237-6f9c313c2654": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.206366997s STEP: Saw pod success Dec 24 13:00:32.453: INFO: Pod "pod-projected-secrets-92a00f4d-64fd-47b6-8237-6f9c313c2654" satisfied condition "success or failure" Dec 24 13:00:32.459: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-92a00f4d-64fd-47b6-8237-6f9c313c2654 container secret-volume-test: STEP: delete the pod Dec 24 13:00:32.674: INFO: Waiting for pod pod-projected-secrets-92a00f4d-64fd-47b6-8237-6f9c313c2654 to disappear Dec 24 13:00:32.679: INFO: Pod pod-projected-secrets-92a00f4d-64fd-47b6-8237-6f9c313c2654 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:00:32.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8037" for this suite. Dec 24 13:00:38.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:00:38.838: INFO: namespace projected-8037 deletion completed in 6.15482701s • [SLOW TEST:18.690 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:00:38.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-7rj76 in namespace proxy-4004 I1224 13:00:39.101265 8 runners.go:180] Created replication controller with name: proxy-service-7rj76, namespace: proxy-4004, replica count: 1 I1224 13:00:40.152712 8 runners.go:180] proxy-service-7rj76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:00:41.153165 8 runners.go:180] proxy-service-7rj76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:00:42.153675 8 runners.go:180] proxy-service-7rj76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:00:43.154058 8 runners.go:180] proxy-service-7rj76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:00:44.154344 8 runners.go:180] proxy-service-7rj76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:00:45.155738 8 runners.go:180] proxy-service-7rj76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:00:46.156470 8 runners.go:180] 
proxy-service-7rj76 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:00:47.157259 8 runners.go:180] proxy-service-7rj76 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1224 13:00:48.157849 8 runners.go:180] proxy-service-7rj76 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1224 13:00:49.158225 8 runners.go:180] proxy-service-7rj76 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 24 13:00:49.166: INFO: setup took 10.252183218s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Dec 24 13:00:49.221: INFO: (0) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 54.386969ms) Dec 24 13:00:49.221: INFO: (0) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 54.431811ms) Dec 24 13:00:49.221: INFO: (0) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 54.541887ms) Dec 24 13:00:49.221: INFO: (0) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname2/proxy/: bar (200; 54.402056ms) Dec 24 13:00:49.221: INFO: (0) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname1/proxy/: foo (200; 54.698658ms) Dec 24 13:00:49.221: INFO: (0) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 54.51882ms) Dec 24 13:00:49.223: INFO: (0) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname1/proxy/: foo (200; 57.456978ms) Dec 24 13:00:49.223: INFO: (0) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname2/proxy/: bar (200; 57.369109ms) Dec 24 13:00:49.224: INFO: (0) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:1080/proxy/: test<... (200; 57.703173ms) Dec 24 13:00:49.236: INFO: (0) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 70.246336ms) Dec 24 13:00:49.236: INFO: (0) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... (200; 70.135154ms) Dec 24 13:00:49.262: INFO: (0) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 95.368442ms) Dec 24 13:00:49.268: INFO: (0) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname1/proxy/: tls baz (200; 101.740238ms) Dec 24 13:00:49.268: INFO: (0) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:460/proxy/: tls baz (200; 101.730793ms) Dec 24 13:00:49.269: INFO: (0) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test<... 
(200; 15.800837ms) Dec 24 13:00:49.288: INFO: (1) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 16.355662ms) Dec 24 13:00:49.292: INFO: (1) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 19.823222ms) Dec 24 13:00:49.292: INFO: (1) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:460/proxy/: tls baz (200; 19.85262ms) Dec 24 13:00:49.292: INFO: (1) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 19.960754ms) Dec 24 13:00:49.292: INFO: (1) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 20.695157ms) Dec 24 13:00:49.293: INFO: (1) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 21.340189ms) Dec 24 13:00:49.293: INFO: (1) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname2/proxy/: bar (200; 21.472345ms) Dec 24 13:00:49.298: INFO: (1) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 26.251239ms) Dec 24 13:00:49.301: INFO: (1) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname1/proxy/: foo (200; 29.44641ms) Dec 24 13:00:49.302: INFO: (1) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... (200; 29.655827ms) Dec 24 13:00:49.302: INFO: (1) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname2/proxy/: tls qux (200; 30.710136ms) Dec 24 13:00:49.302: INFO: (1) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname1/proxy/: foo (200; 30.080701ms) Dec 24 13:00:49.302: INFO: (1) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test (200; 11.402141ms) Dec 24 13:00:49.316: INFO: (2) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:1080/proxy/: test<... (200; 11.63695ms) Dec 24 13:00:49.316: INFO: (2) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... (200; 12.154531ms) Dec 24 13:00:49.318: INFO: (2) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 13.674195ms) Dec 24 13:00:49.319: INFO: (2) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 14.744052ms) Dec 24 13:00:49.326: INFO: (2) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 22.266544ms) Dec 24 13:00:49.326: INFO: (2) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname2/proxy/: tls qux (200; 21.98168ms) Dec 24 13:00:49.327: INFO: (2) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname1/proxy/: foo (200; 23.224132ms) Dec 24 13:00:49.327: INFO: (2) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname2/proxy/: bar (200; 23.172032ms) Dec 24 13:00:49.328: INFO: (2) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: ... 
(200; 15.82378ms) Dec 24 13:00:49.349: INFO: (3) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 15.961328ms) Dec 24 13:00:49.349: INFO: (3) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 15.490169ms) Dec 24 13:00:49.351: INFO: (3) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 17.979442ms) Dec 24 13:00:49.351: INFO: (3) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:460/proxy/: tls baz (200; 17.914405ms) Dec 24 13:00:49.351: INFO: (3) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 18.228802ms) Dec 24 13:00:49.351: INFO: (3) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname1/proxy/: foo (200; 18.735196ms) Dec 24 13:00:49.352: INFO: (3) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 18.462477ms) Dec 24 13:00:49.352: INFO: (3) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:1080/proxy/: test<... (200; 18.791955ms) Dec 24 13:00:49.352: INFO: (3) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test<... (200; 8.078145ms) Dec 24 13:00:49.368: INFO: (4) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 10.528196ms) Dec 24 13:00:49.369: INFO: (4) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname2/proxy/: bar (200; 11.622211ms) Dec 24 13:00:49.369: INFO: (4) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname2/proxy/: tls qux (200; 11.617464ms) Dec 24 13:00:49.370: INFO: (4) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname2/proxy/: bar (200; 12.128389ms) Dec 24 13:00:49.370: INFO: (4) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname1/proxy/: foo (200; 12.344502ms) Dec 24 13:00:49.370: INFO: (4) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname1/proxy/: foo (200; 12.995851ms) Dec 24 13:00:49.372: INFO: (4) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:460/proxy/: tls baz (200; 14.068021ms) Dec 24 13:00:49.372: INFO: (4) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname1/proxy/: tls baz (200; 14.158943ms) Dec 24 13:00:49.372: INFO: (4) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 14.864364ms) Dec 24 13:00:49.372: INFO: (4) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 14.958533ms) Dec 24 13:00:49.373: INFO: (4) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... (200; 15.089754ms) Dec 24 13:00:49.373: INFO: (4) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 15.401555ms) Dec 24 13:00:49.373: INFO: (4) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test (200; 15.659256ms) Dec 24 13:00:49.378: INFO: (5) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test<... (200; 5.132545ms) Dec 24 13:00:49.380: INFO: (5) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 5.762276ms) Dec 24 13:00:49.380: INFO: (5) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... 
(200; 5.559044ms) Dec 24 13:00:49.380: INFO: (5) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 6.076472ms) Dec 24 13:00:49.380: INFO: (5) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 6.26487ms) Dec 24 13:00:49.380: INFO: (5) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 6.231053ms) Dec 24 13:00:49.380: INFO: (5) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 6.764341ms) Dec 24 13:00:49.381: INFO: (5) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:460/proxy/: tls baz (200; 6.611361ms) Dec 24 13:00:49.383: INFO: (5) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 9.211593ms) Dec 24 13:00:49.384: INFO: (5) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname1/proxy/: foo (200; 10.487373ms) Dec 24 13:00:49.385: INFO: (5) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname2/proxy/: bar (200; 11.215759ms) Dec 24 13:00:49.385: INFO: (5) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname2/proxy/: tls qux (200; 11.371655ms) Dec 24 13:00:49.385: INFO: (5) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname1/proxy/: tls baz (200; 11.294188ms) Dec 24 13:00:49.385: INFO: (5) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname2/proxy/: bar (200; 11.458168ms) Dec 24 13:00:49.386: INFO: (5) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname1/proxy/: foo (200; 12.115672ms) Dec 24 13:00:49.394: INFO: (6) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 8.405464ms) Dec 24 13:00:49.394: INFO: (6) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 8.45371ms) Dec 24 13:00:49.395: INFO: (6) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:1080/proxy/: test<... (200; 9.029658ms) Dec 24 13:00:49.395: INFO: (6) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: ... 
(200; 9.364916ms) Dec 24 13:00:49.396: INFO: (6) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 9.822334ms) Dec 24 13:00:49.397: INFO: (6) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 10.730754ms) Dec 24 13:00:49.397: INFO: (6) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 10.790439ms) Dec 24 13:00:49.397: INFO: (6) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 11.360219ms) Dec 24 13:00:49.397: INFO: (6) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:460/proxy/: tls baz (200; 11.441509ms) Dec 24 13:00:49.398: INFO: (6) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname1/proxy/: tls baz (200; 12.157957ms) Dec 24 13:00:49.398: INFO: (6) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname1/proxy/: foo (200; 12.404861ms) Dec 24 13:00:49.398: INFO: (6) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname2/proxy/: bar (200; 12.427549ms) Dec 24 13:00:49.398: INFO: (6) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname2/proxy/: bar (200; 12.441029ms) Dec 24 13:00:49.398: INFO: (6) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname1/proxy/: foo (200; 12.535218ms) Dec 24 13:00:49.399: INFO: (6) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname2/proxy/: tls qux (200; 12.991044ms) Dec 24 13:00:49.415: INFO: (7) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: ... (200; 20.428963ms) Dec 24 13:00:49.419: INFO: (7) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 20.531387ms) Dec 24 13:00:49.419: INFO: (7) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname2/proxy/: bar (200; 20.688474ms) Dec 24 13:00:49.420: INFO: (7) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:1080/proxy/: test<... 
(200; 20.645488ms) Dec 24 13:00:49.420: INFO: (7) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 20.695623ms) Dec 24 13:00:49.421: INFO: (7) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 21.737978ms) Dec 24 13:00:49.421: INFO: (7) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname1/proxy/: foo (200; 22.445759ms) Dec 24 13:00:49.422: INFO: (7) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 23.091417ms) Dec 24 13:00:49.422: INFO: (7) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname1/proxy/: tls baz (200; 23.205921ms) Dec 24 13:00:49.422: INFO: (7) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:460/proxy/: tls baz (200; 23.356573ms) Dec 24 13:00:49.422: INFO: (7) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname2/proxy/: tls qux (200; 23.366036ms) Dec 24 13:00:49.423: INFO: (7) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 23.847958ms) Dec 24 13:00:49.423: INFO: (7) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 24.177572ms) Dec 24 13:00:49.434: INFO: (8) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 10.806755ms) Dec 24 13:00:49.434: INFO: (8) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 10.341382ms) Dec 24 13:00:49.434: INFO: (8) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 10.920737ms) Dec 24 13:00:49.434: INFO: (8) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 10.321288ms) Dec 24 13:00:49.434: INFO: (8) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... (200; 10.37896ms) Dec 24 13:00:49.435: INFO: (8) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:1080/proxy/: test<... (200; 11.005586ms) Dec 24 13:00:49.435: INFO: (8) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:460/proxy/: tls baz (200; 11.000191ms) Dec 24 13:00:49.435: INFO: (8) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 11.198093ms) Dec 24 13:00:49.435: INFO: (8) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test (200; 9.417703ms) Dec 24 13:00:49.450: INFO: (9) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:1080/proxy/: test<... (200; 10.028786ms) Dec 24 13:00:49.452: INFO: (9) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname1/proxy/: foo (200; 12.597027ms) Dec 24 13:00:49.452: INFO: (9) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 12.514334ms) Dec 24 13:00:49.452: INFO: (9) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 12.833936ms) Dec 24 13:00:49.452: INFO: (9) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 12.84161ms) Dec 24 13:00:49.453: INFO: (9) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname2/proxy/: tls qux (200; 13.264171ms) Dec 24 13:00:49.453: INFO: (9) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 13.513717ms) Dec 24 13:00:49.453: INFO: (9) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... 
(200; 13.856645ms) Dec 24 13:00:49.453: INFO: (9) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname1/proxy/: foo (200; 14.044441ms) Dec 24 13:00:49.454: INFO: (9) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 14.078157ms) Dec 24 13:00:49.454: INFO: (9) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: ... (200; 4.087787ms) Dec 24 13:00:49.466: INFO: (10) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test<... (200; 10.920967ms) Dec 24 13:00:49.467: INFO: (10) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 10.976895ms) Dec 24 13:00:49.467: INFO: (10) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 10.945297ms) Dec 24 13:00:49.467: INFO: (10) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 11.000397ms) Dec 24 13:00:49.467: INFO: (10) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 11.145248ms) Dec 24 13:00:49.468: INFO: (10) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname1/proxy/: tls baz (200; 12.005925ms) Dec 24 13:00:49.468: INFO: (10) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname1/proxy/: foo (200; 12.219714ms) Dec 24 13:00:49.468: INFO: (10) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname2/proxy/: bar (200; 12.212797ms) Dec 24 13:00:49.470: INFO: (10) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname1/proxy/: foo (200; 13.6415ms) Dec 24 13:00:49.470: INFO: (10) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname2/proxy/: bar (200; 14.344849ms) Dec 24 13:00:49.470: INFO: (10) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname2/proxy/: tls qux (200; 14.385959ms) Dec 24 13:00:49.474: INFO: (11) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 3.577786ms) Dec 24 13:00:49.474: INFO: (11) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 3.603843ms) Dec 24 13:00:49.478: INFO: (11) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 7.212146ms) Dec 24 13:00:49.480: INFO: (11) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 8.959951ms) Dec 24 13:00:49.480: INFO: (11) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... (200; 9.006954ms) Dec 24 13:00:49.480: INFO: (11) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test<... 
(200; 9.337278ms) Dec 24 13:00:49.480: INFO: (11) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 9.841818ms) Dec 24 13:00:49.481: INFO: (11) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname2/proxy/: bar (200; 10.731006ms) Dec 24 13:00:49.482: INFO: (11) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname1/proxy/: foo (200; 11.034293ms) Dec 24 13:00:49.484: INFO: (11) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname1/proxy/: foo (200; 13.127546ms) Dec 24 13:00:49.484: INFO: (11) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname2/proxy/: tls qux (200; 13.131604ms) Dec 24 13:00:49.484: INFO: (11) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname1/proxy/: tls baz (200; 13.16307ms) Dec 24 13:00:49.484: INFO: (11) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname2/proxy/: bar (200; 13.141782ms) Dec 24 13:00:49.495: INFO: (12) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 10.653119ms) Dec 24 13:00:49.495: INFO: (12) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 10.571993ms) Dec 24 13:00:49.495: INFO: (12) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 10.469339ms) Dec 24 13:00:49.495: INFO: (12) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:1080/proxy/: test<... (200; 10.789272ms) Dec 24 13:00:49.495: INFO: (12) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 10.706916ms) Dec 24 13:00:49.495: INFO: (12) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 10.780489ms) Dec 24 13:00:49.496: INFO: (12) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... (200; 11.925925ms) Dec 24 13:00:49.496: INFO: (12) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:460/proxy/: tls baz (200; 11.983161ms) Dec 24 13:00:49.496: INFO: (12) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test<... (200; 7.052421ms) Dec 24 13:00:49.508: INFO: (13) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 7.198607ms) Dec 24 13:00:49.509: INFO: (13) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 8.477401ms) Dec 24 13:00:49.510: INFO: (13) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... (200; 8.667961ms) Dec 24 13:00:49.511: INFO: (13) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname2/proxy/: bar (200; 9.926841ms) Dec 24 13:00:49.511: INFO: (13) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 10.072589ms) Dec 24 13:00:49.511: INFO: (13) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname2/proxy/: tls qux (200; 10.31181ms) Dec 24 13:00:49.511: INFO: (13) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test<... 
(200; 8.262195ms) Dec 24 13:00:49.522: INFO: (14) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 8.395709ms) Dec 24 13:00:49.523: INFO: (14) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 9.586986ms) Dec 24 13:00:49.524: INFO: (14) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname1/proxy/: tls baz (200; 10.975464ms) Dec 24 13:00:49.524: INFO: (14) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname2/proxy/: bar (200; 11.103662ms) Dec 24 13:00:49.525: INFO: (14) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... (200; 11.728665ms) Dec 24 13:00:49.525: INFO: (14) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 11.86677ms) Dec 24 13:00:49.525: INFO: (14) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 12.04411ms) Dec 24 13:00:49.525: INFO: (14) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test<... (200; 13.518445ms) Dec 24 13:00:49.541: INFO: (15) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 13.452548ms) Dec 24 13:00:49.541: INFO: (15) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 13.451743ms) Dec 24 13:00:49.541: INFO: (15) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... (200; 13.648539ms) Dec 24 13:00:49.541: INFO: (15) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test<... (200; 9.574158ms) Dec 24 13:00:49.559: INFO: (16) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 13.096941ms) Dec 24 13:00:49.559: INFO: (16) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... (200; 12.816645ms) Dec 24 13:00:49.559: INFO: (16) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname2/proxy/: bar (200; 13.050289ms) Dec 24 13:00:49.559: INFO: (16) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 12.895327ms) Dec 24 13:00:49.559: INFO: (16) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:460/proxy/: tls baz (200; 13.013325ms) Dec 24 13:00:49.559: INFO: (16) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname2/proxy/: bar (200; 13.086491ms) Dec 24 13:00:49.559: INFO: (16) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname1/proxy/: foo (200; 13.026662ms) Dec 24 13:00:49.559: INFO: (16) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname2/proxy/: tls qux (200; 13.074606ms) Dec 24 13:00:49.559: INFO: (16) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname1/proxy/: tls baz (200; 13.008571ms) Dec 24 13:00:49.559: INFO: (16) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test (200; 13.214581ms) Dec 24 13:00:49.574: INFO: (17) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 14.238936ms) Dec 24 13:00:49.574: INFO: (17) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:1080/proxy/: test<... 
(200; 14.387648ms) Dec 24 13:00:49.576: INFO: (17) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 15.871132ms) Dec 24 13:00:49.576: INFO: (17) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 15.998798ms) Dec 24 13:00:49.576: INFO: (17) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 16.009819ms) Dec 24 13:00:49.576: INFO: (17) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 15.909198ms) Dec 24 13:00:49.576: INFO: (17) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:460/proxy/: tls baz (200; 15.966466ms) Dec 24 13:00:49.576: INFO: (17) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 16.050187ms) Dec 24 13:00:49.576: INFO: (17) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: ... (200; 17.113166ms) Dec 24 13:00:49.578: INFO: (17) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname1/proxy/: foo (200; 17.830562ms) Dec 24 13:00:49.578: INFO: (17) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname1/proxy/: foo (200; 18.137992ms) Dec 24 13:00:49.578: INFO: (17) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname1/proxy/: tls baz (200; 18.130287ms) Dec 24 13:00:49.578: INFO: (17) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname2/proxy/: bar (200; 18.185733ms) Dec 24 13:00:49.579: INFO: (17) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname2/proxy/: tls qux (200; 19.29267ms) Dec 24 13:00:49.579: INFO: (17) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname2/proxy/: bar (200; 19.470825ms) Dec 24 13:00:49.594: INFO: (18) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 14.029754ms) Dec 24 13:00:49.595: INFO: (18) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname1/proxy/: tls baz (200; 14.99055ms) Dec 24 13:00:49.595: INFO: (18) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test (200; 16.561993ms) Dec 24 13:00:49.596: INFO: (18) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname1/proxy/: foo (200; 16.445822ms) Dec 24 13:00:49.596: INFO: (18) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 17.211052ms) Dec 24 13:00:49.596: INFO: (18) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... (200; 16.86127ms) Dec 24 13:00:49.596: INFO: (18) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname2/proxy/: bar (200; 17.040528ms) Dec 24 13:00:49.599: INFO: (18) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname2/proxy/: tls qux (200; 20.026581ms) Dec 24 13:00:49.599: INFO: (18) /api/v1/namespaces/proxy-4004/services/http:proxy-service-7rj76:portname2/proxy/: bar (200; 19.925046ms) Dec 24 13:00:49.599: INFO: (18) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 20.261397ms) Dec 24 13:00:49.600: INFO: (18) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:462/proxy/: tls qux (200; 20.35927ms) Dec 24 13:00:49.600: INFO: (18) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 20.455905ms) Dec 24 13:00:49.600: INFO: (18) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:1080/proxy/: test<... 
(200; 20.264804ms) Dec 24 13:00:49.601: INFO: (18) /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname1/proxy/: foo (200; 21.316152ms) Dec 24 13:00:49.603: INFO: (18) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:460/proxy/: tls baz (200; 23.628486ms) Dec 24 13:00:49.614: INFO: (19) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 10.501995ms) Dec 24 13:00:49.614: INFO: (19) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:1080/proxy/: ... (200; 11.067586ms) Dec 24 13:00:49.614: INFO: (19) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:162/proxy/: bar (200; 10.958658ms) Dec 24 13:00:49.614: INFO: (19) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 11.08377ms) Dec 24 13:00:49.619: INFO: (19) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/: test<... (200; 17.900522ms) Dec 24 13:00:49.621: INFO: (19) /api/v1/namespaces/proxy-4004/services/https:proxy-service-7rj76:tlsportname1/proxy/: tls baz (200; 17.908434ms) Dec 24 13:00:49.621: INFO: (19) /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:460/proxy/: tls baz (200; 17.916908ms) Dec 24 13:00:49.621: INFO: (19) /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8/proxy/: test (200; 17.955382ms) Dec 24 13:00:49.621: INFO: (19) /api/v1/namespaces/proxy-4004/pods/http:proxy-service-7rj76-zl7h8:160/proxy/: foo (200; 18.159753ms) STEP: deleting ReplicationController proxy-service-7rj76 in namespace proxy-4004, will wait for the garbage collector to delete the pods Dec 24 13:00:49.682: INFO: Deleting ReplicationController proxy-service-7rj76 took: 5.844119ms Dec 24 13:00:49.983: INFO: Terminating ReplicationController proxy-service-7rj76 pods took: 300.943272ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:00:55.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4004" for this suite. 
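Each attempt above is a GET against the apiserver's proxy subresource, addressing either the pod directly (optionally qualified by scheme and port) or the service by named port. The same paths can be hit with kubectl; these use the names from the log and only resolve while that namespace and pod exist:

kubectl get --raw /api/v1/namespaces/proxy-4004/pods/proxy-service-7rj76-zl7h8:160/proxy/
kubectl get --raw /api/v1/namespaces/proxy-4004/pods/https:proxy-service-7rj76-zl7h8:443/proxy/
kubectl get --raw /api/v1/namespaces/proxy-4004/services/proxy-service-7rj76:portname1/proxy/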
Dec 24 13:01:03.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:01:03.589: INFO: namespace proxy-4004 deletion completed in 8.201069975s • [SLOW TEST:24.750 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:01:03.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:01:03.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7787" for this suite. Dec 24 13:01:09.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:01:09.910: INFO: namespace services-7787 deletion completed in 6.213257897s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.321 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:01:09.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:01:20.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1937" for this suite. 
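The wrapper-volumes test above (note the secret and configmap cleanup steps) mounts a secret volume and a configMap volume side by side in one pod, both of which the kubelet implements on top of an emptyDir wrapper, and checks that they do not conflict. A sketch under assumed names:

kubectl create secret generic wrapper-secret --from-literal=key=value
kubectl create configmap wrapper-config --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.31
    command: ["sh", "-c", "ls /etc/secret-vol /etc/config-vol"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-vol
    - name: config-vol
      mountPath: /etc/config-vol
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapper-secret
  - name: config-vol
    configMap:
      name: wrapper-config
EOF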
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:01:09.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:01:20.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1937" for this suite.
Dec 24 13:01:26.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:01:26.530: INFO: namespace emptydir-wrapper-1937 deletion completed in 6.205399837s
• [SLOW TEST:16.619 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
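=== Editor's note ===
Per the STEP lines above, this spec creates a secret, a configmap, and a pod that mounts both, then verifies the two atomically-written volumes coexist without conflict. A hypothetical pod in that spirit (names and paths are illustrative, not the test's own):

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // wrapperPod mounts a Secret volume and a ConfigMap volume side by side.
    var wrapperPod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "wrapper-demo"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{
                {Name: "secret-vol", VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: "demo-secret"},
                }},
                {Name: "configmap-vol", VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "demo-configmap"},
                    },
                }},
            },
            Containers: []corev1.Container{{
                Name:    "checker",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls /etc/secret-vol /etc/configmap-vol"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "secret-vol", MountPath: "/etc/secret-vol"},
                    {Name: "configmap-vol", MountPath: "/etc/configmap-vol"},
                },
            }},
        },
    }
=== End editor's note ===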
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:01:26.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-dgnr
STEP: Creating a pod to test atomic-volume-subpath
Dec 24 13:01:27.845: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dgnr" in namespace "subpath-3839" to be "success or failure"
Dec 24 13:01:27.875: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Pending", Reason="", readiness=false. Elapsed: 30.577271ms
Dec 24 13:01:29.889: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043855659s
Dec 24 13:01:31.902: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057050908s
Dec 24 13:01:33.913: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068233592s
Dec 24 13:01:35.922: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077696007s
Dec 24 13:01:37.935: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Running", Reason="", readiness=true. Elapsed: 10.089807292s
Dec 24 13:01:39.941: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Running", Reason="", readiness=true. Elapsed: 12.096446972s
Dec 24 13:01:41.968: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Running", Reason="", readiness=true. Elapsed: 14.123619393s
Dec 24 13:01:44.049: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Running", Reason="", readiness=true. Elapsed: 16.20373169s
Dec 24 13:01:46.055: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Running", Reason="", readiness=true. Elapsed: 18.210149337s
Dec 24 13:01:48.065: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Running", Reason="", readiness=true. Elapsed: 20.220054131s
Dec 24 13:01:50.078: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Running", Reason="", readiness=true. Elapsed: 22.232731499s
Dec 24 13:01:52.153: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Running", Reason="", readiness=true. Elapsed: 24.307956749s
Dec 24 13:01:54.159: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Running", Reason="", readiness=true. Elapsed: 26.314657148s
Dec 24 13:01:56.168: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Running", Reason="", readiness=true. Elapsed: 28.323298009s
Dec 24 13:01:58.175: INFO: Pod "pod-subpath-test-configmap-dgnr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.330326442s
STEP: Saw pod success
Dec 24 13:01:58.175: INFO: Pod "pod-subpath-test-configmap-dgnr" satisfied condition "success or failure"
Dec 24 13:01:58.178: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-dgnr container test-container-subpath-configmap-dgnr:
STEP: delete the pod
Dec 24 13:01:58.287: INFO: Waiting for pod pod-subpath-test-configmap-dgnr to disappear
Dec 24 13:01:58.467: INFO: Pod pod-subpath-test-configmap-dgnr no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dgnr
Dec 24 13:01:58.468: INFO: Deleting pod "pod-subpath-test-configmap-dgnr" in namespace "subpath-3839"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:01:58.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3839" for this suite.
Dec 24 13:02:05.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:02:05.427: INFO: namespace subpath-3839 deletion completed in 6.940366252s
• [SLOW TEST:38.897 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
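=== Editor's note ===
The mechanism this spec exercises is the subPath field on a volume mount: instead of mounting a whole ConfigMap volume, a single key is mounted at a file path. A hypothetical container/volume pair showing the field (names are illustrative):

    package example

    import (
        corev1 "k8s.io/api/core/v1"
    )

    var (
        vol = corev1.Volume{
            Name: "config",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
                },
            },
        }
        ctr = corev1.Container{
            Name:  "subpath-demo",
            Image: "busybox",
            VolumeMounts: []corev1.VolumeMount{{
                Name:      "config",
                MountPath: "/test-volume/data.txt",
                SubPath:   "data.txt", // mounts only this key, not the whole volume
            }},
        }
    )
=== End editor's note ===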
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:02:05.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 13:02:05.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6177'
Dec 24 13:02:07.572: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 24 13:02:07.573: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 24 13:02:07.645: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-qjpds]
Dec 24 13:02:07.646: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-qjpds" in namespace "kubectl-6177" to be "running and ready"
Dec 24 13:02:07.664: INFO: Pod "e2e-test-nginx-rc-qjpds": Phase="Pending", Reason="", readiness=false. Elapsed: 18.948182ms
Dec 24 13:02:09.675: INFO: Pod "e2e-test-nginx-rc-qjpds": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029500339s
Dec 24 13:02:11.685: INFO: Pod "e2e-test-nginx-rc-qjpds": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039696797s
Dec 24 13:02:13.692: INFO: Pod "e2e-test-nginx-rc-qjpds": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046106793s
Dec 24 13:02:15.703: INFO: Pod "e2e-test-nginx-rc-qjpds": Phase="Running", Reason="", readiness=true. Elapsed: 8.057141142s
Dec 24 13:02:15.703: INFO: Pod "e2e-test-nginx-rc-qjpds" satisfied condition "running and ready"
Dec 24 13:02:15.703: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-qjpds]
Dec 24 13:02:15.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6177'
Dec 24 13:02:15.907: INFO: stderr: ""
Dec 24 13:02:15.907: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Dec 24 13:02:15.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6177'
Dec 24 13:02:16.054: INFO: stderr: ""
Dec 24 13:02:16.054: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:02:16.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6177" for this suite.
Dec 24 13:02:38.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:02:38.170: INFO: namespace kubectl-6177 deletion completed in 22.112124587s
• [SLOW TEST:32.742 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
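=== Editor's note ===
As the stderr above records, the run/v1 generator is deprecated; its stdout shows that what it actually does is create a ReplicationController. A rough client-go equivalent of that action (a sketch under the assumption of a recent client-go; the "run" label key is illustrative, the image and names come from the log):

    package example

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func createRC(cs kubernetes.Interface) error {
        replicas := int32(1)
        rc := &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc"},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: map[string]string{"run": "e2e-test-nginx-rc"},
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{
                        Labels: map[string]string{"run": "e2e-test-nginx-rc"},
                    },
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "e2e-test-nginx-rc",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }}},
                },
            },
        }
        _, err := cs.CoreV1().ReplicationControllers("kubectl-6177").Create(context.TODO(), rc, metav1.CreateOptions{})
        return err
    }
=== End editor's note ===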
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:02:38.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 24 13:02:38.250: INFO: Waiting up to 5m0s for pod "pod-5cc0213b-b89a-413b-9fa2-70c9aa70f122" in namespace "emptydir-1177" to be "success or failure"
Dec 24 13:02:38.254: INFO: Pod "pod-5cc0213b-b89a-413b-9fa2-70c9aa70f122": Phase="Pending", Reason="", readiness=false. Elapsed: 3.501052ms
Dec 24 13:02:40.263: INFO: Pod "pod-5cc0213b-b89a-413b-9fa2-70c9aa70f122": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013105844s
Dec 24 13:02:42.276: INFO: Pod "pod-5cc0213b-b89a-413b-9fa2-70c9aa70f122": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026303925s
Dec 24 13:02:44.290: INFO: Pod "pod-5cc0213b-b89a-413b-9fa2-70c9aa70f122": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039573154s
Dec 24 13:02:46.301: INFO: Pod "pod-5cc0213b-b89a-413b-9fa2-70c9aa70f122": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050541896s
Dec 24 13:02:48.312: INFO: Pod "pod-5cc0213b-b89a-413b-9fa2-70c9aa70f122": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061576967s
STEP: Saw pod success
Dec 24 13:02:48.312: INFO: Pod "pod-5cc0213b-b89a-413b-9fa2-70c9aa70f122" satisfied condition "success or failure"
Dec 24 13:02:48.317: INFO: Trying to get logs from node iruya-node pod pod-5cc0213b-b89a-413b-9fa2-70c9aa70f122 container test-container:
STEP: delete the pod
Dec 24 13:02:48.391: INFO: Waiting for pod pod-5cc0213b-b89a-413b-9fa2-70c9aa70f122 to disappear
Dec 24 13:02:48.416: INFO: Pod pod-5cc0213b-b89a-413b-9fa2-70c9aa70f122 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:02:48.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1177" for this suite.
Dec 24 13:02:54.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:02:54.604: INFO: namespace emptydir-1177 deletion completed in 6.17295343s
• [SLOW TEST:16.434 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:02:54.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8858.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8858.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8858.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8858.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8858.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8858.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 24 13:03:06.939: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8858/dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67: the server could not find the requested resource (get pods dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67)
Dec 24 13:03:06.945: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8858/dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67: the server could not find the requested resource (get pods dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67)
Dec 24 13:03:06.951: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-8858.svc.cluster.local from pod dns-8858/dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67: the server could not find the requested resource (get pods dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67)
Dec 24 13:03:06.956: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-8858/dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67: the server could not find the requested resource (get pods dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67)
Dec 24 13:03:06.962: INFO: Unable to read jessie_udp@PodARecord from pod dns-8858/dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67: the server could not find the requested resource (get pods dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67)
Dec 24 13:03:06.969: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8858/dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67: the server could not find the requested resource (get pods dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67)
Dec 24 13:03:06.969: INFO: Lookups using dns-8858/dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-8858.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Dec 24 13:03:12.107: INFO: DNS probes using dns-8858/dns-test-1c9bbb65-16ed-4cc7-b260-a25301a1ba67 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:03:12.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8858" for this suite.
Dec 24 13:03:20.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:03:20.530: INFO: namespace dns-8858 deletion completed in 8.257498165s
• [SLOW TEST:25.924 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
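=== Editor's note ===
The probe scripts in the DNS spec above build a pod "A record" name with awk: dots in the pod IP become dashes, followed by the namespace and the "pod.cluster.local" suffix. The same derivation in Go, with the namespace from the log and an example IP (the IP itself is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // podARecord mirrors the awk expression in the probe script.
    func podARecord(podIP, namespace string) string {
        return fmt.Sprintf("%s.%s.pod.cluster.local",
            strings.ReplaceAll(podIP, ".", "-"), namespace)
    }

    func main() {
        fmt.Println(podARecord("10.32.0.4", "dns-8858"))
        // Output: 10-32-0-4.dns-8858.pod.cluster.local
    }
=== End editor's note ===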
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:03:20.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-9235/secret-test-5d2cc539-e8da-408b-ab06-5fefcc0895a0
STEP: Creating a pod to test consume secrets
Dec 24 13:03:20.669: INFO: Waiting up to 5m0s for pod "pod-configmaps-5cf33ad8-6e14-4628-9e94-6c86d9f93cbe" in namespace "secrets-9235" to be "success or failure"
Dec 24 13:03:20.683: INFO: Pod "pod-configmaps-5cf33ad8-6e14-4628-9e94-6c86d9f93cbe": Phase="Pending", Reason="", readiness=false. Elapsed: 14.081317ms
Dec 24 13:03:22.699: INFO: Pod "pod-configmaps-5cf33ad8-6e14-4628-9e94-6c86d9f93cbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029564983s
Dec 24 13:03:24.704: INFO: Pod "pod-configmaps-5cf33ad8-6e14-4628-9e94-6c86d9f93cbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035261895s
Dec 24 13:03:26.711: INFO: Pod "pod-configmaps-5cf33ad8-6e14-4628-9e94-6c86d9f93cbe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042164147s
Dec 24 13:03:28.730: INFO: Pod "pod-configmaps-5cf33ad8-6e14-4628-9e94-6c86d9f93cbe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060601861s
Dec 24 13:03:30.738: INFO: Pod "pod-configmaps-5cf33ad8-6e14-4628-9e94-6c86d9f93cbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068847015s
STEP: Saw pod success
Dec 24 13:03:30.738: INFO: Pod "pod-configmaps-5cf33ad8-6e14-4628-9e94-6c86d9f93cbe" satisfied condition "success or failure"
Dec 24 13:03:30.742: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5cf33ad8-6e14-4628-9e94-6c86d9f93cbe container env-test:
STEP: delete the pod
Dec 24 13:03:30.849: INFO: Waiting for pod pod-configmaps-5cf33ad8-6e14-4628-9e94-6c86d9f93cbe to disappear
Dec 24 13:03:30.861: INFO: Pod pod-configmaps-5cf33ad8-6e14-4628-9e94-6c86d9f93cbe no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:03:30.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9235" for this suite.
Dec 24 13:03:36.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:03:37.060: INFO: namespace secrets-9235 deletion completed in 6.181782863s
• [SLOW TEST:16.530 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
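=== Editor's note ===
"Consumable via the environment" means an env var whose value is read from a key of a Secret. A hypothetical container showing the mechanism; the secret name is taken from the log, while the key name "data-1" and the env var name are assumptions for illustration:

    package example

    import (
        corev1 "k8s.io/api/core/v1"
    )

    var envTest = corev1.Container{
        Name:  "env-test",
        Image: "busybox",
        Env: []corev1.EnvVar{{
            Name: "SECRET_VALUE",
            ValueFrom: &corev1.EnvVarSource{
                SecretKeyRef: &corev1.SecretKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "secret-test-5d2cc539-e8da-408b-ab06-5fefcc0895a0",
                    },
                    Key: "data-1", // hypothetical key name
                },
            },
        }},
    }
=== End editor's note ===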
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:03:37.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 13:03:37.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 24 13:03:37.326: INFO: stderr: ""
Dec 24 13:03:37.326: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:03:37.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3915" for this suite.
Dec 24 13:03:43.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:03:43.684: INFO: namespace kubectl-3915 deletion completed in 6.343299413s
• [SLOW TEST:6.624 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:03:43.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 24 13:03:54.911: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:03:55.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-859" for this suite.
Dec 24 13:04:20.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:04:20.284: INFO: namespace replicaset-859 deletion completed in 24.335174958s
• [SLOW TEST:36.599 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
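=== Editor's note ===
Adoption and release in the ReplicaSet spec above are expressed through ownerReferences: a matching orphan pod gains an ownerReference marking the ReplicaSet as its controller, and relabeling the pod out of the selector makes the controller remove that reference again. A minimal sketch of the adoption check (assuming a recent client-go; not the test's own code):

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isAdopted reports whether a ReplicaSet currently controls the pod.
    func isAdopted(cs kubernetes.Interface, ns, podName string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, ref := range pod.OwnerReferences {
            if ref.Kind == "ReplicaSet" && ref.Controller != nil && *ref.Controller {
                return true, nil
            }
        }
        return false, nil
    }
=== End editor's note ===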
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:04:20.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 13:04:20.427: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 24 13:04:30.499: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 24 13:04:32.512: INFO: Creating deployment "test-rollover-deployment"
Dec 24 13:04:32.565: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 24 13:04:34.646: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 24 13:04:34.652: INFO: Ensure that both replica sets have 1 created replica
Dec 24 13:04:34.658: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 24 13:04:34.670: INFO: Updating deployment test-rollover-deployment
Dec 24 13:04:34.670: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 24 13:04:37.531: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 24 13:04:37.546: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 24 13:04:37.555: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 13:04:37.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789475, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 13:04:39.573: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 13:04:39.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789475, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 13:04:42.051: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 13:04:42.052: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789475, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 13:04:44.101: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 13:04:44.101: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789475, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 13:04:45.571: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 13:04:45.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789475, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 13:04:47.568: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 13:04:47.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789486, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 13:04:49.569: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 13:04:49.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789486, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 13:04:51.570: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 13:04:51.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789486, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 13:04:53.568: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 13:04:53.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789486, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 13:04:55.569: INFO: all replica sets need to contain the pod-template-hash label
Dec 24 13:04:55.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789486, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712789472, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 13:04:57.583: INFO:
Dec 24 13:04:57.583: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 24 13:04:57.602: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5022,SelfLink:/apis/apps/v1/namespaces/deployment-5022/deployments/test-rollover-deployment,UID:928fbe19-f16a-4525-8b3b-bc26decc823a,ResourceVersion:17887717,Generation:2,CreationTimestamp:2019-12-24 13:04:32 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-24 13:04:32 +0000 UTC 2019-12-24 13:04:32 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-24 13:04:56 +0000 UTC 2019-12-24 13:04:32 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Dec 24 13:04:57.608: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5022,SelfLink:/apis/apps/v1/namespaces/deployment-5022/replicasets/test-rollover-deployment-854595fc44,UID:42660bd6-0111-442d-ba3a-aa9867ef3632,ResourceVersion:17887708,Generation:2,CreationTimestamp:2019-12-24 13:04:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 928fbe19-f16a-4525-8b3b-bc26decc823a 0xc0025577b7 0xc0025577b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 24 13:04:57.608: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 24 13:04:57.608: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5022,SelfLink:/apis/apps/v1/namespaces/deployment-5022/replicasets/test-rollover-controller,UID:77add35c-cc5d-49a7-9cba-10c9a5f02a40,ResourceVersion:17887716,Generation:2,CreationTimestamp:2019-12-24 13:04:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 928fbe19-f16a-4525-8b3b-bc26decc823a 0xc0025576d7 0xc0025576d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 24 13:04:57.608: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5022,SelfLink:/apis/apps/v1/namespaces/deployment-5022/replicasets/test-rollover-deployment-9b8b997cf,UID:2952fff6-d9fe-4fc9-9c2a-a58678fbeaef,ResourceVersion:17887669,Generation:2,CreationTimestamp:2019-12-24 13:04:32 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 928fbe19-f16a-4525-8b3b-bc26decc823a 0xc002557880 0xc002557881}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 24 13:04:57.613: INFO: Pod "test-rollover-deployment-854595fc44-knf2q" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-knf2q,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5022,SelfLink:/api/v1/namespaces/deployment-5022/pods/test-rollover-deployment-854595fc44-knf2q,UID:f6c3585d-86bc-4114-bf29-cda88bae0027,ResourceVersion:17887692,Generation:0,CreationTimestamp:2019-12-24 13:04:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 42660bd6-0111-442d-ba3a-aa9867ef3632 0xc0019764c7 0xc0019764c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-79kbd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79kbd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-79kbd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001976530} {node.kubernetes.io/unreachable Exists NoExecute 0xc001976550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:04:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:04:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:04:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:04:34 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-24 13:04:35 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-24 13:04:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://2c43ff3f172de4b0deb3ec106ee4837d35247d775e8582cb22755fae4ec54204}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:04:57.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5022" for this suite.
Dec 24 13:05:07.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:05:07.737: INFO: namespace deployment-5022 deletion completed in 10.118224149s
• [SLOW TEST:47.452 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
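=== Editor's note ===
The Deployment dump in the rollover spec above shows the strategy that makes a one-at-a-time rollover possible: MaxUnavailable:0 (never drop below the desired count), MaxSurge:1 (at most one extra pod), and MinReadySeconds:10 (a new pod only counts as available after being ready for 10s). Expressed as a sketch in Go (field values copied from the dump; this is not the test's own code):

    package example

    import (
        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func intOrStr(i int) *intstr.IntOrString {
        v := intstr.FromInt(i)
        return &v
    }

    // rolloverStrategy matches the dumped Deployment; MinReadySeconds:10
    // lives on the DeploymentSpec itself, alongside this strategy.
    var rolloverStrategy = appsv1.DeploymentStrategy{
        Type: appsv1.RollingUpdateDeploymentStrategyType,
        RollingUpdate: &appsv1.RollingUpdateDeployment{
            MaxUnavailable: intOrStr(0),
            MaxSurge:       intOrStr(1),
        },
    }

In the proportional-scaling spec that follows, the strategy is MaxSurge:3/MaxUnavailable:2, so after the scale-up from 10 to 30 replicas mid-rollout the controller may run up to 30 + 3 = 33 pods in total; the dumps below indeed show the old ReplicaSet scaled to .spec.replicas = 20 and the new one to 13 (20 + 13 = 33), split roughly in proportion to their existing sizes.
=== End editor's note ===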
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:05:07.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 13:05:07.831: INFO: Creating deployment "nginx-deployment"
Dec 24 13:05:07.841: INFO: Waiting for observed generation 1
Dec 24 13:05:11.546: INFO: Waiting for all required pods to come up
Dec 24 13:05:11.638: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 24 13:05:37.935: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 24 13:05:37.940: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 24 13:05:37.947: INFO: Updating deployment nginx-deployment
Dec 24 13:05:37.947: INFO: Waiting for observed generation 2
Dec 24 13:05:40.711: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 24 13:05:41.963: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 24 13:05:42.175: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 24 13:05:45.764: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 24 13:05:45.764: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 24 13:05:45.771: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 24 13:05:46.314: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 24 13:05:46.314: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 24 13:05:46.332: INFO: Updating deployment nginx-deployment
Dec 24 13:05:46.332: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 24 13:05:46.631: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 24 13:05:52.005: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 24 13:05:54.753: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4915,SelfLink:/apis/apps/v1/namespaces/deployment-4915/deployments/nginx-deployment,UID:e1fc1d67-a8d6-4141-9576-015fa36b36e5,ResourceVersion:17888050,Generation:3,CreationTimestamp:2019-12-24 13:05:07 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2019-12-24 13:05:46 +0000 UTC 2019-12-24 13:05:46 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-24 13:05:50 +0000 UTC 2019-12-24 13:05:07 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}
Dec 24 13:05:56.694: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4915,SelfLink:/apis/apps/v1/namespaces/deployment-4915/replicasets/nginx-deployment-55fb7cb77f,UID:f5afdac1-626d-43ea-9561-0ce926a6145f,ResourceVersion:17888044,Generation:3,CreationTimestamp:2019-12-24 13:05:37 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e1fc1d67-a8d6-4141-9576-015fa36b36e5 0xc000a5b287 0xc000a5b288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 24 13:05:56.694: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 24 13:05:56.694: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4915,SelfLink:/apis/apps/v1/namespaces/deployment-4915/replicasets/nginx-deployment-7b8c6f4498,UID:3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda,ResourceVersion:17888043,Generation:3,CreationTimestamp:2019-12-24 13:05:07 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e1fc1d67-a8d6-4141-9576-015fa36b36e5 0xc000a5b3d7 0xc000a5b3d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 24 13:05:57.920: INFO: Pod "nginx-deployment-55fb7cb77f-b5c6k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b5c6k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-55fb7cb77f-b5c6k,UID:b3740d3f-5c22-49b4-9a7a-523562450db5,ResourceVersion:17888063,Generation:0,CreationTimestamp:2019-12-24 13:05:46 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f5afdac1-626d-43ea-9561-0ce926a6145f 0xc001dde5a7 0xc001dde5a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001dde650} {node.kubernetes.io/unreachable Exists NoExecute 0xc001dde670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-24 13:05:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.920: INFO: Pod "nginx-deployment-55fb7cb77f-bjmn6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bjmn6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-55fb7cb77f-bjmn6,UID:85005d0c-5bf4-41a0-a8fa-cac553aff808,ResourceVersion:17888008,Generation:0,CreationTimestamp:2019-12-24 13:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f5afdac1-626d-43ea-9561-0ce926a6145f 0xc001dde847 0xc001dde848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001dde910} {node.kubernetes.io/unreachable Exists NoExecute 0xc001dde990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.920: INFO: Pod "nginx-deployment-55fb7cb77f-bnrft" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bnrft,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-55fb7cb77f-bnrft,UID:08bb4171-77dc-42fa-9ed6-8235d413a48b,ResourceVersion:17888022,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f5afdac1-626d-43ea-9561-0ce926a6145f 0xc001ddea17 0xc001ddea18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ddea90} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001ddeab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.920: INFO: Pod "nginx-deployment-55fb7cb77f-gjwz5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gjwz5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-55fb7cb77f-gjwz5,UID:c8ced7f1-aa0b-46b0-b8e0-0f72acab57ae,ResourceVersion:17888024,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f5afdac1-626d-43ea-9561-0ce926a6145f 0xc001ddeb37 0xc001ddeb38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ddebb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ddebd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.921: INFO: Pod "nginx-deployment-55fb7cb77f-htclm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-htclm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-55fb7cb77f-htclm,UID:feb7505f-d17a-4bc9-9274-5119fb92f5b4,ResourceVersion:17887962,Generation:0,CreationTimestamp:2019-12-24 13:05:38 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f5afdac1-626d-43ea-9561-0ce926a6145f 0xc001ddec57 0xc001ddec58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ddecd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ddecf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-24 13:05:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.921: INFO: Pod "nginx-deployment-55fb7cb77f-kt9ps" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kt9ps,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-55fb7cb77f-kt9ps,UID:1eb58166-f431-4cb4-bda1-0eb035d90d55,ResourceVersion:17888064,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f5afdac1-626d-43ea-9561-0ce926a6145f 0xc001ddedc7 0xc001ddedc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ddee40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ddee60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-24 13:05:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.921: INFO: Pod "nginx-deployment-55fb7cb77f-mggd7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mggd7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-55fb7cb77f-mggd7,UID:db1f5f91-1eef-41fb-98fe-451905bcabc8,ResourceVersion:17887972,Generation:0,CreationTimestamp:2019-12-24 13:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f5afdac1-626d-43ea-9561-0ce926a6145f 0xc001ddef37 0xc001ddef38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ddefb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ddefd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-24 13:05:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.921: INFO: Pod "nginx-deployment-55fb7cb77f-q9kv7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q9kv7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-55fb7cb77f-q9kv7,UID:9f05cf73-c0fd-4541-9fc4-baad94685ef7,ResourceVersion:17887952,Generation:0,CreationTimestamp:2019-12-24 13:05:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f5afdac1-626d-43ea-9561-0ce926a6145f 0xc001ddf0a7 0xc001ddf0a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ddf1d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ddf1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-24 13:05:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.921: INFO: Pod "nginx-deployment-55fb7cb77f-qxwhz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qxwhz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-55fb7cb77f-qxwhz,UID:57fee0cf-42a1-4df1-a4b0-54168ffa93f4,ResourceVersion:17887947,Generation:0,CreationTimestamp:2019-12-24 13:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f5afdac1-626d-43ea-9561-0ce926a6145f 0xc001ddf337 0xc001ddf338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ddf3b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ddf440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-24 13:05:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.922: INFO: Pod "nginx-deployment-55fb7cb77f-r86m8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r86m8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-55fb7cb77f-r86m8,UID:2a508d5a-ecc6-42c8-aa4f-c1205b82be8b,ResourceVersion:17888032,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f5afdac1-626d-43ea-9561-0ce926a6145f 0xc001ddf567 0xc001ddf568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ddf5d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ddf5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.922: INFO: Pod "nginx-deployment-55fb7cb77f-swfqd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-swfqd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-55fb7cb77f-swfqd,UID:c11c04ac-d295-40a8-84c5-bd97cdef8a87,ResourceVersion:17887975,Generation:0,CreationTimestamp:2019-12-24 13:05:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f5afdac1-626d-43ea-9561-0ce926a6145f 0xc001ddf6f7 0xc001ddf6f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ddf7b0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001ddf840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:38 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-24 13:05:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.922: INFO: Pod "nginx-deployment-55fb7cb77f-wfjq9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wfjq9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-55fb7cb77f-wfjq9,UID:d3e91aa2-e46a-47da-a840-7e0fc44eaa41,ResourceVersion:17888023,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f5afdac1-626d-43ea-9561-0ce926a6145f 0xc001ddf937 0xc001ddf938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ddfa00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ddfa20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.922: INFO: Pod "nginx-deployment-55fb7cb77f-wgwbk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wgwbk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-55fb7cb77f-wgwbk,UID:b98af734-a15c-4dfb-9fc0-bcf59dc28bf7,ResourceVersion:17888054,Generation:0,CreationTimestamp:2019-12-24 13:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f5afdac1-626d-43ea-9561-0ce926a6145f 0xc001ddfaf7 0xc001ddfaf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ddfbe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ddfc70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-24 13:05:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.922: INFO: Pod "nginx-deployment-7b8c6f4498-48bzn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-48bzn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-48bzn,UID:a38d417d-96cd-43dc-bbfa-1a3c4466b236,ResourceVersion:17887899,Generation:0,CreationTimestamp:2019-12-24 13:05:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc001ddfdd7 0xc001ddfdd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ddfe80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ddfea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2019-12-24 13:05:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:05:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c3b6c01acfc65c948a20976ee8d2310e4c5beab476f043e213bd840f9170506f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.922: INFO: Pod "nginx-deployment-7b8c6f4498-4bmvh" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4bmvh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-4bmvh,UID:8b4b6d5f-a65f-4502-b3e5-99568d5264e0,ResourceVersion:17888034,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc001ddff77 0xc001ddff78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b92060} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b92080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.923: INFO: Pod "nginx-deployment-7b8c6f4498-64m4x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-64m4x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-64m4x,UID:45273a28-17dc-4652-b059-37cc4311393c,ResourceVersion:17888025,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b92147 0xc000b92148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b92260} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b922b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.923: INFO: Pod "nginx-deployment-7b8c6f4498-85tsh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-85tsh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-85tsh,UID:b305d465-f0a1-4b23-9903-e2a15cb5f58e,ResourceVersion:17888027,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b92347 0xc000b92348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b923c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b924e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.923: INFO: Pod "nginx-deployment-7b8c6f4498-bschd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bschd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-bschd,UID:f0fc3f89-7453-4e0f-bf23-e7c0ff8ad9ce,ResourceVersion:17888026,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b927b7 0xc000b927b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b92a50} {node.kubernetes.io/unreachable Exists NoExecute 
0xc000b92dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.923: INFO: Pod "nginx-deployment-7b8c6f4498-bvdtf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bvdtf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-bvdtf,UID:cf6dfec9-6d16-4123-9013-de90fed82cb0,ResourceVersion:17887919,Generation:0,CreationTimestamp:2019-12-24 13:05:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b93027 0xc000b93028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b93240} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b93270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2019-12-24 13:05:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:05:36 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker://31691650c5e0fe67da47ba38ebc3235b64ac4fbaf265412d2c2e39a5eebedddf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.923: INFO: Pod "nginx-deployment-7b8c6f4498-g776v" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g776v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-g776v,UID:55c2c712-96e7-46d5-bd14-e0ff4fbe8b3e,ResourceVersion:17887889,Generation:0,CreationTimestamp:2019-12-24 13:05:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b93357 0xc000b93358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b933d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b93400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2019-12-24 13:05:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:05:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d33f50ebb91996c50e2271d13ff29914442b164ce136886d416ee361d8496ecc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.923: INFO: Pod "nginx-deployment-7b8c6f4498-gpsbb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gpsbb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-gpsbb,UID:e3aa23f1-b55f-4c2b-b2b9-97cf18a28691,ResourceVersion:17888035,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b93567 0xc000b93568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b935f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b93610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.923: INFO: Pod "nginx-deployment-7b8c6f4498-gqxrf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gqxrf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-gqxrf,UID:76ceb75a-884d-443a-9b51-fe412017b017,ResourceVersion:17888029,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b938d7 0xc000b938d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b93a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b93a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.923: INFO: Pod "nginx-deployment-7b8c6f4498-hhc2d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hhc2d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-hhc2d,UID:8490d263-9c4e-45b2-950e-2d247ea5842e,ResourceVersion:17888005,Generation:0,CreationTimestamp:2019-12-24 13:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b93b67 0xc000b93b68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b93c20} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b93d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.924: INFO: Pod "nginx-deployment-7b8c6f4498-hhtt8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hhtt8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-hhtt8,UID:db37cc8a-0980-4a6c-94fa-3cfa13d7f2ff,ResourceVersion:17888033,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b93dc7 0xc000b93dc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b93ed0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc000b93ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.924: INFO: Pod "nginx-deployment-7b8c6f4498-jvz2x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jvz2x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-jvz2x,UID:eec22504-cdb8-48c3-84a4-d56bd5505474,ResourceVersion:17888036,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b66087 0xc000b66088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b66140} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b66160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.924: INFO: Pod "nginx-deployment-7b8c6f4498-lg7cn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lg7cn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-lg7cn,UID:95ec04b6-5177-4799-ad2b-82efe87423f3,ResourceVersion:17888045,Generation:0,CreationTimestamp:2019-12-24 13:05:46 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b661f7 0xc000b661f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b662a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b662c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-24 13:05:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.925: INFO: Pod "nginx-deployment-7b8c6f4498-nkrkw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nkrkw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-nkrkw,UID:cafea867-8267-4625-8486-def398472d6c,ResourceVersion:17887908,Generation:0,CreationTimestamp:2019-12-24 13:05:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b66487 0xc000b66488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b666b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b666e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2019-12-24 13:05:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:05:36 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://071d3f46a3ef037dc4ffc22a8c2aa8b72a4ad675820167d6375bd14c46bb34e6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.925: INFO: Pod "nginx-deployment-7b8c6f4498-qgl69" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qgl69,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-qgl69,UID:ab5f24a6-f1f5-4625-88e2-0dbea426d4a2,ResourceVersion:17887867,Generation:0,CreationTimestamp:2019-12-24 13:05:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b669a7 0xc000b669a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b66a30} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b66a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-24 13:05:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:05:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a301232295d3e1364f3f4f85bf719ddeefc122939397175d8da340e0fd8781cb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.925: INFO: Pod "nginx-deployment-7b8c6f4498-qzsdx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qzsdx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-qzsdx,UID:16cafc27-022b-4963-bebb-e8f2cbfff5a4,ResourceVersion:17887875,Generation:0,CreationTimestamp:2019-12-24 13:05:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b66b27 0xc000b66b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b66c80} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b66ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-24 13:05:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:05:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e543554a2e1295f2b696d25878fb596bd894dd427b287ccefb396bb8300d8b78}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.925: INFO: Pod "nginx-deployment-7b8c6f4498-tph5s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tph5s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-tph5s,UID:32a6a73e-2926-4386-9653-20cb6e872758,ResourceVersion:17888038,Generation:0,CreationTimestamp:2019-12-24 13:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b66d77 0xc000b66d78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b66ec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b66f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-24 13:05:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.925: INFO: Pod "nginx-deployment-7b8c6f4498-tshx5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tshx5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-tshx5,UID:1672bfe6-a560-4ea7-8968-a75ce0824504,ResourceVersion:17888037,Generation:0,CreationTimestamp:2019-12-24 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b67067 0xc000b67068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b670f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b67240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.925: INFO: Pod "nginx-deployment-7b8c6f4498-z8b89" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z8b89,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-z8b89,UID:0c9e1ede-15ef-4076-80e2-fdb6673eaebe,ResourceVersion:17887896,Generation:0,CreationTimestamp:2019-12-24 13:05:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b67377 0xc000b67378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b674a0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc000b674c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-24 13:05:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:05:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ce3a16c9ee4f540b2a1329630cd69ac3c5f1d9ca63b81ea5947a789d5ebc39d7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 24 13:05:57.926: INFO: Pod "nginx-deployment-7b8c6f4498-zf775" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zf775,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4915,SelfLink:/api/v1/namespaces/deployment-4915/pods/nginx-deployment-7b8c6f4498-zf775,UID:d164dd82-c5b8-4b91-8fa2-3de34ca1da2a,ResourceVersion:17887883,Generation:0,CreationTimestamp:2019-12-24 13:05:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cdeb5b2-d7b0-4d2c-a7e1-ae5b2f93ccda 0xc000b676a7 0xc000b676a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lhf8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lhf8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lhf8w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b67720} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b67740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2019-12-24 13:05:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 13:05:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-24 13:05:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 13:05:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://89fd2f6bd4405dd92c84413d7593090e3779ebb2a1bef39678e81c70c539c5fc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:05:57.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4915" for this suite.
Dec 24 13:06:52.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:06:52.445: INFO: namespace deployment-4915 deletion completed in 51.089692966s

• [SLOW TEST:104.708 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
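The proportional-scaling spec that just finished drives a RollingUpdate Deployment (nginx:1.14-alpine pods labelled name=nginx, as seen in the dumps above) and then resizes it while a rollout is in flight, so that the old and new ReplicaSets each absorb a share of the change. The following is only a minimal client-go sketch of the setup phase, not the e2e framework's own helper code: the replica count, surge/unavailable budgets, and namespace are illustrative assumptions, and the Create signature assumes a client-go release contemporary with this v1.15 cluster (no context.Context argument).

// Sketch only: a RollingUpdate Deployment similar in shape to the one
// this spec scales. Values marked "assumed" are not taken from this run.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Same kubeconfig path the suite logs at startup.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	maxSurge := intstr.FromInt(3)       // assumed rollout budget
	maxUnavailable := intstr.FromInt(2) // assumed rollout budget
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10), // assumed starting size
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	// Pre-1.17 client-go signature (no context argument).
	out, err := cs.AppsV1().Deployments("default").Create(d)
	if err != nil {
		panic(err)
	}
	fmt.Println("created deployment", out.Name)
}

Resizing the Deployment while the rollout is still progressing (for example by patching .spec.replicas) is what produces the mixed picture in the pod dump above: some pods of the old ReplicaSet still available, new ones Pending across both nodes.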
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:06:52.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 13:06:52.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af871729-0146-47d0-9ecf-dbc49edd8d90" in namespace "projected-57" to be "success or failure"
Dec 24 13:06:52.796: INFO: Pod "downwardapi-volume-af871729-0146-47d0-9ecf-dbc49edd8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 127.611424ms
Dec 24 13:06:54.805: INFO: Pod "downwardapi-volume-af871729-0146-47d0-9ecf-dbc49edd8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136702597s
Dec 24 13:06:56.813: INFO: Pod "downwardapi-volume-af871729-0146-47d0-9ecf-dbc49edd8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144741252s
Dec 24 13:06:58.823: INFO: Pod "downwardapi-volume-af871729-0146-47d0-9ecf-dbc49edd8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154911821s
Dec 24 13:07:00.829: INFO: Pod "downwardapi-volume-af871729-0146-47d0-9ecf-dbc49edd8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 8.161385979s
Dec 24 13:07:02.852: INFO: Pod "downwardapi-volume-af871729-0146-47d0-9ecf-dbc49edd8d90": Phase="Pending", Reason="", readiness=false. Elapsed: 10.184085521s
Dec 24 13:07:04.908: INFO: Pod "downwardapi-volume-af871729-0146-47d0-9ecf-dbc49edd8d90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.240079552s
STEP: Saw pod success
Dec 24 13:07:04.908: INFO: Pod "downwardapi-volume-af871729-0146-47d0-9ecf-dbc49edd8d90" satisfied condition "success or failure"
Dec 24 13:07:04.913: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-af871729-0146-47d0-9ecf-dbc49edd8d90 container client-container:
STEP: delete the pod
Dec 24 13:07:04.961: INFO: Waiting for pod downwardapi-volume-af871729-0146-47d0-9ecf-dbc49edd8d90 to disappear
Dec 24 13:07:04.966: INFO: Pod downwardapi-volume-af871729-0146-47d0-9ecf-dbc49edd8d90 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:07:04.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-57" for this suite.
Dec 24 13:07:11.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:07:11.143: INFO: namespace projected-57 deletion completed in 6.172409015s

• [SLOW TEST:18.697 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
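The projected downwardAPI spec above mounts a volume that exposes the container's own limits.cpu as a file; the container prints that file and the framework compares the output against the declared limit. The sketch below uses the same client-go types as the struct dumps in this log, but it is a reduced illustration: the image, command, mount path, and the 500m limit are assumptions, not values recorded in this run.

// Sketch only: a pod whose projected downwardAPI volume publishes the
// container's CPU limit at /etc/podinfo/cpu_limit.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPICPULimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"), // assumed limit
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									// limits.cpu is rendered using the default
									// divisor of 1, i.e. rounded up to whole cores.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

Because the pod exits after printing the file, the framework can treat Succeeded as "success or failure" satisfied, which is the polling pattern visible in the Elapsed records above.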
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:07:11.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-193db862-bfed-4859-80be-47c9a88b4258
STEP: Creating a pod to test consume configMaps
Dec 24 13:07:11.384: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-657d9311-dea4-4ba5-aaea-cb15a4196e52" in namespace "projected-4428" to be "success or failure"
Dec 24 13:07:11.394: INFO: Pod "pod-projected-configmaps-657d9311-dea4-4ba5-aaea-cb15a4196e52": Phase="Pending", Reason="", readiness=false. Elapsed: 9.575909ms
Dec 24 13:07:13.405: INFO: Pod "pod-projected-configmaps-657d9311-dea4-4ba5-aaea-cb15a4196e52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021042336s
Dec 24 13:07:15.415: INFO: Pod "pod-projected-configmaps-657d9311-dea4-4ba5-aaea-cb15a4196e52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030973305s
Dec 24 13:07:17.422: INFO: Pod "pod-projected-configmaps-657d9311-dea4-4ba5-aaea-cb15a4196e52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037755453s
Dec 24 13:07:19.431: INFO: Pod "pod-projected-configmaps-657d9311-dea4-4ba5-aaea-cb15a4196e52": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046330475s
Dec 24 13:07:21.445: INFO: Pod "pod-projected-configmaps-657d9311-dea4-4ba5-aaea-cb15a4196e52": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060518145s
Dec 24 13:07:23.454: INFO: Pod "pod-projected-configmaps-657d9311-dea4-4ba5-aaea-cb15a4196e52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.069995855s
STEP: Saw pod success
Dec 24 13:07:23.454: INFO: Pod "pod-projected-configmaps-657d9311-dea4-4ba5-aaea-cb15a4196e52" satisfied condition "success or failure"
Dec 24 13:07:23.462: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-657d9311-dea4-4ba5-aaea-cb15a4196e52 container projected-configmap-volume-test:
STEP: delete the pod
Dec 24 13:07:23.523: INFO: Waiting for pod pod-projected-configmaps-657d9311-dea4-4ba5-aaea-cb15a4196e52 to disappear
Dec 24 13:07:23.537: INFO: Pod pod-projected-configmaps-657d9311-dea4-4ba5-aaea-cb15a4196e52 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:07:23.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4428" for this suite.
Dec 24 13:07:29.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:07:29.880: INFO: namespace projected-4428 deletion completed in 6.335826405s

• [SLOW TEST:18.737 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
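The defaultMode case above is about file permissions: every key projected from the ConfigMap is created with the requested mode, and the test container inspects the file to verify both mode and content. A sketch of the two objects involved follows; the ConfigMap data, the 0400 mode, the image, and the mount path are assumptions for illustration, not the values this run used.

// Sketch only: a ConfigMap plus a pod that consumes it through a
// projected volume with DefaultMode 0400.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func projectedConfigMapObjects() (*corev1.ConfigMap, *corev1.Pod) {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"}, // assumed contents
	}
	mode := int32Ptr(0400) // -r-------- on every projected file
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"ls -l /etc/projected/data-1 && cat /etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cfg",
					MountPath: "/etc/projected",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
							},
						}},
					},
				},
			}},
		},
	}
	return cm, pod
}

The [LinuxOnly] tag on the spec exists because these POSIX file modes have no Windows equivalent; on Linux the kubelet applies DefaultMode to each projected key unless an item-level Mode overrides it.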
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:07:29.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-rrlv
STEP: Creating a pod to test atomic-volume-subpath
Dec 24 13:07:30.093: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rrlv" in namespace "subpath-8923" to be "success or failure"
Dec 24 13:07:30.109: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.094945ms
Dec 24 13:07:32.126: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03290836s
Dec 24 13:07:34.141: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047815757s
Dec 24 13:07:36.147: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05404266s
Dec 24 13:07:38.161: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068452514s
Dec 24 13:07:40.173: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Running", Reason="", readiness=true. Elapsed: 10.079585922s
Dec 24 13:07:42.193: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Running", Reason="", readiness=true. Elapsed: 12.100240163s
Dec 24 13:07:44.202: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Running", Reason="", readiness=true. Elapsed: 14.108970273s
Dec 24 13:07:46.207: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Running", Reason="", readiness=true. Elapsed: 16.114450861s
Dec 24 13:07:48.223: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Running", Reason="", readiness=true. Elapsed: 18.130211052s
Dec 24 13:07:50.242: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Running", Reason="", readiness=true. Elapsed: 20.149349354s
Dec 24 13:07:52.870: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Running", Reason="", readiness=true. Elapsed: 22.777197951s
Dec 24 13:07:54.877: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Running", Reason="", readiness=true. Elapsed: 24.78429753s
Dec 24 13:07:56.904: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Running", Reason="", readiness=true. Elapsed: 26.811560225s
Dec 24 13:07:58.913: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Running", Reason="", readiness=true. Elapsed: 28.819722818s
Dec 24 13:08:00.931: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Running", Reason="", readiness=true. Elapsed: 30.837732545s
Dec 24 13:08:02.945: INFO: Pod "pod-subpath-test-downwardapi-rrlv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.852399194s
STEP: Saw pod success
Dec 24 13:08:02.945: INFO: Pod "pod-subpath-test-downwardapi-rrlv" satisfied condition "success or failure"
Dec 24 13:08:02.952: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-rrlv container test-container-subpath-downwardapi-rrlv:
STEP: delete the pod
Dec 24 13:08:03.170: INFO: Waiting for pod pod-subpath-test-downwardapi-rrlv to disappear
Dec 24 13:08:03.180: INFO: Pod pod-subpath-test-downwardapi-rrlv no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rrlv
Dec 24 13:08:03.180: INFO: Deleting pod "pod-subpath-test-downwardapi-rrlv" in namespace "subpath-8923"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:08:03.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8923" for this suite.
Dec 24 13:08:09.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:08:09.382: INFO: namespace subpath-8923 deletion completed in 6.159572087s • [SLOW TEST:39.501 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:08:09.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 24 13:08:20.220: INFO: Successfully updated pod "labelsupdatee2e5b218-18bb-4cec-8557-6584ec323511" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:08:22.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3022" for this suite. 
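The "Successfully updated pod" line above corresponds to relabeling a pod whose downward API volume projects metadata.labels, after which the kubelet rewrites the projected file. A minimal sketch, with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-sketch          # hypothetical name
  labels:
    testLabel: initial
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels

After something like kubectl label pod labelsupdate-sketch testLabel=updated --overwrite, the contents of /etc/podinfo/labels eventually change to match, which is what the test waits to observe.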
Dec 24 13:08:44.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:08:44.465: INFO: namespace downward-api-3022 deletion completed in 22.151758373s • [SLOW TEST:35.083 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:08:44.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Dec 24 13:08:56.748: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Dec 24 13:09:06.935: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:09:06.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9123" for this suite.
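For the Delete Grace Period spec above: the test submits a pod, deletes it gracefully, and confirms the kubelet observed the termination notice before the pod object disappeared. A hedged sketch of the moving parts (names and timings are illustrative, not from the test):

apiVersion: v1
kind: Pod
metadata:
  name: graceful-delete-sketch       # hypothetical name
spec:
  terminationGracePeriodSeconds: 30  # how long the kubelet waits after SIGTERM
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "trap 'exit 0' TERM; sleep 3600"]

A graceful delete (for example, kubectl delete pod graceful-delete-sketch --grace-period=30) sends SIGTERM, waits up to the grace period, and only then sends SIGKILL.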
Dec 24 13:09:12.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:09:13.104: INFO: namespace pods-9123 deletion completed in 6.155094947s • [SLOW TEST:28.638 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:09:13.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Dec 24 13:09:13.335: INFO: Waiting up to 5m0s for pod "client-containers-25e2f05d-147a-4251-af47-5830f53f7073" in namespace "containers-1772" to be "success or failure" Dec 24 13:09:13.467: INFO: Pod "client-containers-25e2f05d-147a-4251-af47-5830f53f7073": Phase="Pending", Reason="", readiness=false. Elapsed: 131.560151ms Dec 24 13:09:15.476: INFO: Pod "client-containers-25e2f05d-147a-4251-af47-5830f53f7073": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140265141s Dec 24 13:09:17.489: INFO: Pod "client-containers-25e2f05d-147a-4251-af47-5830f53f7073": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154011459s Dec 24 13:09:19.513: INFO: Pod "client-containers-25e2f05d-147a-4251-af47-5830f53f7073": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177180014s Dec 24 13:09:21.533: INFO: Pod "client-containers-25e2f05d-147a-4251-af47-5830f53f7073": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197972096s Dec 24 13:09:23.541: INFO: Pod "client-containers-25e2f05d-147a-4251-af47-5830f53f7073": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.205966581s STEP: Saw pod success Dec 24 13:09:23.541: INFO: Pod "client-containers-25e2f05d-147a-4251-af47-5830f53f7073" satisfied condition "success or failure" Dec 24 13:09:23.545: INFO: Trying to get logs from node iruya-node pod client-containers-25e2f05d-147a-4251-af47-5830f53f7073 container test-container: STEP: delete the pod Dec 24 13:09:23.687: INFO: Waiting for pod client-containers-25e2f05d-147a-4251-af47-5830f53f7073 to disappear Dec 24 13:09:23.700: INFO: Pod client-containers-25e2f05d-147a-4251-af47-5830f53f7073 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:09:23.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1772" for this suite. 
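The "override all" pod above relies on the standard mapping between pod fields and image metadata: command replaces the image's ENTRYPOINT and args replaces its CMD. A minimal sketch (hypothetical name, busybox assumed):

apiVersion: v1
kind: Pod
metadata:
  name: override-sketch               # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]            # overrides the image ENTRYPOINT
    args: ["overridden", "arguments"] # overrides the image CMD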
Dec 24 13:09:29.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:09:30.069: INFO: namespace containers-1772 deletion completed in 6.361824541s • [SLOW TEST:16.965 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:09:30.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-fp2v STEP: Creating a pod to test atomic-volume-subpath Dec 24 13:09:30.385: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fp2v" in namespace "subpath-1990" to be "success or failure" Dec 24 13:09:30.397: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Pending", Reason="", readiness=false. Elapsed: 11.809476ms Dec 24 13:09:32.448: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062488509s Dec 24 13:09:34.461: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075606832s Dec 24 13:09:36.474: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08875658s Dec 24 13:09:38.489: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103722497s Dec 24 13:09:40.507: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Running", Reason="", readiness=true. Elapsed: 10.122012766s Dec 24 13:09:42.539: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Running", Reason="", readiness=true. Elapsed: 12.153896854s Dec 24 13:09:44.549: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Running", Reason="", readiness=true. Elapsed: 14.163753684s Dec 24 13:09:46.560: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Running", Reason="", readiness=true. Elapsed: 16.174420593s Dec 24 13:09:48.584: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Running", Reason="", readiness=true. Elapsed: 18.198185991s Dec 24 13:09:50.608: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Running", Reason="", readiness=true. Elapsed: 20.223068165s Dec 24 13:09:52.624: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Running", Reason="", readiness=true. Elapsed: 22.238207713s Dec 24 13:09:54.644: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.258996832s Dec 24 13:09:56.654: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Running", Reason="", readiness=true. Elapsed: 26.269002101s Dec 24 13:09:58.664: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Running", Reason="", readiness=true. Elapsed: 28.278459899s Dec 24 13:10:00.670: INFO: Pod "pod-subpath-test-projected-fp2v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.284893928s STEP: Saw pod success Dec 24 13:10:00.670: INFO: Pod "pod-subpath-test-projected-fp2v" satisfied condition "success or failure" Dec 24 13:10:00.673: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-fp2v container test-container-subpath-projected-fp2v: STEP: delete the pod Dec 24 13:10:00.784: INFO: Waiting for pod pod-subpath-test-projected-fp2v to disappear Dec 24 13:10:00.794: INFO: Pod pod-subpath-test-projected-fp2v no longer exists STEP: Deleting pod pod-subpath-test-projected-fp2v Dec 24 13:10:00.794: INFO: Deleting pod "pod-subpath-test-projected-fp2v" in namespace "subpath-1990" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:10:00.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1990" for this suite. Dec 24 13:10:06.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:10:07.051: INFO: namespace subpath-1990 deletion completed in 6.209815273s • [SLOW TEST:36.982 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:10:07.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 24 13:10:07.190: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e294327-6d4c-47f2-b38d-e90abae792e8" in namespace "downward-api-8520" to be "success or failure" Dec 24 13:10:07.224: INFO: Pod "downwardapi-volume-1e294327-6d4c-47f2-b38d-e90abae792e8": Phase="Pending", Reason="", readiness=false. Elapsed: 33.927135ms Dec 24 13:10:09.237: INFO: Pod "downwardapi-volume-1e294327-6d4c-47f2-b38d-e90abae792e8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047224409s Dec 24 13:10:11.246: INFO: Pod "downwardapi-volume-1e294327-6d4c-47f2-b38d-e90abae792e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056178082s Dec 24 13:10:13.253: INFO: Pod "downwardapi-volume-1e294327-6d4c-47f2-b38d-e90abae792e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063074673s Dec 24 13:10:15.258: INFO: Pod "downwardapi-volume-1e294327-6d4c-47f2-b38d-e90abae792e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068772411s STEP: Saw pod success Dec 24 13:10:15.259: INFO: Pod "downwardapi-volume-1e294327-6d4c-47f2-b38d-e90abae792e8" satisfied condition "success or failure" Dec 24 13:10:15.261: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1e294327-6d4c-47f2-b38d-e90abae792e8 container client-container: STEP: delete the pod Dec 24 13:10:15.300: INFO: Waiting for pod downwardapi-volume-1e294327-6d4c-47f2-b38d-e90abae792e8 to disappear Dec 24 13:10:15.463: INFO: Pod downwardapi-volume-1e294327-6d4c-47f2-b38d-e90abae792e8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:10:15.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8520" for this suite. Dec 24 13:10:21.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:10:21.650: INFO: namespace downward-api-8520 deletion completed in 6.179053678s • [SLOW TEST:14.598 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:10:21.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-19161fed-4fea-46df-b5b9-740893f27136 in namespace container-probe-9010 Dec 24 13:10:29.845: INFO: Started pod liveness-19161fed-4fea-46df-b5b9-740893f27136 in namespace container-probe-9010 STEP: checking the pod's current state and verifying that restartCount is present Dec 24 13:10:29.854: INFO: Initial restart count of pod liveness-19161fed-4fea-46df-b5b9-740893f27136 is 0 Dec 24 13:10:45.984: INFO: Restart count of pod container-probe-9010/liveness-19161fed-4fea-46df-b5b9-740893f27136 is now 1 (16.130230777s elapsed) Dec 24 13:11:04.175: INFO: Restart count of pod 
container-probe-9010/liveness-19161fed-4fea-46df-b5b9-740893f27136 is now 2 (34.321576917s elapsed) Dec 24 13:11:24.295: INFO: Restart count of pod container-probe-9010/liveness-19161fed-4fea-46df-b5b9-740893f27136 is now 3 (54.44143796s elapsed) Dec 24 13:11:44.451: INFO: Restart count of pod container-probe-9010/liveness-19161fed-4fea-46df-b5b9-740893f27136 is now 4 (1m14.59753101s elapsed) Dec 24 13:12:04.729: INFO: Restart count of pod container-probe-9010/liveness-19161fed-4fea-46df-b5b9-740893f27136 is now 5 (1m34.875348281s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:12:04.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9010" for this suite. Dec 24 13:12:10.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:12:11.030: INFO: namespace container-probe-9010 deletion completed in 6.195466671s • [SLOW TEST:109.380 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:12:11.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 24 13:12:19.337: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:12:19.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2077" for this suite. 
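The termination-message spec above checks the "Expected: &{OK}" path: the container writes OK to its termination message file and exits zero, so the message is read from the file, and the FallbackToLogsOnError policy never needs to consult the logs. A sketch with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-sketch    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log     # the default path, shown explicitly
    terminationMessagePolicy: FallbackToLogsOnError  # logs are used only on failure with an empty file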
Dec 24 13:12:25.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:12:25.537: INFO: namespace container-runtime-2077 deletion completed in 6.1164414s • [SLOW TEST:14.507 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:12:25.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1224 13:12:29.439354 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 24 13:12:29.439: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:12:29.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-33" for this suite. 
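For the garbage collector spec above: a Deployment owns its ReplicaSet via ownerReferences and the ReplicaSet owns its Pods, so deleting the Deployment without orphaning lets the garbage collector remove the whole chain; the repeated "expected 0 rs, got 1 rs" lines are the test polling until collection finishes. A sketch of such a Deployment (hypothetical name):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gc-sketch                    # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gc-sketch
  template:
    metadata:
      labels:
        app: gc-sketch
    spec:
      containers:
      - name: nginx
        image: nginx

Deleting it with propagationPolicy Background or Foreground (the non-orphaning modes) triggers cascading deletion; propagationPolicy Orphan would leave the ReplicaSet and Pods behind.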
Dec 24 13:12:38.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:12:38.722: INFO: namespace gc-33 deletion completed in 9.278718255s • [SLOW TEST:13.184 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:12:38.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-5bb6f489-a7f6-4d73-b65a-0a76e68949e3 STEP: Creating a pod to test consume configMaps Dec 24 13:12:38.896: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1d553fbf-daa7-45fe-bf3d-bfec79815a0b" in namespace "projected-6946" to be "success or failure" Dec 24 13:12:38.945: INFO: Pod "pod-projected-configmaps-1d553fbf-daa7-45fe-bf3d-bfec79815a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 48.785307ms Dec 24 13:12:40.953: INFO: Pod "pod-projected-configmaps-1d553fbf-daa7-45fe-bf3d-bfec79815a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057313059s Dec 24 13:12:42.968: INFO: Pod "pod-projected-configmaps-1d553fbf-daa7-45fe-bf3d-bfec79815a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071886962s Dec 24 13:12:44.980: INFO: Pod "pod-projected-configmaps-1d553fbf-daa7-45fe-bf3d-bfec79815a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083764592s Dec 24 13:12:46.994: INFO: Pod "pod-projected-configmaps-1d553fbf-daa7-45fe-bf3d-bfec79815a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098316171s Dec 24 13:12:49.004: INFO: Pod "pod-projected-configmaps-1d553fbf-daa7-45fe-bf3d-bfec79815a0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108291571s STEP: Saw pod success Dec 24 13:12:49.004: INFO: Pod "pod-projected-configmaps-1d553fbf-daa7-45fe-bf3d-bfec79815a0b" satisfied condition "success or failure" Dec 24 13:12:49.008: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1d553fbf-daa7-45fe-bf3d-bfec79815a0b container projected-configmap-volume-test: STEP: delete the pod Dec 24 13:12:49.178: INFO: Waiting for pod pod-projected-configmaps-1d553fbf-daa7-45fe-bf3d-bfec79815a0b to disappear Dec 24 13:12:49.196: INFO: Pod pod-projected-configmaps-1d553fbf-daa7-45fe-bf3d-bfec79815a0b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:12:49.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6946" for this suite. 
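The projected configMap spec above mounts ConfigMap keys through the projected volume type rather than a plain configMap volume. A sketch with hypothetical names and data:

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-sketch          # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-pod-sketch      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-cm-sketch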
Dec 24 13:12:55.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:12:55.330: INFO: namespace projected-6946 deletion completed in 6.126298277s • [SLOW TEST:16.607 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:12:55.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Dec 24 13:12:55.599: INFO: Waiting up to 5m0s for pod "var-expansion-2f894669-62cc-4010-aa87-02eb29495a19" in namespace "var-expansion-9998" to be "success or failure" Dec 24 13:12:55.608: INFO: Pod "var-expansion-2f894669-62cc-4010-aa87-02eb29495a19": Phase="Pending", Reason="", readiness=false. Elapsed: 9.607545ms Dec 24 13:12:57.623: INFO: Pod "var-expansion-2f894669-62cc-4010-aa87-02eb29495a19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023945705s Dec 24 13:12:59.642: INFO: Pod "var-expansion-2f894669-62cc-4010-aa87-02eb29495a19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043234836s Dec 24 13:13:01.650: INFO: Pod "var-expansion-2f894669-62cc-4010-aa87-02eb29495a19": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051443146s Dec 24 13:13:03.861: INFO: Pod "var-expansion-2f894669-62cc-4010-aa87-02eb29495a19": Phase="Pending", Reason="", readiness=false. Elapsed: 8.261715059s Dec 24 13:13:05.875: INFO: Pod "var-expansion-2f894669-62cc-4010-aa87-02eb29495a19": Phase="Pending", Reason="", readiness=false. Elapsed: 10.276402912s Dec 24 13:13:07.888: INFO: Pod "var-expansion-2f894669-62cc-4010-aa87-02eb29495a19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.289103923s STEP: Saw pod success Dec 24 13:13:07.888: INFO: Pod "var-expansion-2f894669-62cc-4010-aa87-02eb29495a19" satisfied condition "success or failure" Dec 24 13:13:07.894: INFO: Trying to get logs from node iruya-node pod var-expansion-2f894669-62cc-4010-aa87-02eb29495a19 container dapi-container: STEP: delete the pod Dec 24 13:13:07.958: INFO: Waiting for pod var-expansion-2f894669-62cc-4010-aa87-02eb29495a19 to disappear Dec 24 13:13:07.997: INFO: Pod var-expansion-2f894669-62cc-4010-aa87-02eb29495a19 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:13:07.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9998" for this suite. 
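The variable-expansion spec above depends on Kubernetes (not the shell) expanding $(VAR) references in a container's command and args from the container's environment. A minimal sketch with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-sketch         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test message"
    command: ["sh", "-c", "echo $(MESSAGE)"]   # $(MESSAGE) is substituted by Kubernetes before the shell runs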
Dec 24 13:13:14.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:13:14.143: INFO: namespace var-expansion-9998 deletion completed in 6.139141502s • [SLOW TEST:18.813 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:13:14.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-809f2950-e1f6-410a-aa4f-460d9e18e327 STEP: Creating secret with name secret-projected-all-test-volume-8bcb5912-86fc-43b9-afed-028eee03403b STEP: Creating a pod to test Check all projections for projected volume plugin Dec 24 13:13:14.327: INFO: Waiting up to 5m0s for pod "projected-volume-15e17e1a-95f0-409c-b878-f5804c4a9a96" in namespace "projected-1574" to be "success or failure" Dec 24 13:13:14.340: INFO: Pod "projected-volume-15e17e1a-95f0-409c-b878-f5804c4a9a96": Phase="Pending", Reason="", readiness=false. Elapsed: 12.755288ms Dec 24 13:13:16.360: INFO: Pod "projected-volume-15e17e1a-95f0-409c-b878-f5804c4a9a96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033387369s Dec 24 13:13:18.378: INFO: Pod "projected-volume-15e17e1a-95f0-409c-b878-f5804c4a9a96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05043354s Dec 24 13:13:20.386: INFO: Pod "projected-volume-15e17e1a-95f0-409c-b878-f5804c4a9a96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059038745s Dec 24 13:13:22.394: INFO: Pod "projected-volume-15e17e1a-95f0-409c-b878-f5804c4a9a96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067039701s Dec 24 13:13:24.407: INFO: Pod "projected-volume-15e17e1a-95f0-409c-b878-f5804c4a9a96": Phase="Running", Reason="", readiness=true. Elapsed: 10.080240201s Dec 24 13:13:26.426: INFO: Pod "projected-volume-15e17e1a-95f0-409c-b878-f5804c4a9a96": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.099164832s STEP: Saw pod success Dec 24 13:13:26.426: INFO: Pod "projected-volume-15e17e1a-95f0-409c-b878-f5804c4a9a96" satisfied condition "success or failure" Dec 24 13:13:26.436: INFO: Trying to get logs from node iruya-node pod projected-volume-15e17e1a-95f0-409c-b878-f5804c4a9a96 container projected-all-volume-test: STEP: delete the pod Dec 24 13:13:26.557: INFO: Waiting for pod projected-volume-15e17e1a-95f0-409c-b878-f5804c4a9a96 to disappear Dec 24 13:13:26.565: INFO: Pod projected-volume-15e17e1a-95f0-409c-b878-f5804c4a9a96 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:13:26.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1574" for this suite. Dec 24 13:13:32.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:13:32.769: INFO: namespace projected-1574 deletion completed in 6.192833626s • [SLOW TEST:18.626 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:13:32.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-4aa999c9-c4bd-4e2f-855d-690befbe1047 STEP: Creating a pod to test consume secrets Dec 24 13:13:32.985: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3fe751f8-328b-4d6b-b7e7-d8cbc0f0c643" in namespace "projected-6723" to be "success or failure" Dec 24 13:13:32.991: INFO: Pod "pod-projected-secrets-3fe751f8-328b-4d6b-b7e7-d8cbc0f0c643": Phase="Pending", Reason="", readiness=false. Elapsed: 6.689806ms Dec 24 13:13:35.006: INFO: Pod "pod-projected-secrets-3fe751f8-328b-4d6b-b7e7-d8cbc0f0c643": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021802527s Dec 24 13:13:37.016: INFO: Pod "pod-projected-secrets-3fe751f8-328b-4d6b-b7e7-d8cbc0f0c643": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031013078s Dec 24 13:13:39.034: INFO: Pod "pod-projected-secrets-3fe751f8-328b-4d6b-b7e7-d8cbc0f0c643": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04919454s Dec 24 13:13:41.044: INFO: Pod "pod-projected-secrets-3fe751f8-328b-4d6b-b7e7-d8cbc0f0c643": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.059742681s STEP: Saw pod success Dec 24 13:13:41.044: INFO: Pod "pod-projected-secrets-3fe751f8-328b-4d6b-b7e7-d8cbc0f0c643" satisfied condition "success or failure" Dec 24 13:13:41.048: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3fe751f8-328b-4d6b-b7e7-d8cbc0f0c643 container projected-secret-volume-test: STEP: delete the pod Dec 24 13:13:41.121: INFO: Waiting for pod pod-projected-secrets-3fe751f8-328b-4d6b-b7e7-d8cbc0f0c643 to disappear Dec 24 13:13:41.131: INFO: Pod pod-projected-secrets-3fe751f8-328b-4d6b-b7e7-d8cbc0f0c643 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:13:41.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6723" for this suite. Dec 24 13:13:47.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:13:47.353: INFO: namespace projected-6723 deletion completed in 6.215831946s • [SLOW TEST:14.582 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:13:47.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9904 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9904 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9904 Dec 24 13:13:47.625: INFO: Found 0 stateful pods, waiting for 1 Dec 24 13:13:57.647: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 24 13:13:57.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9904 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 24 13:14:00.873: INFO: stderr: "+ mv -v 
/usr/share/nginx/html/index.html /tmp/\n" Dec 24 13:14:00.873: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 24 13:14:00.873: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 24 13:14:00.916: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 24 13:14:00.916: INFO: Waiting for statefulset status.replicas updated to 0 Dec 24 13:14:00.941: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998714s Dec 24 13:14:01.953: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995111833s Dec 24 13:14:02.963: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.982611855s Dec 24 13:14:03.971: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.972512739s Dec 24 13:14:04.979: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.965200547s Dec 24 13:14:05.989: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.957092742s Dec 24 13:14:07.005: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.946117715s Dec 24 13:14:08.013: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.930379122s Dec 24 13:14:09.026: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.922784427s Dec 24 13:14:10.033: INFO: Verifying statefulset ss doesn't scale past 1 for another 909.154677ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9904 Dec 24 13:14:11.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9904 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 24 13:14:11.617: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 24 13:14:11.617: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 24 13:14:11.617: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 24 13:14:11.630: INFO: Found 1 stateful pods, waiting for 3 Dec 24 13:14:21.652: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 24 13:14:21.652: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 24 13:14:21.652: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 24 13:14:31.649: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 24 13:14:31.649: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 24 13:14:31.650: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 24 13:14:31.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9904 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 24 13:14:32.466: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 24 13:14:32.466: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 24 13:14:32.466: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 24 13:14:32.466: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9904 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 24 13:14:33.237: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 24 13:14:33.237: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 24 13:14:33.237: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 24 13:14:33.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9904 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 24 13:14:34.153: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 24 13:14:34.153: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 24 13:14:34.153: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 24 13:14:34.153: INFO: Waiting for statefulset status.replicas updated to 0 Dec 24 13:14:34.162: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Dec 24 13:14:44.183: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 24 13:14:44.183: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 24 13:14:44.183: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 24 13:14:44.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999997905s Dec 24 13:14:45.273: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.949819576s Dec 24 13:14:46.288: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.939719071s Dec 24 13:14:47.301: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.925026133s Dec 24 13:14:48.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.911438713s Dec 24 13:14:49.855: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.403581951s Dec 24 13:14:50.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.357306815s Dec 24 13:14:51.887: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.343859467s Dec 24 13:14:52.895: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.325432916s Dec 24 13:14:53.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 317.152198ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9904 Dec 24 13:14:54.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9904 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 24 13:14:55.474: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 24 13:14:55.474: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 24 13:14:55.474: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 24 13:14:55.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9904 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 24 13:14:55.951: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 24 13:14:55.952: INFO: stdout: "'/tmp/index.html' ->
'/usr/share/nginx/html/index.html'\n" Dec 24 13:14:55.952: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 24 13:14:55.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 24 13:14:56.675: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 24 13:14:56.676: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 24 13:14:56.676: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 24 13:14:56.676: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 24 13:15:16.706: INFO: Deleting all statefulset in ns statefulset-9904 Dec 24 13:15:16.710: INFO: Scaling statefulset ss to 0 Dec 24 13:15:16.721: INFO: Waiting for statefulset status.replicas updated to 0 Dec 24 13:15:16.725: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:15:16.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9904" for this suite. Dec 24 13:15:22.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:15:23.009: INFO: namespace statefulset-9904 deletion completed in 6.142393426s • [SLOW TEST:95.653 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:15:23.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-d0759627-1826-4ba8-bfca-0f8974a7af15 in namespace container-probe-3071 Dec 24 13:15:33.104: INFO: Started pod busybox-d0759627-1826-4ba8-bfca-0f8974a7af15 in namespace container-probe-3071 STEP: checking the pod's current state and verifying that 
restartCount is present Dec 24 13:15:33.108: INFO: Initial restart count of pod busybox-d0759627-1826-4ba8-bfca-0f8974a7af15 is 0 Dec 24 13:16:27.633: INFO: Restart count of pod container-probe-3071/busybox-d0759627-1826-4ba8-bfca-0f8974a7af15 is now 1 (54.524936076s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:16:27.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3071" for this suite. Dec 24 13:16:33.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:16:33.920: INFO: namespace container-probe-3071 deletion completed in 6.191382244s • [SLOW TEST:70.912 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:16:33.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 24 13:16:34.109: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86bfe83c-df6f-4a4a-879c-44a21e811cea" in namespace "downward-api-3574" to be "success or failure" Dec 24 13:16:34.124: INFO: Pod "downwardapi-volume-86bfe83c-df6f-4a4a-879c-44a21e811cea": Phase="Pending", Reason="", readiness=false. Elapsed: 15.168062ms Dec 24 13:16:36.142: INFO: Pod "downwardapi-volume-86bfe83c-df6f-4a4a-879c-44a21e811cea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033330844s Dec 24 13:16:38.153: INFO: Pod "downwardapi-volume-86bfe83c-df6f-4a4a-879c-44a21e811cea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044242914s Dec 24 13:16:40.174: INFO: Pod "downwardapi-volume-86bfe83c-df6f-4a4a-879c-44a21e811cea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064914263s Dec 24 13:16:42.189: INFO: Pod "downwardapi-volume-86bfe83c-df6f-4a4a-879c-44a21e811cea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079789986s Dec 24 13:16:44.297: INFO: Pod "downwardapi-volume-86bfe83c-df6f-4a4a-879c-44a21e811cea": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.188522381s STEP: Saw pod success Dec 24 13:16:44.298: INFO: Pod "downwardapi-volume-86bfe83c-df6f-4a4a-879c-44a21e811cea" satisfied condition "success or failure" Dec 24 13:16:44.304: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-86bfe83c-df6f-4a4a-879c-44a21e811cea container client-container: STEP: delete the pod Dec 24 13:16:44.487: INFO: Waiting for pod downwardapi-volume-86bfe83c-df6f-4a4a-879c-44a21e811cea to disappear Dec 24 13:16:44.507: INFO: Pod downwardapi-volume-86bfe83c-df6f-4a4a-879c-44a21e811cea no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:16:44.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3574" for this suite. Dec 24 13:16:50.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:16:50.764: INFO: namespace downward-api-3574 deletion completed in 6.248382167s • [SLOW TEST:16.843 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:16:50.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 24 13:16:51.025: INFO: Waiting up to 5m0s for pod "pod-1b08ced6-9c18-4412-9f57-f91ba801edac" in namespace "emptydir-8921" to be "success or failure" Dec 24 13:16:51.032: INFO: Pod "pod-1b08ced6-9c18-4412-9f57-f91ba801edac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117157ms Dec 24 13:16:53.041: INFO: Pod "pod-1b08ced6-9c18-4412-9f57-f91ba801edac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015512886s Dec 24 13:16:55.052: INFO: Pod "pod-1b08ced6-9c18-4412-9f57-f91ba801edac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025977054s Dec 24 13:16:57.056: INFO: Pod "pod-1b08ced6-9c18-4412-9f57-f91ba801edac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030703967s Dec 24 13:16:59.075: INFO: Pod "pod-1b08ced6-9c18-4412-9f57-f91ba801edac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049875629s Dec 24 13:17:01.094: INFO: Pod "pod-1b08ced6-9c18-4412-9f57-f91ba801edac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.068430208s STEP: Saw pod success Dec 24 13:17:01.094: INFO: Pod "pod-1b08ced6-9c18-4412-9f57-f91ba801edac" satisfied condition "success or failure" Dec 24 13:17:01.102: INFO: Trying to get logs from node iruya-node pod pod-1b08ced6-9c18-4412-9f57-f91ba801edac container test-container: STEP: delete the pod Dec 24 13:17:01.220: INFO: Waiting for pod pod-1b08ced6-9c18-4412-9f57-f91ba801edac to disappear Dec 24 13:17:01.309: INFO: Pod pod-1b08ced6-9c18-4412-9f57-f91ba801edac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:17:01.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8921" for this suite. Dec 24 13:17:07.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:17:07.449: INFO: namespace emptydir-8921 deletion completed in 6.135784747s • [SLOW TEST:16.685 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:17:07.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-d41fc183-1587-4f81-95f7-973542d370df STEP: Creating a pod to test consume configMaps Dec 24 13:17:07.572: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8b5f3939-1012-4c16-b09f-9ee99fa4f3d9" in namespace "projected-4077" to be "success or failure" Dec 24 13:17:07.648: INFO: Pod "pod-projected-configmaps-8b5f3939-1012-4c16-b09f-9ee99fa4f3d9": Phase="Pending", Reason="", readiness=false. Elapsed: 76.030184ms Dec 24 13:17:09.941: INFO: Pod "pod-projected-configmaps-8b5f3939-1012-4c16-b09f-9ee99fa4f3d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.368720674s Dec 24 13:17:11.959: INFO: Pod "pod-projected-configmaps-8b5f3939-1012-4c16-b09f-9ee99fa4f3d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.386274989s Dec 24 13:17:13.973: INFO: Pod "pod-projected-configmaps-8b5f3939-1012-4c16-b09f-9ee99fa4f3d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.401057963s Dec 24 13:17:15.982: INFO: Pod "pod-projected-configmaps-8b5f3939-1012-4c16-b09f-9ee99fa4f3d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.409452339s Dec 24 13:17:17.997: INFO: Pod "pod-projected-configmaps-8b5f3939-1012-4c16-b09f-9ee99fa4f3d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.425141631s STEP: Saw pod success Dec 24 13:17:17.998: INFO: Pod "pod-projected-configmaps-8b5f3939-1012-4c16-b09f-9ee99fa4f3d9" satisfied condition "success or failure" Dec 24 13:17:18.002: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-8b5f3939-1012-4c16-b09f-9ee99fa4f3d9 container projected-configmap-volume-test: STEP: delete the pod Dec 24 13:17:18.257: INFO: Waiting for pod pod-projected-configmaps-8b5f3939-1012-4c16-b09f-9ee99fa4f3d9 to disappear Dec 24 13:17:18.269: INFO: Pod pod-projected-configmaps-8b5f3939-1012-4c16-b09f-9ee99fa4f3d9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:17:18.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4077" for this suite. Dec 24 13:17:24.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:17:24.393: INFO: namespace projected-4077 deletion completed in 6.114164715s • [SLOW TEST:16.943 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:17:24.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-daf0f450-b49b-4a40-8367-c1656b2aaa07 in namespace container-probe-8230 Dec 24 13:17:32.540: INFO: Started pod busybox-daf0f450-b49b-4a40-8367-c1656b2aaa07 in namespace container-probe-8230 STEP: checking the pod's current state and verifying that restartCount is present Dec 24 13:17:32.546: INFO: Initial restart count of pod busybox-daf0f450-b49b-4a40-8367-c1656b2aaa07 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:21:33.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8230" for this suite. 
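For reference: both probe specs in this run drive the same pod shape, a busybox container with an exec liveness probe that runs "cat /tmp/health". A rough Go sketch against the v1.15-era k8s.io/api types this suite is built from (pod name, image tag, and timings are illustrative, not copied from the test source):

    package e2esketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // livenessExecPod builds a busybox pod whose liveness probe execs
    // "cat /tmp/health". The container creates the file, waits, then
    // removes it, so the probe fails and the kubelet restarts the
    // container; that is the restart count the "should be restarted"
    // spec polls for. The "should *not* be restarted" spec simply
    // leaves the file in place. Names and timings are illustrative.
    func livenessExecPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-example"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "busybox",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
                    LivenessProbe: &corev1.Probe{
                        // the embedded field is named ProbeHandler in newer k8s.io/api releases
                        Handler: corev1.Handler{
                            Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                        },
                        InitialDelaySeconds: 15,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
    }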
Dec 24 13:21:39.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:21:39.355: INFO: namespace container-probe-8230 deletion completed in 6.293970092s • [SLOW TEST:254.962 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:21:39.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-d8059b98-caf9-406f-8d2e-a05d31d18d4d STEP: Creating a pod to test consume secrets Dec 24 13:21:39.542: INFO: Waiting up to 5m0s for pod "pod-secrets-0d4ff3a9-88ea-447e-8f92-f5de6d2fddf5" in namespace "secrets-7145" to be "success or failure" Dec 24 13:21:39.593: INFO: Pod "pod-secrets-0d4ff3a9-88ea-447e-8f92-f5de6d2fddf5": Phase="Pending", Reason="", readiness=false. Elapsed: 51.179835ms Dec 24 13:21:41.602: INFO: Pod "pod-secrets-0d4ff3a9-88ea-447e-8f92-f5de6d2fddf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06077718s Dec 24 13:21:43.632: INFO: Pod "pod-secrets-0d4ff3a9-88ea-447e-8f92-f5de6d2fddf5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090578678s Dec 24 13:21:45.642: INFO: Pod "pod-secrets-0d4ff3a9-88ea-447e-8f92-f5de6d2fddf5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100653918s Dec 24 13:21:47.652: INFO: Pod "pod-secrets-0d4ff3a9-88ea-447e-8f92-f5de6d2fddf5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109891231s Dec 24 13:21:49.664: INFO: Pod "pod-secrets-0d4ff3a9-88ea-447e-8f92-f5de6d2fddf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.121832071s STEP: Saw pod success Dec 24 13:21:49.664: INFO: Pod "pod-secrets-0d4ff3a9-88ea-447e-8f92-f5de6d2fddf5" satisfied condition "success or failure" Dec 24 13:21:49.677: INFO: Trying to get logs from node iruya-node pod pod-secrets-0d4ff3a9-88ea-447e-8f92-f5de6d2fddf5 container secret-volume-test: STEP: delete the pod Dec 24 13:21:49.765: INFO: Waiting for pod pod-secrets-0d4ff3a9-88ea-447e-8f92-f5de6d2fddf5 to disappear Dec 24 13:21:49.883: INFO: Pod pod-secrets-0d4ff3a9-88ea-447e-8f92-f5de6d2fddf5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:21:49.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7145" for this suite. 
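For reference, the secret-volume spec above follows a fixed pattern: create a Secret, mount it into a pod as a volume, and have the container cat the projected file so the suite can verify the contents from the pod log. A rough sketch with invented names, using the same imports as the probe sketch above:

    // secretVolumePod pairs a Secret with a pod that mounts it read-only.
    // Every key in the Secret's Data map surfaces as a file under the
    // mount path. All identifiers here are hypothetical.
    func secretVolumePod() (*corev1.Secret, *corev1.Pod) {
        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test-example"},
            Data:       map[string][]byte{"data-1": []byte("value-1")},
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"/bin/sh", "-c", "cat /etc/secret-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                        ReadOnly:  true,
                    }},
                }},
                RestartPolicy: corev1.RestartPolicyNever,
            },
        }
        return secret, pod
    }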
Dec 24 13:21:55.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:21:56.074: INFO: namespace secrets-7145 deletion completed in 6.177743082s • [SLOW TEST:16.719 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:21:56.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-f818ed72-79e2-4967-94dc-aa8de6090c9c STEP: Creating a pod to test consume configMaps Dec 24 13:21:56.285: INFO: Waiting up to 5m0s for pod "pod-configmaps-95c8bbe6-03b1-46fa-a30b-e889136d772a" in namespace "configmap-4265" to be "success or failure" Dec 24 13:21:56.343: INFO: Pod "pod-configmaps-95c8bbe6-03b1-46fa-a30b-e889136d772a": Phase="Pending", Reason="", readiness=false. Elapsed: 58.416566ms Dec 24 13:21:58.355: INFO: Pod "pod-configmaps-95c8bbe6-03b1-46fa-a30b-e889136d772a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0706301s Dec 24 13:22:00.386: INFO: Pod "pod-configmaps-95c8bbe6-03b1-46fa-a30b-e889136d772a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101690192s Dec 24 13:22:02.399: INFO: Pod "pod-configmaps-95c8bbe6-03b1-46fa-a30b-e889136d772a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114080855s Dec 24 13:22:04.405: INFO: Pod "pod-configmaps-95c8bbe6-03b1-46fa-a30b-e889136d772a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120414878s Dec 24 13:22:06.426: INFO: Pod "pod-configmaps-95c8bbe6-03b1-46fa-a30b-e889136d772a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.140816808s STEP: Saw pod success Dec 24 13:22:06.426: INFO: Pod "pod-configmaps-95c8bbe6-03b1-46fa-a30b-e889136d772a" satisfied condition "success or failure" Dec 24 13:22:06.444: INFO: Trying to get logs from node iruya-node pod pod-configmaps-95c8bbe6-03b1-46fa-a30b-e889136d772a container configmap-volume-test: STEP: delete the pod Dec 24 13:22:06.629: INFO: Waiting for pod pod-configmaps-95c8bbe6-03b1-46fa-a30b-e889136d772a to disappear Dec 24 13:22:06.636: INFO: Pod pod-configmaps-95c8bbe6-03b1-46fa-a30b-e889136d772a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:22:06.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4265" for this suite. 
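The ConfigMap-volume spec above differs from the secret one only in the volume source; a sketch (names again invented, imports as before):

    // configMapVolume shows the ConfigMap equivalent: the pod and mount
    // wiring stay as in the secret sketch, only the volume source changes.
    func configMapVolume() (*corev1.ConfigMap, corev1.Volume) {
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-example"},
            Data:       map[string]string{"data-1": "value-1"},
        }
        vol := corev1.Volume{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                },
            },
        }
        return cm, vol
    }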
Dec 24 13:22:12.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:22:12.799: INFO: namespace configmap-4265 deletion completed in 6.15575818s • [SLOW TEST:16.725 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:22:12.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-8000a7bf-2104-4f9d-92a1-d83e47d020ba STEP: Creating a pod to test consume secrets Dec 24 13:22:12.924: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-24631839-4800-4894-b5fc-9345573f670c" in namespace "projected-5437" to be "success or failure" Dec 24 13:22:12.932: INFO: Pod "pod-projected-secrets-24631839-4800-4894-b5fc-9345573f670c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.658523ms Dec 24 13:22:14.944: INFO: Pod "pod-projected-secrets-24631839-4800-4894-b5fc-9345573f670c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019156469s Dec 24 13:22:16.960: INFO: Pod "pod-projected-secrets-24631839-4800-4894-b5fc-9345573f670c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035983872s Dec 24 13:22:18.969: INFO: Pod "pod-projected-secrets-24631839-4800-4894-b5fc-9345573f670c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044681146s Dec 24 13:22:20.979: INFO: Pod "pod-projected-secrets-24631839-4800-4894-b5fc-9345573f670c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05441505s STEP: Saw pod success Dec 24 13:22:20.979: INFO: Pod "pod-projected-secrets-24631839-4800-4894-b5fc-9345573f670c" satisfied condition "success or failure" Dec 24 13:22:20.992: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-24631839-4800-4894-b5fc-9345573f670c container projected-secret-volume-test: STEP: delete the pod Dec 24 13:22:21.284: INFO: Waiting for pod pod-projected-secrets-24631839-4800-4894-b5fc-9345573f670c to disappear Dec 24 13:22:21.295: INFO: Pod pod-projected-secrets-24631839-4800-4894-b5fc-9345573f670c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:22:21.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5437" for this suite. 
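The projected-secret spec above additionally pins file permissions through the projected volume's defaultMode, which is what the "[LinuxOnly]" tag and the spec name refer to. A sketch with a hypothetical secret name, imports as before:

    // projectedSecretVolume projects a Secret through a projected volume
    // and sets the mode every projected file gets by default.
    func projectedSecretVolume() corev1.Volume {
        mode := int32(0400) // illustrative; the test's actual mode may differ
        return corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-example"},
                        },
                    }},
                },
            },
        }
    }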
Dec 24 13:22:29.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:22:29.475: INFO: namespace projected-5437 deletion completed in 8.158407684s • [SLOW TEST:16.675 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:22:29.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 24 13:22:29.562: INFO: PodSpec: initContainers in spec.initContainers Dec 24 13:23:32.498: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-5dcb3272-0a7a-4205-811d-fd5aeae5fc07", GenerateName:"", Namespace:"init-container-2458", SelfLink:"/api/v1/namespaces/init-container-2458/pods/pod-init-5dcb3272-0a7a-4205-811d-fd5aeae5fc07", UID:"ad769ba8-8d8f-4c28-94f3-199dd3cd7f1c", ResourceVersion:"17890499", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712790549, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"562233473"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dlf6g", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002521540), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dlf6g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dlf6g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dlf6g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002851f28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002597380), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002851fb0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002851fd0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002851fd8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002851fdc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712790549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712790549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712790549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712790549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc001061120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0021dc000)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0021dc070)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://2e5eb61c4180ea0ac9b1fab4b41215aed0ec206ef0197b863bbb8bc4bd8a643f"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001061160), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001061140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:23:32.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2458" for this suite. Dec 24 13:23:54.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:23:54.717: INFO: namespace init-container-2458 deletion completed in 22.195875053s • [SLOW TEST:85.242 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:23:54.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 24 13:23:54.841: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 24 13:23:55.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1423" for this suite. 
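For reference, the CustomResourceDefinition spec above only needs a minimal definition to round-trip create/delete. A sketch against the apiextensions v1beta1 types that a v1.15 cluster serves (group and kind are invented for illustration; metav1 import as in the earlier sketches):

    import apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"

    // minimalCRD is the kind of object the create/delete spec round-trips.
    // The object name must be <plural>.<group>.
    func minimalCRD() *apiextv1beta1.CustomResourceDefinition {
        return &apiextv1beta1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
            Spec: apiextv1beta1.CustomResourceDefinitionSpec{
                Group:   "example.com",
                Version: "v1", // single-version field; v1.15 also accepts a Versions list
                Scope:   apiextv1beta1.NamespaceScoped,
                Names: apiextv1beta1.CustomResourceDefinitionNames{
                    Plural: "foos",
                    Kind:   "Foo",
                },
            },
        }
    }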
Dec 24 13:24:02.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 24 13:24:02.327: INFO: namespace custom-resource-definition-1423 deletion completed in 6.366518181s • [SLOW TEST:7.610 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 24 13:24:02.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-5847 I1224 13:24:02.492679 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5847, replica count: 1 I1224 13:24:03.544019 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:24:04.545111 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:24:05.545905 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:24:06.546846 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:24:07.547346 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:24:08.547912 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:24:09.548435 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1224 13:24:10.549195 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 24 13:24:10.749: INFO: Created: latency-svc-z9zjd Dec 24 13:24:10.778: INFO: Got endpoints: latency-svc-z9zjd [128.895864ms] Dec 24 13:24:10.852: INFO: Created: latency-svc-zzhch Dec 24 13:24:10.981: INFO: Got endpoints: latency-svc-zzhch [201.098455ms] Dec 24 13:24:10.988: INFO: Created: latency-svc-z46jg Dec 24 13:24:11.040: INFO: Got endpoints: latency-svc-z46jg [260.005803ms] Dec 24 13:24:11.042: INFO: Created: latency-svc-nchmn Dec 24 
13:24:11.064: INFO: Got endpoints: latency-svc-nchmn [283.681558ms] Dec 24 13:24:11.251: INFO: Created: latency-svc-vp4kq Dec 24 13:24:11.278: INFO: Got endpoints: latency-svc-vp4kq [498.100544ms] Dec 24 13:24:11.338: INFO: Created: latency-svc-229wr Dec 24 13:24:11.519: INFO: Got endpoints: latency-svc-229wr [738.227328ms] Dec 24 13:24:11.551: INFO: Created: latency-svc-qsxkq Dec 24 13:24:11.556: INFO: Got endpoints: latency-svc-qsxkq [776.940862ms] Dec 24 13:24:11.621: INFO: Created: latency-svc-5h5ml Dec 24 13:24:11.721: INFO: Created: latency-svc-6b9tf Dec 24 13:24:11.723: INFO: Got endpoints: latency-svc-5h5ml [943.637506ms] Dec 24 13:24:11.734: INFO: Got endpoints: latency-svc-6b9tf [178.242729ms] Dec 24 13:24:11.802: INFO: Created: latency-svc-fg8dt Dec 24 13:24:11.810: INFO: Got endpoints: latency-svc-fg8dt [1.029522928s] Dec 24 13:24:11.958: INFO: Created: latency-svc-p6g24 Dec 24 13:24:11.976: INFO: Got endpoints: latency-svc-p6g24 [1.195814076s] Dec 24 13:24:12.029: INFO: Created: latency-svc-8ltcb Dec 24 13:24:12.042: INFO: Got endpoints: latency-svc-8ltcb [1.26272183s] Dec 24 13:24:12.160: INFO: Created: latency-svc-84bhx Dec 24 13:24:12.180: INFO: Got endpoints: latency-svc-84bhx [1.399121564s] Dec 24 13:24:12.312: INFO: Created: latency-svc-qr4sm Dec 24 13:24:12.318: INFO: Got endpoints: latency-svc-qr4sm [1.536788292s] Dec 24 13:24:12.587: INFO: Created: latency-svc-p76qc Dec 24 13:24:12.613: INFO: Got endpoints: latency-svc-p76qc [1.83286865s] Dec 24 13:24:12.687: INFO: Created: latency-svc-6jt7m Dec 24 13:24:12.791: INFO: Got endpoints: latency-svc-6jt7m [2.011535994s] Dec 24 13:24:12.845: INFO: Created: latency-svc-sxn7s Dec 24 13:24:12.873: INFO: Got endpoints: latency-svc-sxn7s [2.092559729s] Dec 24 13:24:12.967: INFO: Created: latency-svc-lrkhs Dec 24 13:24:12.973: INFO: Got endpoints: latency-svc-lrkhs [1.992038497s] Dec 24 13:24:13.054: INFO: Created: latency-svc-q7lhz Dec 24 13:24:13.169: INFO: Got endpoints: latency-svc-q7lhz [2.12843016s] Dec 24 13:24:13.201: INFO: Created: latency-svc-7g7rr Dec 24 13:24:13.240: INFO: Created: latency-svc-qq89k Dec 24 13:24:13.242: INFO: Got endpoints: latency-svc-7g7rr [2.177667192s] Dec 24 13:24:13.250: INFO: Got endpoints: latency-svc-qq89k [1.971165495s] Dec 24 13:24:13.377: INFO: Created: latency-svc-7qrz4 Dec 24 13:24:13.388: INFO: Got endpoints: latency-svc-7qrz4 [1.868735648s] Dec 24 13:24:13.475: INFO: Created: latency-svc-ld4vl Dec 24 13:24:13.560: INFO: Got endpoints: latency-svc-ld4vl [1.836819624s] Dec 24 13:24:13.601: INFO: Created: latency-svc-xnrh5 Dec 24 13:24:13.610: INFO: Got endpoints: latency-svc-xnrh5 [1.875331786s] Dec 24 13:24:13.669: INFO: Created: latency-svc-b99mz Dec 24 13:24:13.783: INFO: Got endpoints: latency-svc-b99mz [1.972618431s] Dec 24 13:24:13.844: INFO: Created: latency-svc-9qfgj Dec 24 13:24:13.860: INFO: Got endpoints: latency-svc-9qfgj [1.884163976s] Dec 24 13:24:13.986: INFO: Created: latency-svc-4jq8d Dec 24 13:24:13.988: INFO: Got endpoints: latency-svc-4jq8d [1.945201434s] Dec 24 13:24:14.058: INFO: Created: latency-svc-lhvzn Dec 24 13:24:14.067: INFO: Got endpoints: latency-svc-lhvzn [1.886494619s] Dec 24 13:24:14.170: INFO: Created: latency-svc-g9zsl Dec 24 13:24:14.180: INFO: Got endpoints: latency-svc-g9zsl [1.862348977s] Dec 24 13:24:14.246: INFO: Created: latency-svc-wntff Dec 24 13:24:14.318: INFO: Got endpoints: latency-svc-wntff [1.7051879s] Dec 24 13:24:14.340: INFO: Created: latency-svc-vxccp Dec 24 13:24:14.374: INFO: Got endpoints: latency-svc-vxccp [1.583194626s] Dec 24 
13:24:14.491: INFO: Created: latency-svc-t5cbc Dec 24 13:24:14.491: INFO: Got endpoints: latency-svc-t5cbc [1.61757156s] Dec 24 13:24:14.532: INFO: Created: latency-svc-945sh Dec 24 13:24:14.613: INFO: Got endpoints: latency-svc-945sh [1.639337794s] Dec 24 13:24:14.623: INFO: Created: latency-svc-2k2qr Dec 24 13:24:14.630: INFO: Got endpoints: latency-svc-2k2qr [1.460695795s] Dec 24 13:24:14.685: INFO: Created: latency-svc-4lpcm Dec 24 13:24:14.685: INFO: Got endpoints: latency-svc-4lpcm [1.442718391s] Dec 24 13:24:14.799: INFO: Created: latency-svc-qdw58 Dec 24 13:24:14.811: INFO: Got endpoints: latency-svc-qdw58 [1.561017757s] Dec 24 13:24:14.867: INFO: Created: latency-svc-lfmpf Dec 24 13:24:14.877: INFO: Got endpoints: latency-svc-lfmpf [1.489024888s] Dec 24 13:24:14.979: INFO: Created: latency-svc-mc6sf Dec 24 13:24:15.052: INFO: Got endpoints: latency-svc-mc6sf [1.492541843s] Dec 24 13:24:15.056: INFO: Created: latency-svc-tpx2f Dec 24 13:24:15.067: INFO: Got endpoints: latency-svc-tpx2f [1.456633608s] Dec 24 13:24:15.168: INFO: Created: latency-svc-zrwxb Dec 24 13:24:15.208: INFO: Got endpoints: latency-svc-zrwxb [1.424874566s] Dec 24 13:24:15.254: INFO: Created: latency-svc-vwtg9 Dec 24 13:24:15.255: INFO: Got endpoints: latency-svc-vwtg9 [1.394701535s] Dec 24 13:24:15.408: INFO: Created: latency-svc-9h9zq Dec 24 13:24:15.427: INFO: Got endpoints: latency-svc-9h9zq [1.439050287s] Dec 24 13:24:15.628: INFO: Created: latency-svc-mgpjw Dec 24 13:24:15.644: INFO: Got endpoints: latency-svc-mgpjw [1.576782755s] Dec 24 13:24:15.694: INFO: Created: latency-svc-6r4z4 Dec 24 13:24:15.702: INFO: Got endpoints: latency-svc-6r4z4 [1.521636479s] Dec 24 13:24:15.803: INFO: Created: latency-svc-chhxm Dec 24 13:24:15.812: INFO: Got endpoints: latency-svc-chhxm [1.494020209s] Dec 24 13:24:15.874: INFO: Created: latency-svc-z566z Dec 24 13:24:15.958: INFO: Got endpoints: latency-svc-z566z [1.583333645s] Dec 24 13:24:16.017: INFO: Created: latency-svc-nt8zl Dec 24 13:24:16.017: INFO: Created: latency-svc-28dc5 Dec 24 13:24:16.027: INFO: Got endpoints: latency-svc-28dc5 [1.414458342s] Dec 24 13:24:16.032: INFO: Got endpoints: latency-svc-nt8zl [1.540969993s] Dec 24 13:24:16.105: INFO: Created: latency-svc-8j4zw Dec 24 13:24:16.112: INFO: Got endpoints: latency-svc-8j4zw [1.482116375s] Dec 24 13:24:16.178: INFO: Created: latency-svc-t29kh Dec 24 13:24:16.182: INFO: Got endpoints: latency-svc-t29kh [1.496480855s] Dec 24 13:24:16.255: INFO: Created: latency-svc-77dcj Dec 24 13:24:16.261: INFO: Got endpoints: latency-svc-77dcj [1.449765474s] Dec 24 13:24:16.319: INFO: Created: latency-svc-x5b5x Dec 24 13:24:16.348: INFO: Got endpoints: latency-svc-x5b5x [1.47085509s] Dec 24 13:24:16.423: INFO: Created: latency-svc-5pj6l Dec 24 13:24:16.427: INFO: Got endpoints: latency-svc-5pj6l [1.374182237s] Dec 24 13:24:16.478: INFO: Created: latency-svc-9p9wf Dec 24 13:24:16.492: INFO: Got endpoints: latency-svc-9p9wf [1.425622059s] Dec 24 13:24:16.602: INFO: Created: latency-svc-h82qp Dec 24 13:24:16.616: INFO: Got endpoints: latency-svc-h82qp [1.407110461s] Dec 24 13:24:16.659: INFO: Created: latency-svc-ssx46 Dec 24 13:24:16.671: INFO: Got endpoints: latency-svc-ssx46 [1.415470848s] Dec 24 13:24:16.760: INFO: Created: latency-svc-2wgsb Dec 24 13:24:16.820: INFO: Got endpoints: latency-svc-2wgsb [1.393251159s] Dec 24 13:24:16.831: INFO: Created: latency-svc-4tcps Dec 24 13:24:16.832: INFO: Got endpoints: latency-svc-4tcps [1.188020349s] Dec 24 13:24:16.981: INFO: Created: latency-svc-pcmds Dec 24 13:24:17.021: INFO: 
Got endpoints: latency-svc-pcmds [1.318849312s] Dec 24 13:24:17.023: INFO: Created: latency-svc-rc8hx Dec 24 13:24:17.040: INFO: Got endpoints: latency-svc-rc8hx [1.227394785s] Dec 24 13:24:17.148: INFO: Created: latency-svc-k8bwt Dec 24 13:24:17.160: INFO: Got endpoints: latency-svc-k8bwt [1.201466705s] Dec 24 13:24:17.211: INFO: Created: latency-svc-nhnpt Dec 24 13:24:17.223: INFO: Got endpoints: latency-svc-nhnpt [1.195715049s] Dec 24 13:24:17.358: INFO: Created: latency-svc-gvbdz Dec 24 13:24:17.378: INFO: Got endpoints: latency-svc-gvbdz [1.346080092s] Dec 24 13:24:17.484: INFO: Created: latency-svc-2v5r6 Dec 24 13:24:17.499: INFO: Got endpoints: latency-svc-2v5r6 [1.38655724s] Dec 24 13:24:17.561: INFO: Created: latency-svc-2w6bf Dec 24 13:24:17.569: INFO: Got endpoints: latency-svc-2w6bf [1.387230775s] Dec 24 13:24:17.752: INFO: Created: latency-svc-8jkwg Dec 24 13:24:17.765: INFO: Got endpoints: latency-svc-8jkwg [1.503971976s] Dec 24 13:24:17.818: INFO: Created: latency-svc-fmbv6 Dec 24 13:24:17.891: INFO: Got endpoints: latency-svc-fmbv6 [1.54272156s] Dec 24 13:24:17.925: INFO: Created: latency-svc-p9nm8 Dec 24 13:24:17.948: INFO: Got endpoints: latency-svc-p9nm8 [1.521121211s] Dec 24 13:24:17.985: INFO: Created: latency-svc-bp5vq Dec 24 13:24:18.075: INFO: Got endpoints: latency-svc-bp5vq [1.582476581s] Dec 24 13:24:18.112: INFO: Created: latency-svc-b6tq4 Dec 24 13:24:18.124: INFO: Got endpoints: latency-svc-b6tq4 [1.508037652s] Dec 24 13:24:18.157: INFO: Created: latency-svc-mq7qq Dec 24 13:24:18.249: INFO: Got endpoints: latency-svc-mq7qq [1.578089472s] Dec 24 13:24:18.270: INFO: Created: latency-svc-22sf4 Dec 24 13:24:18.306: INFO: Got endpoints: latency-svc-22sf4 [1.485327699s] Dec 24 13:24:18.342: INFO: Created: latency-svc-4jccq Dec 24 13:24:18.440: INFO: Got endpoints: latency-svc-4jccq [1.607612319s] Dec 24 13:24:18.452: INFO: Created: latency-svc-cqt6m Dec 24 13:24:18.473: INFO: Got endpoints: latency-svc-cqt6m [1.452146865s] Dec 24 13:24:18.525: INFO: Created: latency-svc-rmn9t Dec 24 13:24:18.644: INFO: Created: latency-svc-74hmg Dec 24 13:24:18.644: INFO: Got endpoints: latency-svc-rmn9t [1.604164982s] Dec 24 13:24:18.650: INFO: Got endpoints: latency-svc-74hmg [1.489866569s] Dec 24 13:24:18.715: INFO: Created: latency-svc-7jksr Dec 24 13:24:18.725: INFO: Got endpoints: latency-svc-7jksr [1.50208501s] Dec 24 13:24:18.894: INFO: Created: latency-svc-bnbzh Dec 24 13:24:18.915: INFO: Got endpoints: latency-svc-bnbzh [1.536004114s] Dec 24 13:24:18.976: INFO: Created: latency-svc-624wm Dec 24 13:24:19.160: INFO: Got endpoints: latency-svc-624wm [1.660907587s] Dec 24 13:24:19.218: INFO: Created: latency-svc-bb48t Dec 24 13:24:19.227: INFO: Got endpoints: latency-svc-bb48t [1.658123236s] Dec 24 13:24:19.396: INFO: Created: latency-svc-454gv Dec 24 13:24:19.413: INFO: Got endpoints: latency-svc-454gv [1.648477123s] Dec 24 13:24:19.596: INFO: Created: latency-svc-884sh Dec 24 13:24:19.600: INFO: Got endpoints: latency-svc-884sh [1.708129095s] Dec 24 13:24:19.678: INFO: Created: latency-svc-hl7fv Dec 24 13:24:19.858: INFO: Got endpoints: latency-svc-hl7fv [1.909380136s] Dec 24 13:24:19.929: INFO: Created: latency-svc-7wmgx Dec 24 13:24:19.951: INFO: Got endpoints: latency-svc-7wmgx [1.875659271s] Dec 24 13:24:20.119: INFO: Created: latency-svc-hb2ls Dec 24 13:24:20.136: INFO: Got endpoints: latency-svc-hb2ls [2.012049458s] Dec 24 13:24:20.288: INFO: Created: latency-svc-9smd6 Dec 24 13:24:20.295: INFO: Got endpoints: latency-svc-9smd6 [2.045983331s] Dec 24 13:24:20.363: INFO: 
Created: latency-svc-5w4hp Dec 24 13:24:20.370: INFO: Got endpoints: latency-svc-5w4hp [2.063992297s] Dec 24 13:24:20.537: INFO: Created: latency-svc-d2lpw Dec 24 13:24:20.551: INFO: Got endpoints: latency-svc-d2lpw [2.111431605s] Dec 24 13:24:20.728: INFO: Created: latency-svc-h2clf Dec 24 13:24:20.748: INFO: Got endpoints: latency-svc-h2clf [2.274600237s] Dec 24 13:24:20.827: INFO: Created: latency-svc-6mjh5 Dec 24 13:24:20.941: INFO: Got endpoints: latency-svc-6mjh5 [2.296299768s] Dec 24 13:24:20.948: INFO: Created: latency-svc-4lvh8 Dec 24 13:24:20.954: INFO: Got endpoints: latency-svc-4lvh8 [2.304091614s] Dec 24 13:24:21.019: INFO: Created: latency-svc-86q5g Dec 24 13:24:21.224: INFO: Got endpoints: latency-svc-86q5g [2.498626279s] Dec 24 13:24:21.232: INFO: Created: latency-svc-44q8m Dec 24 13:24:21.256: INFO: Got endpoints: latency-svc-44q8m [2.341249897s] Dec 24 13:24:21.296: INFO: Created: latency-svc-twgbr Dec 24 13:24:21.309: INFO: Got endpoints: latency-svc-twgbr [2.148788165s] Dec 24 13:24:21.709: INFO: Created: latency-svc-k9ztd Dec 24 13:24:21.719: INFO: Got endpoints: latency-svc-k9ztd [2.491111832s] Dec 24 13:24:21.776: INFO: Created: latency-svc-l94zk Dec 24 13:24:21.790: INFO: Got endpoints: latency-svc-l94zk [2.376271081s] Dec 24 13:24:21.919: INFO: Created: latency-svc-5t7c6 Dec 24 13:24:21.925: INFO: Got endpoints: latency-svc-5t7c6 [2.325283582s] Dec 24 13:24:21.984: INFO: Created: latency-svc-tzssw Dec 24 13:24:22.121: INFO: Created: latency-svc-hkbnp Dec 24 13:24:22.122: INFO: Got endpoints: latency-svc-tzssw [2.263413412s] Dec 24 13:24:22.138: INFO: Got endpoints: latency-svc-hkbnp [2.186507433s] Dec 24 13:24:22.188: INFO: Created: latency-svc-gth8v Dec 24 13:24:22.202: INFO: Got endpoints: latency-svc-gth8v [2.065769842s] Dec 24 13:24:22.322: INFO: Created: latency-svc-bn2tj Dec 24 13:24:22.349: INFO: Got endpoints: latency-svc-bn2tj [2.05392476s] Dec 24 13:24:22.538: INFO: Created: latency-svc-rrkv8 Dec 24 13:24:22.564: INFO: Got endpoints: latency-svc-rrkv8 [2.192999654s] Dec 24 13:24:22.627: INFO: Created: latency-svc-jmn9n Dec 24 13:24:22.639: INFO: Got endpoints: latency-svc-jmn9n [2.086973509s] Dec 24 13:24:22.811: INFO: Created: latency-svc-84cw9 Dec 24 13:24:22.817: INFO: Got endpoints: latency-svc-84cw9 [2.067953804s] Dec 24 13:24:22.856: INFO: Created: latency-svc-k9d69 Dec 24 13:24:22.864: INFO: Got endpoints: latency-svc-k9d69 [1.923250717s] Dec 24 13:24:22.998: INFO: Created: latency-svc-h2k8c Dec 24 13:24:23.050: INFO: Got endpoints: latency-svc-h2k8c [2.095848059s] Dec 24 13:24:23.053: INFO: Created: latency-svc-w7kc5 Dec 24 13:24:23.083: INFO: Got endpoints: latency-svc-w7kc5 [1.858980558s] Dec 24 13:24:23.211: INFO: Created: latency-svc-z6cjq Dec 24 13:24:23.250: INFO: Got endpoints: latency-svc-z6cjq [1.994022593s] Dec 24 13:24:23.457: INFO: Created: latency-svc-2kzpn Dec 24 13:24:23.463: INFO: Got endpoints: latency-svc-2kzpn [2.154534906s] Dec 24 13:24:23.689: INFO: Created: latency-svc-t6v7x Dec 24 13:24:23.700: INFO: Got endpoints: latency-svc-t6v7x [1.981434581s] Dec 24 13:24:23.763: INFO: Created: latency-svc-g9cqw Dec 24 13:24:23.858: INFO: Got endpoints: latency-svc-g9cqw [2.068039093s] Dec 24 13:24:23.955: INFO: Created: latency-svc-b4xc5 Dec 24 13:24:24.067: INFO: Got endpoints: latency-svc-b4xc5 [2.142245242s] Dec 24 13:24:24.123: INFO: Created: latency-svc-pkml2 Dec 24 13:24:24.127: INFO: Got endpoints: latency-svc-pkml2 [2.005041186s] Dec 24 13:24:24.302: INFO: Created: latency-svc-vrrj2 Dec 24 13:24:24.311: INFO: Got endpoints: 
latency-svc-vrrj2 [2.173072836s] Dec 24 13:24:24.365: INFO: Created: latency-svc-ztjqv Dec 24 13:24:24.381: INFO: Got endpoints: latency-svc-ztjqv [2.178048125s] Dec 24 13:24:24.504: INFO: Created: latency-svc-xr47w Dec 24 13:24:24.511: INFO: Got endpoints: latency-svc-xr47w [2.161975731s] Dec 24 13:24:24.553: INFO: Created: latency-svc-nhvxp Dec 24 13:24:24.572: INFO: Got endpoints: latency-svc-nhvxp [2.008364315s] Dec 24 13:24:24.713: INFO: Created: latency-svc-w4ln8 Dec 24 13:24:24.729: INFO: Got endpoints: latency-svc-w4ln8 [2.090285248s] Dec 24 13:24:24.779: INFO: Created: latency-svc-tcdth Dec 24 13:24:24.789: INFO: Got endpoints: latency-svc-tcdth [1.972489725s] Dec 24 13:24:24.919: INFO: Created: latency-svc-bd8wz Dec 24 13:24:25.156: INFO: Got endpoints: latency-svc-bd8wz [2.291882289s] Dec 24 13:24:25.160: INFO: Created: latency-svc-mdj69 Dec 24 13:24:25.204: INFO: Got endpoints: latency-svc-mdj69 [2.153661404s] Dec 24 13:24:25.444: INFO: Created: latency-svc-4xdlg Dec 24 13:24:25.451: INFO: Got endpoints: latency-svc-4xdlg [2.367420281s] Dec 24 13:24:25.514: INFO: Created: latency-svc-smsk9 Dec 24 13:24:25.616: INFO: Got endpoints: latency-svc-smsk9 [2.365435722s] Dec 24 13:24:25.641: INFO: Created: latency-svc-7s7hc Dec 24 13:24:25.659: INFO: Got endpoints: latency-svc-7s7hc [2.195236978s] Dec 24 13:24:25.846: INFO: Created: latency-svc-fs8vj Dec 24 13:24:25.900: INFO: Got endpoints: latency-svc-fs8vj [2.199557898s] Dec 24 13:24:25.912: INFO: Created: latency-svc-ptpll Dec 24 13:24:25.914: INFO: Got endpoints: latency-svc-ptpll [2.055629642s] Dec 24 13:24:26.047: INFO: Created: latency-svc-986h5 Dec 24 13:24:26.056: INFO: Got endpoints: latency-svc-986h5 [1.988201269s] Dec 24 13:24:26.097: INFO: Created: latency-svc-czfzs Dec 24 13:24:26.107: INFO: Got endpoints: latency-svc-czfzs [1.979933385s] Dec 24 13:24:26.216: INFO: Created: latency-svc-4v4tm Dec 24 13:24:26.231: INFO: Got endpoints: latency-svc-4v4tm [1.919275931s] Dec 24 13:24:26.270: INFO: Created: latency-svc-gmf7k Dec 24 13:24:26.279: INFO: Got endpoints: latency-svc-gmf7k [1.898008588s] Dec 24 13:24:26.308: INFO: Created: latency-svc-8lnd2 Dec 24 13:24:26.386: INFO: Got endpoints: latency-svc-8lnd2 [1.874752152s] Dec 24 13:24:26.408: INFO: Created: latency-svc-nwt5p Dec 24 13:24:26.414: INFO: Got endpoints: latency-svc-nwt5p [1.84163017s] Dec 24 13:24:26.480: INFO: Created: latency-svc-57bx6 Dec 24 13:24:26.668: INFO: Got endpoints: latency-svc-57bx6 [1.938203467s] Dec 24 13:24:26.674: INFO: Created: latency-svc-9kjrl Dec 24 13:24:26.764: INFO: Got endpoints: latency-svc-9kjrl [1.974331251s] Dec 24 13:24:26.795: INFO: Created: latency-svc-ztkhd Dec 24 13:24:26.827: INFO: Got endpoints: latency-svc-ztkhd [1.670716355s] Dec 24 13:24:26.829: INFO: Created: latency-svc-6nk6j Dec 24 13:24:26.935: INFO: Created: latency-svc-b9r7w Dec 24 13:24:26.935: INFO: Got endpoints: latency-svc-6nk6j [1.730636511s] Dec 24 13:24:26.959: INFO: Got endpoints: latency-svc-b9r7w [1.507855904s] Dec 24 13:24:26.989: INFO: Created: latency-svc-hjplc Dec 24 13:24:27.001: INFO: Got endpoints: latency-svc-hjplc [1.384283906s] Dec 24 13:24:27.144: INFO: Created: latency-svc-prfh4 Dec 24 13:24:27.179: INFO: Got endpoints: latency-svc-prfh4 [1.520297509s] Dec 24 13:24:27.185: INFO: Created: latency-svc-h7x6r Dec 24 13:24:27.210: INFO: Got endpoints: latency-svc-h7x6r [1.309999719s] Dec 24 13:24:27.245: INFO: Created: latency-svc-dvtq4 Dec 24 13:24:27.398: INFO: Got endpoints: latency-svc-dvtq4 [1.483803481s] Dec 24 13:24:27.441: INFO: Created: 
latency-svc-4v5cf
Dec 24 13:24:27.456: INFO: Got endpoints: latency-svc-4v5cf [1.399949784s]
Dec 24 13:24:27.500: INFO: Created: latency-svc-hpqvk
Dec 24 13:24:27.622: INFO: Got endpoints: latency-svc-hpqvk [1.514843314s]
Dec 24 13:24:27.641: INFO: Created: latency-svc-tpg58
Dec 24 13:24:27.647: INFO: Got endpoints: latency-svc-tpg58 [1.416583788s]
Dec 24 13:24:27.859: INFO: Created: latency-svc-zcdtl
Dec 24 13:24:27.877: INFO: Got endpoints: latency-svc-zcdtl [1.597902293s]
Dec 24 13:24:27.908: INFO: Created: latency-svc-nxxkb
Dec 24 13:24:27.918: INFO: Got endpoints: latency-svc-nxxkb [1.53098657s]
Dec 24 13:24:27.945: INFO: Created: latency-svc-wftw4
Dec 24 13:24:28.061: INFO: Got endpoints: latency-svc-wftw4 [1.646295162s]
Dec 24 13:24:28.112: INFO: Created: latency-svc-6m8hr
Dec 24 13:24:28.113: INFO: Got endpoints: latency-svc-6m8hr [1.445648234s]
Dec 24 13:24:28.145: INFO: Created: latency-svc-h76c2
Dec 24 13:24:28.147: INFO: Got endpoints: latency-svc-h76c2 [1.383446651s]
Dec 24 13:24:28.280: INFO: Created: latency-svc-zjrs5
Dec 24 13:24:28.287: INFO: Got endpoints: latency-svc-zjrs5 [1.459389717s]
Dec 24 13:24:28.330: INFO: Created: latency-svc-txsq6
Dec 24 13:24:28.342: INFO: Got endpoints: latency-svc-txsq6 [1.406434552s]
Dec 24 13:24:28.450: INFO: Created: latency-svc-255vk
Dec 24 13:24:28.472: INFO: Got endpoints: latency-svc-255vk [1.513186903s]
Dec 24 13:24:28.534: INFO: Created: latency-svc-ckvmj
Dec 24 13:24:28.640: INFO: Got endpoints: latency-svc-ckvmj [1.639530221s]
Dec 24 13:24:28.650: INFO: Created: latency-svc-dx727
Dec 24 13:24:28.668: INFO: Got endpoints: latency-svc-dx727 [1.488344358s]
Dec 24 13:24:28.808: INFO: Created: latency-svc-t5zw5
Dec 24 13:24:28.840: INFO: Got endpoints: latency-svc-t5zw5 [1.630092447s]
Dec 24 13:24:28.884: INFO: Created: latency-svc-hcvx4
Dec 24 13:24:28.949: INFO: Got endpoints: latency-svc-hcvx4 [1.550737841s]
Dec 24 13:24:29.003: INFO: Created: latency-svc-jsqpn
Dec 24 13:24:29.013: INFO: Got endpoints: latency-svc-jsqpn [1.556983645s]
Dec 24 13:24:29.178: INFO: Created: latency-svc-66kh5
Dec 24 13:24:29.189: INFO: Got endpoints: latency-svc-66kh5 [1.566439197s]
Dec 24 13:24:29.249: INFO: Created: latency-svc-dcz96
Dec 24 13:24:29.250: INFO: Got endpoints: latency-svc-dcz96 [1.60223832s]
Dec 24 13:24:29.336: INFO: Created: latency-svc-xb7r2
Dec 24 13:24:29.339: INFO: Got endpoints: latency-svc-xb7r2 [1.461273602s]
Dec 24 13:24:29.491: INFO: Created: latency-svc-28nmz
Dec 24 13:24:29.500: INFO: Got endpoints: latency-svc-28nmz [1.582504924s]
Dec 24 13:24:29.540: INFO: Created: latency-svc-hg4r7
Dec 24 13:24:29.581: INFO: Created: latency-svc-8fr2n
Dec 24 13:24:29.583: INFO: Got endpoints: latency-svc-hg4r7 [1.522062444s]
Dec 24 13:24:29.664: INFO: Got endpoints: latency-svc-8fr2n [1.550256168s]
Dec 24 13:24:29.683: INFO: Created: latency-svc-ftzzb
Dec 24 13:24:29.703: INFO: Got endpoints: latency-svc-ftzzb [1.555635064s]
Dec 24 13:24:29.755: INFO: Created: latency-svc-kc2fl
Dec 24 13:24:29.876: INFO: Got endpoints: latency-svc-kc2fl [1.588598737s]
Dec 24 13:24:29.906: INFO: Created: latency-svc-8hgtv
Dec 24 13:24:29.912: INFO: Got endpoints: latency-svc-8hgtv [1.569403991s]
Dec 24 13:24:30.047: INFO: Created: latency-svc-pjg6k
Dec 24 13:24:30.059: INFO: Got endpoints: latency-svc-pjg6k [1.587014036s]
Dec 24 13:24:30.130: INFO: Created: latency-svc-2lq8v
Dec 24 13:24:30.324: INFO: Got endpoints: latency-svc-2lq8v [1.683001385s]
Dec 24 13:24:30.329: INFO: Created: latency-svc-wh4pt
Dec 24 13:24:30.365: INFO: Got endpoints: latency-svc-wh4pt [1.696343278s]
Dec 24 13:24:30.504: INFO: Created: latency-svc-876vn
Dec 24 13:24:30.521: INFO: Got endpoints: latency-svc-876vn [1.680220588s]
Dec 24 13:24:30.587: INFO: Created: latency-svc-zdkjx
Dec 24 13:24:30.678: INFO: Got endpoints: latency-svc-zdkjx [1.728531566s]
Dec 24 13:24:30.730: INFO: Created: latency-svc-gdt6c
Dec 24 13:24:30.746: INFO: Got endpoints: latency-svc-gdt6c [1.73286269s]
Dec 24 13:24:30.828: INFO: Created: latency-svc-txd8h
Dec 24 13:24:30.841: INFO: Got endpoints: latency-svc-txd8h [1.652179382s]
Dec 24 13:24:30.878: INFO: Created: latency-svc-h2wk8
Dec 24 13:24:30.894: INFO: Got endpoints: latency-svc-h2wk8 [1.6436992s]
Dec 24 13:24:30.986: INFO: Created: latency-svc-bnkmv
Dec 24 13:24:30.998: INFO: Got endpoints: latency-svc-bnkmv [1.659257944s]
Dec 24 13:24:31.064: INFO: Created: latency-svc-ghjwm
Dec 24 13:24:31.067: INFO: Got endpoints: latency-svc-ghjwm [1.566438053s]
Dec 24 13:24:31.226: INFO: Created: latency-svc-wpwg6
Dec 24 13:24:31.272: INFO: Got endpoints: latency-svc-wpwg6 [1.688567397s]
Dec 24 13:24:31.276: INFO: Created: latency-svc-qhjn9
Dec 24 13:24:31.285: INFO: Got endpoints: latency-svc-qhjn9 [1.620585692s]
Dec 24 13:24:31.476: INFO: Created: latency-svc-mzqvm
Dec 24 13:24:31.476: INFO: Got endpoints: latency-svc-mzqvm [1.772717196s]
Dec 24 13:24:31.525: INFO: Created: latency-svc-8jz52
Dec 24 13:24:31.556: INFO: Got endpoints: latency-svc-8jz52 [1.680010228s]
Dec 24 13:24:31.631: INFO: Created: latency-svc-mgbtx
Dec 24 13:24:31.664: INFO: Got endpoints: latency-svc-mgbtx [1.751711236s]
Dec 24 13:24:31.670: INFO: Created: latency-svc-2286n
Dec 24 13:24:31.677: INFO: Got endpoints: latency-svc-2286n [1.617013303s]
Dec 24 13:24:31.714: INFO: Created: latency-svc-zcw5r
Dec 24 13:24:31.724: INFO: Got endpoints: latency-svc-zcw5r [1.399795802s]
Dec 24 13:24:31.836: INFO: Created: latency-svc-m8hl6
Dec 24 13:24:31.837: INFO: Got endpoints: latency-svc-m8hl6 [1.472485074s]
Dec 24 13:24:31.884: INFO: Created: latency-svc-hkmb9
Dec 24 13:24:31.941: INFO: Got endpoints: latency-svc-hkmb9 [1.420267597s]
Dec 24 13:24:31.974: INFO: Created: latency-svc-mxmwv
Dec 24 13:24:31.992: INFO: Got endpoints: latency-svc-mxmwv [1.314011998s]
Dec 24 13:24:32.016: INFO: Created: latency-svc-v6cxd
Dec 24 13:24:32.100: INFO: Got endpoints: latency-svc-v6cxd [1.353438024s]
Dec 24 13:24:32.118: INFO: Created: latency-svc-qcl2r
Dec 24 13:24:32.126: INFO: Got endpoints: latency-svc-qcl2r [1.284397065s]
Dec 24 13:24:32.180: INFO: Created: latency-svc-gq2zb
Dec 24 13:24:32.181: INFO: Got endpoints: latency-svc-gq2zb [1.287215877s]
Dec 24 13:24:32.301: INFO: Created: latency-svc-g7c9q
Dec 24 13:24:32.314: INFO: Got endpoints: latency-svc-g7c9q [1.315484693s]
Dec 24 13:24:32.378: INFO: Created: latency-svc-89nn6
Dec 24 13:24:32.470: INFO: Got endpoints: latency-svc-89nn6 [1.402993214s]
Dec 24 13:24:32.475: INFO: Created: latency-svc-j7wqt
Dec 24 13:24:32.510: INFO: Got endpoints: latency-svc-j7wqt [1.237047941s]
Dec 24 13:24:32.524: INFO: Created: latency-svc-hqk9b
Dec 24 13:24:32.525: INFO: Got endpoints: latency-svc-hqk9b [1.240335502s]
Dec 24 13:24:32.644: INFO: Created: latency-svc-6mqrc
Dec 24 13:24:32.653: INFO: Got endpoints: latency-svc-6mqrc [1.176848798s]
Dec 24 13:24:32.660: INFO: Created: latency-svc-4qh5w
Dec 24 13:24:32.666: INFO: Got endpoints: latency-svc-4qh5w [1.109357103s]
Dec 24 13:24:32.719: INFO: Created: latency-svc-s6dfc
Dec 24 13:24:32.832: INFO: Got endpoints: latency-svc-s6dfc [1.167777392s]
Dec 24 13:24:32.840: INFO: Created: latency-svc-bt4hk
Dec 24 13:24:32.854: INFO: Got endpoints: latency-svc-bt4hk [1.177474355s]
Dec 24 13:24:32.898: INFO: Created: latency-svc-dhq9t
Dec 24 13:24:32.909: INFO: Got endpoints: latency-svc-dhq9t [1.184388704s]
Dec 24 13:24:33.030: INFO: Created: latency-svc-5s85g
Dec 24 13:24:33.033: INFO: Got endpoints: latency-svc-5s85g [1.195737238s]
Dec 24 13:24:33.697: INFO: Created: latency-svc-fbdrs
Dec 24 13:24:33.707: INFO: Got endpoints: latency-svc-fbdrs [1.765537197s]
Dec 24 13:24:33.787: INFO: Created: latency-svc-dxzjx
Dec 24 13:24:33.853: INFO: Got endpoints: latency-svc-dxzjx [1.86085243s]
Dec 24 13:24:33.854: INFO: Latencies: [178.242729ms 201.098455ms 260.005803ms 283.681558ms 498.100544ms 738.227328ms 776.940862ms 943.637506ms 1.029522928s 1.109357103s 1.167777392s 1.176848798s 1.177474355s 1.184388704s 1.188020349s 1.195715049s 1.195737238s 1.195814076s 1.201466705s 1.227394785s 1.237047941s 1.240335502s 1.26272183s 1.284397065s 1.287215877s 1.309999719s 1.314011998s 1.315484693s 1.318849312s 1.346080092s 1.353438024s 1.374182237s 1.383446651s 1.384283906s 1.38655724s 1.387230775s 1.393251159s 1.394701535s 1.399121564s 1.399795802s 1.399949784s 1.402993214s 1.406434552s 1.407110461s 1.414458342s 1.415470848s 1.416583788s 1.420267597s 1.424874566s 1.425622059s 1.439050287s 1.442718391s 1.445648234s 1.449765474s 1.452146865s 1.456633608s 1.459389717s 1.460695795s 1.461273602s 1.47085509s 1.472485074s 1.482116375s 1.483803481s 1.485327699s 1.488344358s 1.489024888s 1.489866569s 1.492541843s 1.494020209s 1.496480855s 1.50208501s 1.503971976s 1.507855904s 1.508037652s 1.513186903s 1.514843314s 1.520297509s 1.521121211s 1.521636479s 1.522062444s 1.53098657s 1.536004114s 1.536788292s 1.540969993s 1.54272156s 1.550256168s 1.550737841s 1.555635064s 1.556983645s 1.561017757s 1.566438053s 1.566439197s 1.569403991s 1.576782755s 1.578089472s 1.582476581s 1.582504924s 1.583194626s 1.583333645s 1.587014036s 1.588598737s 1.597902293s 1.60223832s 1.604164982s 1.607612319s 1.617013303s 1.61757156s 1.620585692s 1.630092447s 1.639337794s 1.639530221s 1.6436992s 1.646295162s 1.648477123s 1.652179382s 1.658123236s 1.659257944s 1.660907587s 1.670716355s 1.680010228s 1.680220588s 1.683001385s 1.688567397s 1.696343278s 1.7051879s 1.708129095s 1.728531566s 1.730636511s 1.73286269s 1.751711236s 1.765537197s 1.772717196s 1.83286865s 1.836819624s 1.84163017s 1.858980558s 1.86085243s 1.862348977s 1.868735648s 1.874752152s 1.875331786s 1.875659271s 1.884163976s 1.886494619s 1.898008588s 1.909380136s 1.919275931s 1.923250717s 1.938203467s 1.945201434s 1.971165495s 1.972489725s 1.972618431s 1.974331251s 1.979933385s 1.981434581s 1.988201269s 1.992038497s 1.994022593s 2.005041186s 2.008364315s 2.011535994s 2.012049458s 2.045983331s 2.05392476s 2.055629642s 2.063992297s 2.065769842s 2.067953804s 2.068039093s 2.086973509s 2.090285248s 2.092559729s 2.095848059s 2.111431605s 2.12843016s 2.142245242s 2.148788165s 2.153661404s 2.154534906s 2.161975731s 2.173072836s 2.177667192s 2.178048125s 2.186507433s 2.192999654s 2.195236978s 2.199557898s 2.263413412s 2.274600237s 2.291882289s 2.296299768s 2.304091614s 2.325283582s 2.341249897s 2.365435722s 2.367420281s 2.376271081s 2.491111832s 2.498626279s]
Dec 24 13:24:33.854: INFO: 50 %ile: 1.588598737s
Dec 24 13:24:33.854: INFO: 90 %ile: 2.161975731s
Dec 24 13:24:33.854: INFO: 99 %ile: 2.491111832s
Dec 24 13:24:33.854: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:24:33.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5847" for this suite.
Dec 24 13:25:11.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:25:12.094: INFO: namespace svc-latency-5847 deletion completed in 38.221850605s

• [SLOW TEST:69.766 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:25:12.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 13:25:12.235: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2330364f-f188-47fa-9853-28d080b457e1" in namespace "projected-4604" to be "success or failure"
Dec 24 13:25:12.254: INFO: Pod "downwardapi-volume-2330364f-f188-47fa-9853-28d080b457e1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.319853ms
Dec 24 13:25:14.264: INFO: Pod "downwardapi-volume-2330364f-f188-47fa-9853-28d080b457e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029043992s
Dec 24 13:25:16.275: INFO: Pod "downwardapi-volume-2330364f-f188-47fa-9853-28d080b457e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039828709s
Dec 24 13:25:18.285: INFO: Pod "downwardapi-volume-2330364f-f188-47fa-9853-28d080b457e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049491951s
Dec 24 13:25:20.294: INFO: Pod "downwardapi-volume-2330364f-f188-47fa-9853-28d080b457e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058549317s
STEP: Saw pod success
Dec 24 13:25:20.294: INFO: Pod "downwardapi-volume-2330364f-f188-47fa-9853-28d080b457e1" satisfied condition "success or failure"
Dec 24 13:25:20.298: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2330364f-f188-47fa-9853-28d080b457e1 container client-container: 
STEP: delete the pod
Dec 24 13:25:20.403: INFO: Waiting for pod downwardapi-volume-2330364f-f188-47fa-9853-28d080b457e1 to disappear
Dec 24 13:25:20.421: INFO: Pod downwardapi-volume-2330364f-f188-47fa-9853-28d080b457e1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:25:20.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4604" for this suite.
Dec 24 13:25:26.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:25:26.605: INFO: namespace projected-4604 deletion completed in 6.175928941s

• [SLOW TEST:14.511 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:25:26.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 24 13:25:34.032: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:25:34.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3110" for this suite.
Dec 24 13:25:40.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:25:40.281: INFO: namespace container-runtime-3110 deletion completed in 6.124618921s

• [SLOW TEST:13.676 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:25:40.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 13:25:40.382: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 17.076758ms)
Dec 24 13:25:40.446: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 63.333694ms)
Dec 24 13:25:40.456: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.674241ms)
Dec 24 13:25:40.464: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.626247ms)
Dec 24 13:25:40.471: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.753369ms)
Dec 24 13:25:40.482: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.233204ms)
Dec 24 13:25:40.490: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.920903ms)
Dec 24 13:25:40.495: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.261744ms)
Dec 24 13:25:40.503: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.220503ms)
Dec 24 13:25:40.509: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.125284ms)
Dec 24 13:25:40.515: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.071616ms)
Dec 24 13:25:40.521: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.865256ms)
Dec 24 13:25:40.527: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.439005ms)
Dec 24 13:25:40.534: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.919197ms)
Dec 24 13:25:40.540: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.604486ms)
Dec 24 13:25:40.546: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.522776ms)
Dec 24 13:25:40.553: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.732936ms)
Dec 24 13:25:40.564: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.575345ms)
Dec 24 13:25:40.569: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.754218ms)
Dec 24 13:25:40.573: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.524122ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:25:40.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-705" for this suite.
Dec 24 13:25:46.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:25:46.716: INFO: namespace proxy-705 deletion completed in 6.137937985s

• [SLOW TEST:6.435 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
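For readers retracing this outside the harness: each numbered (0)..(19) request above is a GET against the node's proxy subresource, and the figure in parentheses is the round-trip time. A minimal client-go sketch of the same call, assuming the kubeconfig path and node name shown in this log (this is not the suite's actual code):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/nodes/iruya-node/proxy/logs/ (the URL printed in each numbered line).
	body, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("iruya-node").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.TODO()) // client-go releases before v0.19 take no context argument here
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body)
}

The test simply issues this request twenty times and checks that each attempt returns 200 with a directory listing of the node's /var/log.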
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:25:46.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 13:25:47.005: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 24 13:25:47.017: INFO: Number of nodes with available pods: 0
Dec 24 13:25:47.017: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 24 13:25:47.110: INFO: Number of nodes with available pods: 0
Dec 24 13:25:47.110: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:25:48.123: INFO: Number of nodes with available pods: 0
Dec 24 13:25:48.123: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:25:49.129: INFO: Number of nodes with available pods: 0
Dec 24 13:25:49.129: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:25:50.122: INFO: Number of nodes with available pods: 0
Dec 24 13:25:50.122: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:25:51.135: INFO: Number of nodes with available pods: 0
Dec 24 13:25:51.135: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:25:52.117: INFO: Number of nodes with available pods: 0
Dec 24 13:25:52.117: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:25:53.295: INFO: Number of nodes with available pods: 0
Dec 24 13:25:53.295: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:25:54.153: INFO: Number of nodes with available pods: 0
Dec 24 13:25:54.153: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:25:55.177: INFO: Number of nodes with available pods: 0
Dec 24 13:25:55.177: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:25:56.133: INFO: Number of nodes with available pods: 1
Dec 24 13:25:56.133: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 24 13:25:56.202: INFO: Number of nodes with available pods: 1
Dec 24 13:25:56.202: INFO: Number of running nodes: 0, number of available pods: 1
Dec 24 13:25:57.209: INFO: Number of nodes with available pods: 0
Dec 24 13:25:57.209: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 24 13:25:57.257: INFO: Number of nodes with available pods: 0
Dec 24 13:25:57.257: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:25:58.270: INFO: Number of nodes with available pods: 0
Dec 24 13:25:58.270: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:25:59.269: INFO: Number of nodes with available pods: 0
Dec 24 13:25:59.269: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:26:00.272: INFO: Number of nodes with available pods: 0
Dec 24 13:26:00.272: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:26:01.281: INFO: Number of nodes with available pods: 0
Dec 24 13:26:01.281: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:26:02.269: INFO: Number of nodes with available pods: 0
Dec 24 13:26:02.269: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:26:03.265: INFO: Number of nodes with available pods: 0
Dec 24 13:26:03.265: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:26:04.276: INFO: Number of nodes with available pods: 0
Dec 24 13:26:04.276: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:26:05.274: INFO: Number of nodes with available pods: 0
Dec 24 13:26:05.274: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:26:06.274: INFO: Number of nodes with available pods: 0
Dec 24 13:26:06.274: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:26:07.290: INFO: Number of nodes with available pods: 0
Dec 24 13:26:07.290: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:26:08.272: INFO: Number of nodes with available pods: 0
Dec 24 13:26:08.272: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:26:09.271: INFO: Number of nodes with available pods: 0
Dec 24 13:26:09.271: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:26:10.266: INFO: Number of nodes with available pods: 0
Dec 24 13:26:10.266: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:26:11.267: INFO: Number of nodes with available pods: 1
Dec 24 13:26:11.267: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2568, will wait for the garbage collector to delete the pods
Dec 24 13:26:11.395: INFO: Deleting DaemonSet.extensions daemon-set took: 30.626286ms
Dec 24 13:26:11.695: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.966047ms
Dec 24 13:26:26.709: INFO: Number of nodes with available pods: 0
Dec 24 13:26:26.709: INFO: Number of running nodes: 0, number of available pods: 0
Dec 24 13:26:26.717: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2568/daemonsets","resourceVersion":"17892292"},"items":null}

Dec 24 13:26:26.721: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2568/pods","resourceVersion":"17892292"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:26:26.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2568" for this suite.
Dec 24 13:26:32.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:26:32.969: INFO: namespace daemonsets-2568 deletion completed in 6.15338025s

• [SLOW TEST:46.253 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
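The "complex daemon" being run and stopped above is, in outline, a DaemonSet whose pod template carries a nodeSelector: relabeling the node from blue to green is what unschedules the pod, and the strategy is flipped to RollingUpdate partway through. A rough sketch of such an object, with the label key, image, and names chosen for illustration rather than lifted from the test source:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var daemonSet = &appsv1.DaemonSet{
	ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
	Spec: appsv1.DaemonSetSpec{
		Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "daemon-set"}},
		// The test switches the update strategy to RollingUpdate mid-run.
		UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
		Template: corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "daemon-set"}},
			Spec: corev1.PodSpec{
				// Only nodes labeled color=blue schedule the daemon pod; relabeling
				// the node to color=green evicts it until the selector is updated too.
				NodeSelector: map[string]string{"color": "blue"},
				Containers: []corev1.Container{{
					Name:  "app",
					Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // placeholder image
				}},
			},
		},
	},
}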
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:26:32.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-kzc6
STEP: Creating a pod to test atomic-volume-subpath
Dec 24 13:26:33.082: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kzc6" in namespace "subpath-6778" to be "success or failure"
Dec 24 13:26:33.091: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.126609ms
Dec 24 13:26:35.105: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022719283s
Dec 24 13:26:37.111: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028812372s
Dec 24 13:26:39.121: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038400316s
Dec 24 13:26:41.134: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Running", Reason="", readiness=true. Elapsed: 8.051393065s
Dec 24 13:26:43.142: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Running", Reason="", readiness=true. Elapsed: 10.060096499s
Dec 24 13:26:45.154: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Running", Reason="", readiness=true. Elapsed: 12.07197038s
Dec 24 13:26:47.162: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Running", Reason="", readiness=true. Elapsed: 14.080166324s
Dec 24 13:26:49.173: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Running", Reason="", readiness=true. Elapsed: 16.090586561s
Dec 24 13:26:51.180: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Running", Reason="", readiness=true. Elapsed: 18.097746053s
Dec 24 13:26:53.310: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Running", Reason="", readiness=true. Elapsed: 20.227918334s
Dec 24 13:26:55.317: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Running", Reason="", readiness=true. Elapsed: 22.23503192s
Dec 24 13:26:57.329: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Running", Reason="", readiness=true. Elapsed: 24.246278446s
Dec 24 13:26:59.342: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Running", Reason="", readiness=true. Elapsed: 26.259908488s
Dec 24 13:27:01.358: INFO: Pod "pod-subpath-test-secret-kzc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.275911596s
STEP: Saw pod success
Dec 24 13:27:01.358: INFO: Pod "pod-subpath-test-secret-kzc6" satisfied condition "success or failure"
Dec 24 13:27:01.364: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-kzc6 container test-container-subpath-secret-kzc6: 
STEP: delete the pod
Dec 24 13:27:01.423: INFO: Waiting for pod pod-subpath-test-secret-kzc6 to disappear
Dec 24 13:27:01.429: INFO: Pod pod-subpath-test-secret-kzc6 no longer exists
STEP: Deleting pod pod-subpath-test-secret-kzc6
Dec 24 13:27:01.429: INFO: Deleting pod "pod-subpath-test-secret-kzc6" in namespace "subpath-6778"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:27:01.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6778" for this suite.
Dec 24 13:27:07.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:27:07.637: INFO: namespace subpath-6778 deletion completed in 6.195960306s

• [SLOW TEST:34.667 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
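The atomic-writer subpath case mounts a single key of a secret via subPath instead of the whole secret directory; the container keeps reading that file while the kubelet atomically updates the underlying volume, which is why the pod sits in Running for most of the 28 seconds above. A hedged sketch of the pod shape, with the secret name, key, and command as placeholders:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var subpathPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name:         "test-volume",
			VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}},
		}},
		Containers: []corev1.Container{{
			Name:    "test-container-subpath-secret",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /test-volume; sleep 30"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "test-volume",
				MountPath: "/test-volume",
				// SubPath mounts one entry of the volume as a file, which is
				// exactly the path the atomic-writer machinery exercises.
				SubPath: "secret-key",
			}},
		}},
	},
}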
SSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:27:07.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-45
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-45 to expose endpoints map[]
Dec 24 13:27:07.847: INFO: successfully validated that service multi-endpoint-test in namespace services-45 exposes endpoints map[] (15.338854ms elapsed)
STEP: Creating pod pod1 in namespace services-45
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-45 to expose endpoints map[pod1:[100]]
Dec 24 13:27:12.012: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.140146708s elapsed, will retry)
Dec 24 13:27:15.082: INFO: successfully validated that service multi-endpoint-test in namespace services-45 exposes endpoints map[pod1:[100]] (7.210357256s elapsed)
STEP: Creating pod pod2 in namespace services-45
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-45 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 24 13:27:19.649: INFO: Unexpected endpoints: found map[e00c284c-7c28-4b42-836b-0b932cc54694:[100]], expected map[pod1:[100] pod2:[101]] (4.557821155s elapsed, will retry)
Dec 24 13:27:23.739: INFO: successfully validated that service multi-endpoint-test in namespace services-45 exposes endpoints map[pod1:[100] pod2:[101]] (8.646953396s elapsed)
STEP: Deleting pod pod1 in namespace services-45
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-45 to expose endpoints map[pod2:[101]]
Dec 24 13:27:24.858: INFO: successfully validated that service multi-endpoint-test in namespace services-45 exposes endpoints map[pod2:[101]] (1.104753205s elapsed)
STEP: Deleting pod pod2 in namespace services-45
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-45 to expose endpoints map[]
Dec 24 13:27:24.981: INFO: successfully validated that service multi-endpoint-test in namespace services-45 exposes endpoints map[] (95.401762ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:27:25.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-45" for this suite.
Dec 24 13:27:49.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:27:49.446: INFO: namespace services-45 deletion completed in 22.732448678s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:41.809 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
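The endpoint maps above (pod1:[100], pod2:[101]) come from a single service with two named ports, each targeting a different container port. In the sketch below the target ports 100 and 101 are taken from the log, while the service ports 80 and 81 and the selector label are assumptions for illustration:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

var multiportService = &corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
	Spec: corev1.ServiceSpec{
		Selector: map[string]string{"app": "multi-endpoint-test"}, // placeholder label
		Ports: []corev1.ServicePort{
			// pod1 backs this port; it surfaced above as pod1:[100].
			{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
			// pod2 backs this one; it surfaced as pod2:[101].
			{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
		},
	},
}

Deleting pod1 and pod2 in turn shrinks the endpoints map back to empty, which is the last thing the test validates before tearing down.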
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:27:49.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-fa5e94e9-c7e0-4420-931c-730cbe14fe44
STEP: Creating a pod to test consume secrets
Dec 24 13:27:49.748: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-34602423-96f9-4be2-94a3-88b47a9276b8" in namespace "projected-7812" to be "success or failure"
Dec 24 13:27:49.765: INFO: Pod "pod-projected-secrets-34602423-96f9-4be2-94a3-88b47a9276b8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.159939ms
Dec 24 13:27:51.781: INFO: Pod "pod-projected-secrets-34602423-96f9-4be2-94a3-88b47a9276b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032344072s
Dec 24 13:27:53.791: INFO: Pod "pod-projected-secrets-34602423-96f9-4be2-94a3-88b47a9276b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043134923s
Dec 24 13:27:55.801: INFO: Pod "pod-projected-secrets-34602423-96f9-4be2-94a3-88b47a9276b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052978616s
Dec 24 13:27:57.828: INFO: Pod "pod-projected-secrets-34602423-96f9-4be2-94a3-88b47a9276b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079877769s
STEP: Saw pod success
Dec 24 13:27:57.828: INFO: Pod "pod-projected-secrets-34602423-96f9-4be2-94a3-88b47a9276b8" satisfied condition "success or failure"
Dec 24 13:27:57.858: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-34602423-96f9-4be2-94a3-88b47a9276b8 container projected-secret-volume-test: 
STEP: delete the pod
Dec 24 13:27:58.064: INFO: Waiting for pod pod-projected-secrets-34602423-96f9-4be2-94a3-88b47a9276b8 to disappear
Dec 24 13:27:58.079: INFO: Pod pod-projected-secrets-34602423-96f9-4be2-94a3-88b47a9276b8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:27:58.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7812" for this suite.
Dec 24 13:28:04.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:28:04.234: INFO: namespace projected-7812 deletion completed in 6.140720576s

• [SLOW TEST:14.788 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
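The "with mappings" variant projects the secret into the pod while renaming keys through items, so the file lands at a chosen path rather than under the key name. A sketch of the volume, reusing the secret name from the log but with placeholder key and path values:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

var projectedSecretVolume = corev1.Volume{
	Name: "projected-secret-volume",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{
						Name: "projected-secret-test-map-fa5e94e9-c7e0-4420-931c-730cbe14fe44",
					},
					// Map the stored key onto a different file name inside the mount.
					Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
				},
			}},
		},
	},
}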
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:28:04.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-26f864af-e869-4498-8f7e-6cbd4d10390b
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:28:14.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9139" for this suite.
Dec 24 13:28:36.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:28:36.908: INFO: namespace configmap-9139 deletion completed in 22.174595123s

• [SLOW TEST:32.674 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
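A ConfigMap keeps UTF-8 text under data and arbitrary bytes under binaryData; the test above writes one of each and waits until both files show up in the mounted volume. A sketch with placeholder keys and bytes (only the object name is taken from the log):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var binaryConfigMap = &corev1.ConfigMap{
	ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-26f864af-e869-4498-8f7e-6cbd4d10390b"},
	Data: map[string]string{
		"data-1": "value-1", // plain UTF-8 text
	},
	BinaryData: map[string][]byte{
		"dump.bin": {0xff, 0xfe, 0xfd}, // bytes that need not be valid UTF-8
	},
}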
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:28:36.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 13:28:37.033: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29882265-d357-45d7-b97d-4de6d16c0dac" in namespace "downward-api-7305" to be "success or failure"
Dec 24 13:28:37.041: INFO: Pod "downwardapi-volume-29882265-d357-45d7-b97d-4de6d16c0dac": Phase="Pending", Reason="", readiness=false. Elapsed: 7.715591ms
Dec 24 13:28:39.051: INFO: Pod "downwardapi-volume-29882265-d357-45d7-b97d-4de6d16c0dac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017912237s
Dec 24 13:28:41.060: INFO: Pod "downwardapi-volume-29882265-d357-45d7-b97d-4de6d16c0dac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02693312s
Dec 24 13:28:43.077: INFO: Pod "downwardapi-volume-29882265-d357-45d7-b97d-4de6d16c0dac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044193073s
Dec 24 13:28:45.088: INFO: Pod "downwardapi-volume-29882265-d357-45d7-b97d-4de6d16c0dac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055248194s
STEP: Saw pod success
Dec 24 13:28:45.088: INFO: Pod "downwardapi-volume-29882265-d357-45d7-b97d-4de6d16c0dac" satisfied condition "success or failure"
Dec 24 13:28:45.110: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-29882265-d357-45d7-b97d-4de6d16c0dac container client-container: 
STEP: delete the pod
Dec 24 13:28:45.175: INFO: Waiting for pod downwardapi-volume-29882265-d357-45d7-b97d-4de6d16c0dac to disappear
Dec 24 13:28:45.219: INFO: Pod downwardapi-volume-29882265-d357-45d7-b97d-4de6d16c0dac no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:28:45.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7305" for this suite.
Dec 24 13:28:51.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:28:51.466: INFO: namespace downward-api-7305 deletion completed in 6.155725513s

• [SLOW TEST:14.558 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
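The downward API volume plugin renders resource fields into files under the mount; here the file carries the container's CPU request, scaled by a divisor. A sketch of the volume (the file path, container name, and divisor are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

var downwardVolume = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		DownwardAPI: &corev1.DownwardAPIVolumeSource{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path: "cpu_request",
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: "client-container",
					Resource:      "requests.cpu",
					Divisor:       resource.MustParse("1m"), // report the request in millicores
				},
			}},
		},
	},
}

The pod's container cats the file, and the framework compares the logged value against the request declared in the pod spec, which is what the "success or failure" condition above is really checking.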
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:28:51.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-45e03af6-8a0f-4393-a145-7af0ad86c363 in namespace container-probe-627
Dec 24 13:28:59.631: INFO: Started pod test-webserver-45e03af6-8a0f-4393-a145-7af0ad86c363 in namespace container-probe-627
STEP: checking the pod's current state and verifying that restartCount is present
Dec 24 13:28:59.640: INFO: Initial restart count of pod test-webserver-45e03af6-8a0f-4393-a145-7af0ad86c363 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:33:01.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-627" for this suite.
Dec 24 13:33:07.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:33:07.627: INFO: namespace container-probe-627 deletion completed in 6.222935089s

• [SLOW TEST:256.161 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
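This case is the quiet half of the probe pair: a healthy HTTP liveness endpoint is polled for roughly four minutes (note the gap between 13:28:59 and 13:33:01 above) and restartCount must stay at 0. A sketch of such a container; the path, port, and timings are assumptions, and the Handler field shown here was renamed ProbeHandler in newer k8s.io/api releases:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

var probedContainer = corev1.Container{
	Name:  "test-webserver",
	Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0", // placeholder tag
	LivenessProbe: &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
		},
		InitialDelaySeconds: 15,
		TimeoutSeconds:      5,
		FailureThreshold:    3, // three consecutive failures before the kubelet restarts it
	},
}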
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:33:07.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 13:33:07.734: INFO: Waiting up to 5m0s for pod "downwardapi-volume-185c514a-847e-4b1a-b73f-5e9a3ecf3ee7" in namespace "projected-880" to be "success or failure"
Dec 24 13:33:07.746: INFO: Pod "downwardapi-volume-185c514a-847e-4b1a-b73f-5e9a3ecf3ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.512964ms
Dec 24 13:33:09.755: INFO: Pod "downwardapi-volume-185c514a-847e-4b1a-b73f-5e9a3ecf3ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021086476s
Dec 24 13:33:11.765: INFO: Pod "downwardapi-volume-185c514a-847e-4b1a-b73f-5e9a3ecf3ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031293097s
Dec 24 13:33:13.805: INFO: Pod "downwardapi-volume-185c514a-847e-4b1a-b73f-5e9a3ecf3ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071388515s
Dec 24 13:33:15.814: INFO: Pod "downwardapi-volume-185c514a-847e-4b1a-b73f-5e9a3ecf3ee7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080110181s
STEP: Saw pod success
Dec 24 13:33:15.814: INFO: Pod "downwardapi-volume-185c514a-847e-4b1a-b73f-5e9a3ecf3ee7" satisfied condition "success or failure"
Dec 24 13:33:15.818: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-185c514a-847e-4b1a-b73f-5e9a3ecf3ee7 container client-container: 
STEP: delete the pod
Dec 24 13:33:15.871: INFO: Waiting for pod downwardapi-volume-185c514a-847e-4b1a-b73f-5e9a3ecf3ee7 to disappear
Dec 24 13:33:15.885: INFO: Pod downwardapi-volume-185c514a-847e-4b1a-b73f-5e9a3ecf3ee7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:33:15.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-880" for this suite.
Dec 24 13:33:21.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:33:22.026: INFO: namespace projected-880 deletion completed in 6.133171134s

• [SLOW TEST:14.399 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
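Same downward API machinery as the cpu-request case earlier, but routed through a projected volume and reporting limits.memory instead. A sketch of the projection source (file name, container name, and divisor are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

var memoryLimitSource = corev1.VolumeProjection{
	DownwardAPI: &corev1.DownwardAPIProjection{
		Items: []corev1.DownwardAPIVolumeFile{{
			Path: "memory_limit",
			ResourceFieldRef: &corev1.ResourceFieldSelector{
				ContainerName: "client-container",
				Resource:      "limits.memory",
				Divisor:       resource.MustParse("1"), // plain bytes
			},
		}},
	},
}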
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:33:22.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 24 13:33:22.181: INFO: Waiting up to 5m0s for pod "pod-d5aded48-9661-4c31-935b-e87e320423da" in namespace "emptydir-1921" to be "success or failure"
Dec 24 13:33:22.214: INFO: Pod "pod-d5aded48-9661-4c31-935b-e87e320423da": Phase="Pending", Reason="", readiness=false. Elapsed: 33.110793ms
Dec 24 13:33:24.224: INFO: Pod "pod-d5aded48-9661-4c31-935b-e87e320423da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042191707s
Dec 24 13:33:26.233: INFO: Pod "pod-d5aded48-9661-4c31-935b-e87e320423da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051179403s
Dec 24 13:33:28.241: INFO: Pod "pod-d5aded48-9661-4c31-935b-e87e320423da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059587555s
Dec 24 13:33:30.252: INFO: Pod "pod-d5aded48-9661-4c31-935b-e87e320423da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070362247s
STEP: Saw pod success
Dec 24 13:33:30.252: INFO: Pod "pod-d5aded48-9661-4c31-935b-e87e320423da" satisfied condition "success or failure"
Dec 24 13:33:30.255: INFO: Trying to get logs from node iruya-node pod pod-d5aded48-9661-4c31-935b-e87e320423da container test-container: 
STEP: delete the pod
Dec 24 13:33:30.405: INFO: Waiting for pod pod-d5aded48-9661-4c31-935b-e87e320423da to disappear
Dec 24 13:33:30.411: INFO: Pod pod-d5aded48-9661-4c31-935b-e87e320423da no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:33:30.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1921" for this suite.
Dec 24 13:33:36.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:33:36.705: INFO: namespace emptydir-1921 deletion completed in 6.288351345s

• [SLOW TEST:14.679 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
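The (root,0777,default) triple in the test name decodes as: run as root, expect mode 0777 on the mount, and use the default medium (node disk) rather than tmpfs. A busybox-flavored sketch that prints the mode the way such an assertion needs it; the image and command are placeholders, not the suite's actual mounttest invocation:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var emptyDirPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0777"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			// A zero-valued EmptyDirVolumeSource selects the default, disk-backed
			// medium; corev1.StorageMediumMemory would request tmpfs instead.
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
		Containers: []corev1.Container{{
			Name:         "test-container",
			Image:        "busybox",
			Command:      []string{"sh", "-c", "stat -c '%a' /test-volume"}, // prints 777 on success
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
	},
}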
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:33:36.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 24 13:33:36.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5326'
Dec 24 13:33:38.875: INFO: stderr: ""
Dec 24 13:33:38.875: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 24 13:33:38.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5326'
Dec 24 13:33:39.144: INFO: stderr: ""
Dec 24 13:33:39.144: INFO: stdout: "update-demo-nautilus-5jpqr update-demo-nautilus-rhwtk "
Dec 24 13:33:39.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jpqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5326'
Dec 24 13:33:39.276: INFO: stderr: ""
Dec 24 13:33:39.276: INFO: stdout: ""
Dec 24 13:33:39.276: INFO: update-demo-nautilus-5jpqr is created but not running
Dec 24 13:33:44.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5326'
Dec 24 13:33:44.521: INFO: stderr: ""
Dec 24 13:33:44.521: INFO: stdout: "update-demo-nautilus-5jpqr update-demo-nautilus-rhwtk "
Dec 24 13:33:44.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jpqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5326'
Dec 24 13:33:46.220: INFO: stderr: ""
Dec 24 13:33:46.220: INFO: stdout: ""
Dec 24 13:33:46.220: INFO: update-demo-nautilus-5jpqr is created but not running
Dec 24 13:33:51.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5326'
Dec 24 13:33:51.456: INFO: stderr: ""
Dec 24 13:33:51.456: INFO: stdout: "update-demo-nautilus-5jpqr update-demo-nautilus-rhwtk "
Dec 24 13:33:51.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jpqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5326'
Dec 24 13:33:51.586: INFO: stderr: ""
Dec 24 13:33:51.586: INFO: stdout: "true"
Dec 24 13:33:51.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jpqr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5326'
Dec 24 13:33:51.742: INFO: stderr: ""
Dec 24 13:33:51.742: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 13:33:51.742: INFO: validating pod update-demo-nautilus-5jpqr
Dec 24 13:33:51.751: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 13:33:51.751: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 13:33:51.751: INFO: update-demo-nautilus-5jpqr is verified up and running
Dec 24 13:33:51.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rhwtk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5326'
Dec 24 13:33:51.958: INFO: stderr: ""
Dec 24 13:33:51.958: INFO: stdout: "true"
Dec 24 13:33:51.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rhwtk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5326'
Dec 24 13:33:52.145: INFO: stderr: ""
Dec 24 13:33:52.145: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 13:33:52.145: INFO: validating pod update-demo-nautilus-rhwtk
Dec 24 13:33:52.193: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 13:33:52.193: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 13:33:52.193: INFO: update-demo-nautilus-rhwtk is verified up and running
STEP: scaling down the replication controller
Dec 24 13:33:52.198: INFO: scanned /root for discovery docs: 
Dec 24 13:33:52.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5326'
Dec 24 13:33:53.564: INFO: stderr: ""
Dec 24 13:33:53.564: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 24 13:33:53.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5326'
Dec 24 13:33:53.771: INFO: stderr: ""
Dec 24 13:33:53.771: INFO: stdout: "update-demo-nautilus-5jpqr update-demo-nautilus-rhwtk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 24 13:33:58.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5326'
Dec 24 13:33:59.006: INFO: stderr: ""
Dec 24 13:33:59.006: INFO: stdout: "update-demo-nautilus-5jpqr update-demo-nautilus-rhwtk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 24 13:34:04.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5326'
Dec 24 13:34:04.127: INFO: stderr: ""
Dec 24 13:34:04.127: INFO: stdout: "update-demo-nautilus-5jpqr update-demo-nautilus-rhwtk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 24 13:34:09.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5326'
Dec 24 13:34:09.312: INFO: stderr: ""
Dec 24 13:34:09.312: INFO: stdout: "update-demo-nautilus-rhwtk "
Dec 24 13:34:09.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rhwtk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5326'
Dec 24 13:34:09.422: INFO: stderr: ""
Dec 24 13:34:09.422: INFO: stdout: "true"
Dec 24 13:34:09.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rhwtk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5326'
Dec 24 13:34:09.510: INFO: stderr: ""
Dec 24 13:34:09.510: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 13:34:09.511: INFO: validating pod update-demo-nautilus-rhwtk
Dec 24 13:34:09.518: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 13:34:09.518: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 13:34:09.518: INFO: update-demo-nautilus-rhwtk is verified up and running
STEP: scaling up the replication controller
Dec 24 13:34:09.520: INFO: scanned /root for discovery docs: 
Dec 24 13:34:09.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5326'
Dec 24 13:34:11.026: INFO: stderr: ""
Dec 24 13:34:11.026: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 24 13:34:11.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5326'
Dec 24 13:34:11.403: INFO: stderr: ""
Dec 24 13:34:11.403: INFO: stdout: "update-demo-nautilus-kx2kz update-demo-nautilus-rhwtk "
Dec 24 13:34:11.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kx2kz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5326'
Dec 24 13:34:11.804: INFO: stderr: ""
Dec 24 13:34:11.804: INFO: stdout: ""
Dec 24 13:34:11.804: INFO: update-demo-nautilus-kx2kz is created but not running
Dec 24 13:34:16.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5326'
Dec 24 13:34:16.993: INFO: stderr: ""
Dec 24 13:34:16.993: INFO: stdout: "update-demo-nautilus-kx2kz update-demo-nautilus-rhwtk "
Dec 24 13:34:16.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kx2kz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5326'
Dec 24 13:34:17.132: INFO: stderr: ""
Dec 24 13:34:17.132: INFO: stdout: "true"
Dec 24 13:34:17.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kx2kz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5326'
Dec 24 13:34:17.241: INFO: stderr: ""
Dec 24 13:34:17.242: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 13:34:17.242: INFO: validating pod update-demo-nautilus-kx2kz
Dec 24 13:34:17.249: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 13:34:17.249: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 24 13:34:17.249: INFO: update-demo-nautilus-kx2kz is verified up and running
Dec 24 13:34:17.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rhwtk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5326'
Dec 24 13:34:17.347: INFO: stderr: ""
Dec 24 13:34:17.347: INFO: stdout: "true"
Dec 24 13:34:17.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rhwtk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5326'
Dec 24 13:34:17.471: INFO: stderr: ""
Dec 24 13:34:17.471: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 13:34:17.471: INFO: validating pod update-demo-nautilus-rhwtk
Dec 24 13:34:17.481: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 13:34:17.481: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 24 13:34:17.481: INFO: update-demo-nautilus-rhwtk is verified up and running
STEP: using delete to clean up resources
Dec 24 13:34:17.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5326'
Dec 24 13:34:17.573: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 13:34:17.573: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 24 13:34:17.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5326'
Dec 24 13:34:17.723: INFO: stderr: "No resources found.\n"
Dec 24 13:34:17.723: INFO: stdout: ""
Dec 24 13:34:17.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5326 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 24 13:34:17.938: INFO: stderr: ""
Dec 24 13:34:17.939: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:34:17.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5326" for this suite.
Dec 24 13:34:39.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:34:40.120: INFO: namespace kubectl-5326 deletion completed in 22.17266652s

• [SLOW TEST:63.414 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
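
The scale exercise above is driven entirely by kubectl and go-template queries; a minimal way to repeat it by hand, reusing the same commands the suite runs (namespace and RC name are taken from this log, so adjust them for another cluster):

  # Scale the ReplicationController and wait up to 5 minutes
  $ kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5326
  # List the pods behind the RC's selector, as the test polls
  $ kubectl get pods -l name=update-demo --namespace=kubectl-5326 \
      -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
  # Ask whether the update-demo container in a given pod is running
  $ kubectl get pod <pod-name> --namespace=kubectl-5326 -o template \
      --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
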
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:34:40.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 24 13:34:56.291: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 13:34:56.310: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 13:34:58.310: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 13:34:58.317: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 13:35:00.310: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 13:35:00.319: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 13:35:02.310: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 13:35:02.317: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 13:35:04.310: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 13:35:04.320: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 13:35:06.310: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 13:35:06.319: INFO: Pod pod-with-prestop-http-hook still exists
Dec 24 13:35:08.310: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 24 13:35:08.321: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:35:08.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8894" for this suite.
Dec 24 13:35:32.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:35:32.564: INFO: namespace container-lifecycle-hook-8894 deletion completed in 24.185554658s

• [SLOW TEST:52.444 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
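
A pod shaped the way this prestop test expects can be sketched as below; the handler path and address are assumptions (the suite first creates a separate handler pod and points the hook at it):

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-http-hook
  spec:
    containers:
    - name: pod-with-prestop-http-hook
      image: docker.io/library/nginx:1.14-alpine
      lifecycle:
        preStop:
          httpGet:
            path: /echo?msg=prestop   # assumed handler path
            port: 8080
            host: 10.32.0.5           # assumed handler pod IP
  EOF
  # Deleting the pod fires the preStop hook before the container is stopped
  $ kubectl delete pod pod-with-prestop-http-hook
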
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:35:32.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 24 13:35:32.692: INFO: Waiting up to 5m0s for pod "pod-33c58bea-2340-471d-a80b-8649224bd5c0" in namespace "emptydir-9879" to be "success or failure"
Dec 24 13:35:32.698: INFO: Pod "pod-33c58bea-2340-471d-a80b-8649224bd5c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212884ms
Dec 24 13:35:34.709: INFO: Pod "pod-33c58bea-2340-471d-a80b-8649224bd5c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016773154s
Dec 24 13:35:36.716: INFO: Pod "pod-33c58bea-2340-471d-a80b-8649224bd5c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02428756s
Dec 24 13:35:38.734: INFO: Pod "pod-33c58bea-2340-471d-a80b-8649224bd5c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041912766s
Dec 24 13:35:40.746: INFO: Pod "pod-33c58bea-2340-471d-a80b-8649224bd5c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05377343s
STEP: Saw pod success
Dec 24 13:35:40.746: INFO: Pod "pod-33c58bea-2340-471d-a80b-8649224bd5c0" satisfied condition "success or failure"
Dec 24 13:35:40.750: INFO: Trying to get logs from node iruya-node pod pod-33c58bea-2340-471d-a80b-8649224bd5c0 container test-container: 
STEP: delete the pod
Dec 24 13:35:40.803: INFO: Waiting for pod pod-33c58bea-2340-471d-a80b-8649224bd5c0 to disappear
Dec 24 13:35:40.808: INFO: Pod pod-33c58bea-2340-471d-a80b-8649224bd5c0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:35:40.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9879" for this suite.
Dec 24 13:35:46.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:35:46.979: INFO: namespace emptydir-9879 deletion completed in 6.165429248s

• [SLOW TEST:14.414 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
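
The tmpfs case reduces to an emptyDir with medium Memory; a minimal sketch that prints the mount and its mode (image, command, and mount path are arbitrary choices, not the suite's):

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "mount | grep test-volume; ls -ld /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory    # back the volume with tmpfs instead of node disk
  EOF
  $ kubectl logs emptydir-tmpfs-demo    # expect a tmpfs mount and mode drwxrwxrwx
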
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:35:46.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 13:35:47.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:35:55.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4533" for this suite.
Dec 24 13:36:37.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:36:37.346: INFO: namespace pods-4533 deletion completed in 42.163210459s

• [SLOW TEST:50.367 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
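
The websocket case exercises the pods/log subresource. The same endpoint can be read over plain HTTP through kubectl proxy (a websocket client negotiates an upgrade on the identical URL); pod name and namespace here are placeholders:

  $ kubectl proxy --port=8001 &
  $ curl "http://127.0.0.1:8001/api/v1/namespaces/default/pods/example-pod/log?follow=true"
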
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:36:37.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:37:37.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7155" for this suite.
Dec 24 13:37:59.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:37:59.764: INFO: namespace container-probe-7155 deletion completed in 22.238837196s

• [SLOW TEST:82.418 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
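
The probe under test always fails, so the pod never turns Ready but is also never restarted (readiness, unlike liveness, carries no restart semantics). A minimal reproduction with arbitrary image and timings:

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: never-ready
  spec:
    containers:
    - name: never-ready
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      readinessProbe:
        exec:
          command: ["/bin/false"]   # always fails: pod stays NotReady
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  $ kubectl get pod never-ready     # READY 0/1 and RESTARTS 0, indefinitely
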
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:37:59.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 24 13:37:59.988: INFO: Waiting up to 5m0s for pod "pod-16fd2287-fcf9-45f7-b8e3-633ac3685b20" in namespace "emptydir-4796" to be "success or failure"
Dec 24 13:38:00.032: INFO: Pod "pod-16fd2287-fcf9-45f7-b8e3-633ac3685b20": Phase="Pending", Reason="", readiness=false. Elapsed: 43.889767ms
Dec 24 13:38:02.041: INFO: Pod "pod-16fd2287-fcf9-45f7-b8e3-633ac3685b20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052273705s
Dec 24 13:38:04.054: INFO: Pod "pod-16fd2287-fcf9-45f7-b8e3-633ac3685b20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065601923s
Dec 24 13:38:06.080: INFO: Pod "pod-16fd2287-fcf9-45f7-b8e3-633ac3685b20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091542886s
Dec 24 13:38:08.092: INFO: Pod "pod-16fd2287-fcf9-45f7-b8e3-633ac3685b20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.103801576s
STEP: Saw pod success
Dec 24 13:38:08.092: INFO: Pod "pod-16fd2287-fcf9-45f7-b8e3-633ac3685b20" satisfied condition "success or failure"
Dec 24 13:38:08.096: INFO: Trying to get logs from node iruya-node pod pod-16fd2287-fcf9-45f7-b8e3-633ac3685b20 container test-container: 
STEP: delete the pod
Dec 24 13:38:08.443: INFO: Waiting for pod pod-16fd2287-fcf9-45f7-b8e3-633ac3685b20 to disappear
Dec 24 13:38:08.450: INFO: Pod pod-16fd2287-fcf9-45f7-b8e3-633ac3685b20 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:38:08.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4796" for this suite.
Dec 24 13:38:14.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:38:14.676: INFO: namespace emptydir-4796 deletion completed in 6.21510061s

• [SLOW TEST:14.911 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
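
The (non-root,0777,default) case: a non-root UID writes into a default-medium emptyDir and the file ends up world-accessible. A sketch with assumed UID, image, and paths:

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-nonroot-0777
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000               # non-root
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                  # default medium: node disk
  EOF
  $ kubectl logs emptydir-nonroot-0777   # expect -rwxrwxrwx owned by uid 1000
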
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:38:14.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Dec 24 13:38:24.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-b6ca3951-d23f-47b8-b1f0-e23f633eae1c -c busybox-main-container --namespace=emptydir-8093 -- cat /usr/share/volumeshare/shareddata.txt'
Dec 24 13:38:25.385: INFO: stderr: ""
Dec 24 13:38:25.385: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:38:25.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8093" for this suite.
Dec 24 13:38:31.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:38:31.561: INFO: namespace emptydir-8093 deletion completed in 6.167873864s

• [SLOW TEST:16.884 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:38:31.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1224 13:38:48.219814       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 13:38:48.219: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:38:48.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7610" for this suite.
Dec 24 13:39:02.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:39:02.903: INFO: namespace gc-7610 deletion completed in 14.400249199s

• [SLOW TEST:31.342 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:39:02.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 24 13:39:02.987: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 24 13:39:02.997: INFO: Waiting for terminating namespaces to be deleted...
Dec 24 13:39:03.001: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 24 13:39:03.020: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 24 13:39:03.020: INFO: 	Container weave ready: true, restart count 0
Dec 24 13:39:03.020: INFO: 	Container weave-npc ready: true, restart count 0
Dec 24 13:39:03.020: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 24 13:39:03.020: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 24 13:39:03.020: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 24 13:39:03.045: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 24 13:39:03.045: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 24 13:39:03.045: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 24 13:39:03.045: INFO: 	Container coredns ready: true, restart count 0
Dec 24 13:39:03.045: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 24 13:39:03.045: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 24 13:39:03.045: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 24 13:39:03.045: INFO: 	Container weave ready: true, restart count 0
Dec 24 13:39:03.045: INFO: 	Container weave-npc ready: true, restart count 0
Dec 24 13:39:03.045: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 24 13:39:03.045: INFO: 	Container coredns ready: true, restart count 0
Dec 24 13:39:03.045: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 24 13:39:03.045: INFO: 	Container etcd ready: true, restart count 0
Dec 24 13:39:03.045: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 24 13:39:03.045: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 24 13:39:03.045: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 24 13:39:03.045: INFO: 	Container kube-controller-manager ready: true, restart count 10
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-4635aee1-ff4c-40d4-8322-9f19958b50aa 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-4635aee1-ff4c-40d4-8322-9f19958b50aa off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-4635aee1-ff4c-40d4-8322-9f19958b50aa
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:39:21.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7004" for this suite.
Dec 24 13:39:41.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:39:41.533: INFO: namespace sched-pred-7004 deletion completed in 20.209152203s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:38.628 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
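
The predicate test labels a node, schedules against it with nodeSelector, then removes the label. The same round trip by hand (label key/value are placeholders; the node name comes from this log):

  $ kubectl label node iruya-node example.com/e2e=42
  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: with-labels
  spec:
    nodeSelector:
      example.com/e2e: "42"        # must match the node label exactly
    containers:
    - name: with-labels
      image: docker.io/library/nginx:1.14-alpine
  EOF
  $ kubectl get pod with-labels -o wide             # scheduled onto iruya-node
  $ kubectl label node iruya-node example.com/e2e-  # trailing '-' removes the label
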
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:39:41.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-1340c166-eb7a-49f2-8f53-23fcce4bc688
STEP: Creating configMap with name cm-test-opt-upd-d81fab9d-fb05-45dc-b12c-e8bf6ada0e7d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-1340c166-eb7a-49f2-8f53-23fcce4bc688
STEP: Updating configmap cm-test-opt-upd-d81fab9d-fb05-45dc-b12c-e8bf6ada0e7d
STEP: Creating configMap with name cm-test-opt-create-dfb71715-7327-4678-9cc2-0b021926b76a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:41:17.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6870" for this suite.
Dec 24 13:41:37.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:41:38.134: INFO: namespace configmap-6870 deletion completed in 20.159537154s

• [SLOW TEST:116.601 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:41:38.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 13:41:38.773: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2fd409ad-6708-49e6-8c8e-aaf488844e73", Controller:(*bool)(0xc0032a94c2), BlockOwnerDeletion:(*bool)(0xc0032a94c3)}}
Dec 24 13:41:38.796: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ece2af7e-bca4-4527-b3c7-98c18e2da7d8", Controller:(*bool)(0xc0030f192a), BlockOwnerDeletion:(*bool)(0xc0030f192b)}}
Dec 24 13:41:38.827: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d5207e5c-e09e-4fb0-897b-8a6829386554", Controller:(*bool)(0xc0032dd7ca), BlockOwnerDeletion:(*bool)(0xc0032dd7cb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:41:43.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1811" for this suite.
Dec 24 13:41:50.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:41:50.174: INFO: namespace gc-1811 deletion completed in 6.18095625s

• [SLOW TEST:12.038 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:41:50.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 13:41:50.298: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c46810b-6c65-4229-94eb-829b84418fbb" in namespace "downward-api-299" to be "success or failure"
Dec 24 13:41:50.332: INFO: Pod "downwardapi-volume-1c46810b-6c65-4229-94eb-829b84418fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 34.032555ms
Dec 24 13:41:52.341: INFO: Pod "downwardapi-volume-1c46810b-6c65-4229-94eb-829b84418fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042767782s
Dec 24 13:41:54.404: INFO: Pod "downwardapi-volume-1c46810b-6c65-4229-94eb-829b84418fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106252442s
Dec 24 13:41:56.426: INFO: Pod "downwardapi-volume-1c46810b-6c65-4229-94eb-829b84418fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128038643s
Dec 24 13:41:58.444: INFO: Pod "downwardapi-volume-1c46810b-6c65-4229-94eb-829b84418fbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.145517902s
STEP: Saw pod success
Dec 24 13:41:58.444: INFO: Pod "downwardapi-volume-1c46810b-6c65-4229-94eb-829b84418fbb" satisfied condition "success or failure"
Dec 24 13:41:58.451: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1c46810b-6c65-4229-94eb-829b84418fbb container client-container: 
STEP: delete the pod
Dec 24 13:41:58.757: INFO: Waiting for pod downwardapi-volume-1c46810b-6c65-4229-94eb-829b84418fbb to disappear
Dec 24 13:41:58.766: INFO: Pod downwardapi-volume-1c46810b-6c65-4229-94eb-829b84418fbb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:41:58.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-299" for this suite.
Dec 24 13:42:04.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:42:04.882: INFO: namespace downward-api-299 deletion completed in 6.110243019s

• [SLOW TEST:14.707 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:42:04.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:42:05.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1918" for this suite.
Dec 24 13:42:11.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:42:11.280: INFO: namespace kubelet-test-1918 deletion completed in 6.210453586s

• [SLOW TEST:6.397 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:42:11.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 13:42:11.383: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfb5ff10-877f-4562-920c-64e09827c6d7" in namespace "downward-api-819" to be "success or failure"
Dec 24 13:42:11.441: INFO: Pod "downwardapi-volume-cfb5ff10-877f-4562-920c-64e09827c6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 57.504728ms
Dec 24 13:42:13.457: INFO: Pod "downwardapi-volume-cfb5ff10-877f-4562-920c-64e09827c6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074185811s
Dec 24 13:42:15.470: INFO: Pod "downwardapi-volume-cfb5ff10-877f-4562-920c-64e09827c6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086523056s
Dec 24 13:42:17.486: INFO: Pod "downwardapi-volume-cfb5ff10-877f-4562-920c-64e09827c6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102620741s
Dec 24 13:42:19.501: INFO: Pod "downwardapi-volume-cfb5ff10-877f-4562-920c-64e09827c6d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.117828637s
STEP: Saw pod success
Dec 24 13:42:19.501: INFO: Pod "downwardapi-volume-cfb5ff10-877f-4562-920c-64e09827c6d7" satisfied condition "success or failure"
Dec 24 13:42:19.506: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cfb5ff10-877f-4562-920c-64e09827c6d7 container client-container: 
STEP: delete the pod
Dec 24 13:42:19.560: INFO: Waiting for pod downwardapi-volume-cfb5ff10-877f-4562-920c-64e09827c6d7 to disappear
Dec 24 13:42:19.564: INFO: Pod downwardapi-volume-cfb5ff10-877f-4562-920c-64e09827c6d7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:42:19.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-819" for this suite.
Dec 24 13:42:25.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:42:25.735: INFO: namespace downward-api-819 deletion completed in 6.165657845s

• [SLOW TEST:14.455 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:42:25.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7112
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 24 13:42:25.818: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 24 13:43:04.191: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7112 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 13:43:04.191: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 13:43:04.595: INFO: Found all expected endpoints: [netserver-0]
Dec 24 13:43:04.605: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7112 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 13:43:04.605: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 13:43:04.988: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:43:04.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7112" for this suite.
Dec 24 13:43:31.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:43:31.171: INFO: namespace pod-network-test-7112 deletion completed in 26.172346399s

• [SLOW TEST:65.435 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:43:31.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:43:31.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6992" for this suite.
Dec 24 13:43:53.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:43:53.513: INFO: namespace pods-6992 deletion completed in 22.25138662s

• [SLOW TEST:22.341 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:43:53.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 13:43:53.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2161'
Dec 24 13:43:55.951: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 24 13:43:55.951: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Dec 24 13:43:55.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-2161'
Dec 24 13:43:56.206: INFO: stderr: ""
Dec 24 13:43:56.206: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:43:56.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2161" for this suite.
Dec 24 13:44:02.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:44:02.333: INFO: namespace kubectl-2161 deletion completed in 6.121485s

• [SLOW TEST:8.820 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:44:02.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8188
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 24 13:44:02.429: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 24 13:44:34.659: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8188 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 13:44:34.660: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 13:44:36.173: INFO: Found all expected endpoints: [netserver-0]
Dec 24 13:44:36.183: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8188 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 13:44:36.183: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 13:44:37.587: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:44:37.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8188" for this suite.
Dec 24 13:45:03.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:45:03.797: INFO: namespace pod-network-test-8188 deletion completed in 26.192738228s

• [SLOW TEST:61.464 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:45:03.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 13:45:03.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-860'
Dec 24 13:45:04.107: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 24 13:45:04.107: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Dec 24 13:45:06.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-860'
Dec 24 13:45:06.485: INFO: stderr: ""
Dec 24 13:45:06.485: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:45:06.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-860" for this suite.
Dec 24 13:45:12.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:45:12.721: INFO: namespace kubectl-860 deletion completed in 6.222145643s

• [SLOW TEST:8.922 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:45:12.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 24 13:48:13.071: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 13:48:13.097: INFO: Pod pod-with-poststart-exec-hook still exists
[... 45 identical 2-second polls elided (Dec 24 13:48:15 through 13:49:43): "Waiting for pod pod-with-poststart-exec-hook to disappear" / "Pod pod-with-poststart-exec-hook still exists" ...]
Dec 24 13:49:45.097: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 13:49:45.104: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 24 13:49:47.097: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 24 13:49:47.109: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:49:47.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9332" for this suite.
Dec 24 13:50:09.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:50:09.295: INFO: namespace container-lifecycle-hook-9332 deletion completed in 22.181611425s

• [SLOW TEST:296.574 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
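For reference, a minimal sketch of a pod like pod-with-poststart-exec-hook above (name, image, and commands are assumptions, not taken from the test's actual manifest in lifecycle_hook.go):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container right after it starts; the container
          # does not report Running until the hook has completed.
          command: ["sh", "-c", "echo poststart >> /tmp/hook.log"]
EOF
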
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:50:09.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 24 13:50:09.435: INFO: Waiting up to 5m0s for pod "pod-9bfad8ca-13a9-4c14-8f87-52a8baba3891" in namespace "emptydir-458" to be "success or failure"
Dec 24 13:50:09.509: INFO: Pod "pod-9bfad8ca-13a9-4c14-8f87-52a8baba3891": Phase="Pending", Reason="", readiness=false. Elapsed: 73.62076ms
Dec 24 13:50:11.519: INFO: Pod "pod-9bfad8ca-13a9-4c14-8f87-52a8baba3891": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08366366s
Dec 24 13:50:13.549: INFO: Pod "pod-9bfad8ca-13a9-4c14-8f87-52a8baba3891": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113822142s
Dec 24 13:50:15.555: INFO: Pod "pod-9bfad8ca-13a9-4c14-8f87-52a8baba3891": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119827408s
Dec 24 13:50:17.570: INFO: Pod "pod-9bfad8ca-13a9-4c14-8f87-52a8baba3891": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134539599s
STEP: Saw pod success
Dec 24 13:50:17.570: INFO: Pod "pod-9bfad8ca-13a9-4c14-8f87-52a8baba3891" satisfied condition "success or failure"
Dec 24 13:50:17.575: INFO: Trying to get logs from node iruya-node pod pod-9bfad8ca-13a9-4c14-8f87-52a8baba3891 container test-container: 
STEP: delete the pod
Dec 24 13:50:17.686: INFO: Waiting for pod pod-9bfad8ca-13a9-4c14-8f87-52a8baba3891 to disappear
Dec 24 13:50:17.704: INFO: Pod pod-9bfad8ca-13a9-4c14-8f87-52a8baba3891 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:50:17.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-458" for this suite.
Dec 24 13:50:23.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:50:24.031: INFO: namespace emptydir-458 deletion completed in 6.202118775s

• [SLOW TEST:14.735 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
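A minimal sketch of what the (non-root,0666,tmpfs) case exercises, assuming a busybox image and an arbitrary non-root UID (the real test's image and UID differ):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs
spec:
  securityContext:
    runAsUser: 1001            # non-root
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Create a file with mode 0666 on the tmpfs mount and show its perms.
    command: ["sh", "-c", "touch /mnt/test/file && chmod 0666 /mnt/test/file && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
EOF
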
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:50:24.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 24 13:50:24.994: INFO: Pod name wrapped-volume-race-6de62cf0-1542-41e9-9217-b3056636e86d: Found 0 pods out of 5
Dec 24 13:50:30.454: INFO: Pod name wrapped-volume-race-6de62cf0-1542-41e9-9217-b3056636e86d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6de62cf0-1542-41e9-9217-b3056636e86d in namespace emptydir-wrapper-9725, will wait for the garbage collector to delete the pods
Dec 24 13:51:00.614: INFO: Deleting ReplicationController wrapped-volume-race-6de62cf0-1542-41e9-9217-b3056636e86d took: 34.333069ms
Dec 24 13:51:01.014: INFO: Terminating ReplicationController wrapped-volume-race-6de62cf0-1542-41e9-9217-b3056636e86d pods took: 400.515998ms
STEP: Creating RC which spawns configmap-volume pods
Dec 24 13:51:47.692: INFO: Pod name wrapped-volume-race-2b9bdeca-01aa-4690-87d7-81eccf143523: Found 0 pods out of 5
Dec 24 13:51:52.707: INFO: Pod name wrapped-volume-race-2b9bdeca-01aa-4690-87d7-81eccf143523: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2b9bdeca-01aa-4690-87d7-81eccf143523 in namespace emptydir-wrapper-9725, will wait for the garbage collector to delete the pods
Dec 24 13:52:22.804: INFO: Deleting ReplicationController wrapped-volume-race-2b9bdeca-01aa-4690-87d7-81eccf143523 took: 13.337972ms
Dec 24 13:52:23.204: INFO: Terminating ReplicationController wrapped-volume-race-2b9bdeca-01aa-4690-87d7-81eccf143523 pods took: 400.381864ms
STEP: Creating RC which spawns configmap-volume pods
Dec 24 13:53:07.675: INFO: Pod name wrapped-volume-race-24903e24-5572-418b-8952-721b4dfecce4: Found 0 pods out of 5
Dec 24 13:53:12.731: INFO: Pod name wrapped-volume-race-24903e24-5572-418b-8952-721b4dfecce4: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-24903e24-5572-418b-8952-721b4dfecce4 in namespace emptydir-wrapper-9725, will wait for the garbage collector to delete the pods
Dec 24 13:53:42.955: INFO: Deleting ReplicationController wrapped-volume-race-24903e24-5572-418b-8952-721b4dfecce4 took: 17.869929ms
Dec 24 13:53:43.555: INFO: Terminating ReplicationController wrapped-volume-race-24903e24-5572-418b-8952-721b4dfecce4 pods took: 600.796044ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:54:28.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9725" for this suite.
Dec 24 13:54:38.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:54:38.751: INFO: namespace emptydir-wrapper-9725 deletion completed in 10.152880787s

• [SLOW TEST:254.718 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:54:38.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-ece0bad3-b72e-4af0-8fee-060da5bb4ca6
STEP: Creating a pod to test consume configMaps
Dec 24 13:54:38.890: INFO: Waiting up to 5m0s for pod "pod-configmaps-057aad5d-583e-4ea8-99df-85dea0fd5ed5" in namespace "configmap-7540" to be "success or failure"
Dec 24 13:54:38.936: INFO: Pod "pod-configmaps-057aad5d-583e-4ea8-99df-85dea0fd5ed5": Phase="Pending", Reason="", readiness=false. Elapsed: 45.756115ms
Dec 24 13:54:40.978: INFO: Pod "pod-configmaps-057aad5d-583e-4ea8-99df-85dea0fd5ed5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088149307s
Dec 24 13:54:43.015: INFO: Pod "pod-configmaps-057aad5d-583e-4ea8-99df-85dea0fd5ed5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125022412s
Dec 24 13:54:45.830: INFO: Pod "pod-configmaps-057aad5d-583e-4ea8-99df-85dea0fd5ed5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.940509476s
Dec 24 13:54:47.857: INFO: Pod "pod-configmaps-057aad5d-583e-4ea8-99df-85dea0fd5ed5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.967523325s
Dec 24 13:54:49.872: INFO: Pod "pod-configmaps-057aad5d-583e-4ea8-99df-85dea0fd5ed5": Phase="Running", Reason="", readiness=true. Elapsed: 10.981982168s
Dec 24 13:54:51.914: INFO: Pod "pod-configmaps-057aad5d-583e-4ea8-99df-85dea0fd5ed5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.024584544s
STEP: Saw pod success
Dec 24 13:54:51.915: INFO: Pod "pod-configmaps-057aad5d-583e-4ea8-99df-85dea0fd5ed5" satisfied condition "success or failure"
Dec 24 13:54:51.926: INFO: Trying to get logs from node iruya-node pod pod-configmaps-057aad5d-583e-4ea8-99df-85dea0fd5ed5 container configmap-volume-test: 
STEP: delete the pod
Dec 24 13:54:52.109: INFO: Waiting for pod pod-configmaps-057aad5d-583e-4ea8-99df-85dea0fd5ed5 to disappear
Dec 24 13:54:52.139: INFO: Pod pod-configmaps-057aad5d-583e-4ea8-99df-85dea0fd5ed5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:54:52.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7540" for this suite.
Dec 24 13:54:58.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:54:58.283: INFO: namespace configmap-7540 deletion completed in 6.130293765s

• [SLOW TEST:19.532 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
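A minimal sketch of the mappings-as-non-root case (key name, paths, and UID are illustrative):

kubectl create configmap configmap-test-volume-map --from-literal=data-2=value-2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-mapped-nonroot
spec:
  securityContext:
    runAsUser: 1000            # non-root consumer
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                   # the "mappings": project one key to a chosen path
      - key: data-2
        path: path/to/data-2
EOF
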
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:54:58.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-6858af9c-5933-48e3-9816-b0c78ee9adda
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-6858af9c-5933-48e3-9816-b0c78ee9adda
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:55:10.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4989" for this suite.
Dec 24 13:55:32.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:55:32.831: INFO: namespace configmap-4989 deletion completed in 22.15657216s

• [SLOW TEST:34.549 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
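The update check above can be reproduced by hand roughly like this (a sketch; how long the new value takes to appear depends on the kubelet sync period, which is why the spec spends time in "waiting to observe update in volume"):

kubectl create configmap configmap-test-upd --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-upd
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 2; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-test-upd
EOF
kubectl patch configmap configmap-test-upd -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f pod-configmap-upd        # eventually switches from value-1 to value-2
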
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:55:32.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 13:55:32.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2114'
Dec 24 13:55:35.247: INFO: stderr: ""
Dec 24 13:55:35.247: INFO: stdout: "replicationcontroller/redis-master created\n"
Dec 24 13:55:35.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2114'
Dec 24 13:55:35.740: INFO: stderr: ""
Dec 24 13:55:35.740: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 24 13:55:36.794: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 13:55:36.794: INFO: Found 0 / 1
Dec 24 13:55:37.755: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 13:55:37.755: INFO: Found 0 / 1
Dec 24 13:55:38.763: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 13:55:38.763: INFO: Found 0 / 1
Dec 24 13:55:39.749: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 13:55:39.749: INFO: Found 0 / 1
Dec 24 13:55:40.750: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 13:55:40.750: INFO: Found 0 / 1
Dec 24 13:55:41.822: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 13:55:41.823: INFO: Found 0 / 1
Dec 24 13:55:42.799: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 13:55:42.799: INFO: Found 0 / 1
Dec 24 13:55:43.750: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 13:55:43.750: INFO: Found 0 / 1
Dec 24 13:55:44.749: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 13:55:44.749: INFO: Found 1 / 1
Dec 24 13:55:44.749: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 24 13:55:44.755: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 13:55:44.755: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 24 13:55:44.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-vr5d4 --namespace=kubectl-2114'
Dec 24 13:55:44.966: INFO: stderr: ""
Dec 24 13:55:44.966: INFO: stdout: "Name:           redis-master-vr5d4\nNamespace:      kubectl-2114\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Tue, 24 Dec 2019 13:55:35 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://fcd438c6dd85c145da39be1aaa518fc040d920a05381886b0a5751bddba18741\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 24 Dec 2019 13:55:42 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n8d8d (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-n8d8d:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-n8d8d\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  9s    default-scheduler    Successfully assigned kubectl-2114/redis-master-vr5d4 to iruya-node\n  Normal  Pulled     5s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Dec 24 13:55:44.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2114'
Dec 24 13:55:45.118: INFO: stderr: ""
Dec 24 13:55:45.118: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-2114\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  10s   replication-controller  Created pod: redis-master-vr5d4\n"
Dec 24 13:55:45.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2114'
Dec 24 13:55:45.254: INFO: stderr: ""
Dec 24 13:55:45.254: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-2114\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.108.83.210\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Dec 24 13:55:45.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Dec 24 13:55:45.421: INFO: stderr: ""
Dec 24 13:55:45.421: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Tue, 24 Dec 2019 13:54:53 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Tue, 24 Dec 2019 13:54:53 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Tue, 24 Dec 2019 13:54:53 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Tue, 24 Dec 2019 13:54:53 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         142d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         73d\n  kubectl-2114               redis-master-vr5d4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  
ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Dec 24 13:55:45.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2114'
Dec 24 13:55:45.573: INFO: stderr: ""
Dec 24 13:55:45.573: INFO: stdout: "Name:         kubectl-2114\nLabels:       e2e-framework=kubectl\n              e2e-run=6726ca72-d209-4ad3-becc-472ec83926f1\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:55:45.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2114" for this suite.
Dec 24 13:56:07.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:56:07.723: INFO: namespace kubectl-2114 deletion completed in 22.14621306s

• [SLOW TEST:34.891 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
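The inspection sequence this spec runs, stripped of the test harness (commands taken directly from the Running lines above):

kubectl describe pod redis-master-vr5d4 --namespace=kubectl-2114
kubectl describe rc redis-master --namespace=kubectl-2114
kubectl describe service redis-master --namespace=kubectl-2114
kubectl describe node iruya-node
kubectl describe namespace kubectl-2114
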
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:56:07.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1224 13:56:17.888526       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 13:56:17.888: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:56:17.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4734" for this suite.
Dec 24 13:56:23.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:56:24.055: INFO: namespace gc-4734 deletion completed in 6.16132888s

• [SLOW TEST:16.332 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:56:24.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Dec 24 13:56:24.169: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Dec 24 13:56:24.671: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Dec 24 13:56:27.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712792584, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712792584, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712792584, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712792584, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
[... the same v1.DeploymentStatus (Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, UnavailableReplicas:1; Conditions Available=False/MinimumReplicasUnavailable, Progressing=True/ReplicaSetUpdated) logged again at 13:56:29.043, 13:56:31.046, 13:56:33.040, and 13:56:35.040 ...]
Dec 24 13:56:40.872: INFO: Waited 3.81160168s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:56:42.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2116" for this suite.
Dec 24 13:56:48.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:56:48.274: INFO: namespace aggregator-2116 deletion completed in 6.133634631s

• [SLOW TEST:24.219 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
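Registering an aggregated API like the sample above comes down to an APIService object pointing at a Service in front of the extension apiserver. A hypothetical sketch (group, names, and namespace are illustrative, and TLS verification is skipped for brevity):

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io
spec:
  group: wardle.k8s.io
  version: v1alpha1
  service:                       # the Service fronting the sample apiserver pods
    name: sample-api
    namespace: aggregator-2116
  insecureSkipTLSVerify: true    # a real setup would set caBundle instead
  groupPriorityMinimum: 2000
  versionPriority: 200
EOF
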
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:56:48.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-bc9f65b3-7735-47f4-a1de-899d7ae1f745
STEP: Creating a pod to test consume configMaps
Dec 24 13:56:48.346: INFO: Waiting up to 5m0s for pod "pod-configmaps-afd560fc-0e98-49e1-a7ca-2f47f5ac5217" in namespace "configmap-2434" to be "success or failure"
Dec 24 13:56:48.355: INFO: Pod "pod-configmaps-afd560fc-0e98-49e1-a7ca-2f47f5ac5217": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194502ms
Dec 24 13:56:50.364: INFO: Pod "pod-configmaps-afd560fc-0e98-49e1-a7ca-2f47f5ac5217": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0177828s
Dec 24 13:56:52.469: INFO: Pod "pod-configmaps-afd560fc-0e98-49e1-a7ca-2f47f5ac5217": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122484322s
Dec 24 13:56:54.483: INFO: Pod "pod-configmaps-afd560fc-0e98-49e1-a7ca-2f47f5ac5217": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136406669s
Dec 24 13:56:56.500: INFO: Pod "pod-configmaps-afd560fc-0e98-49e1-a7ca-2f47f5ac5217": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.153121728s
STEP: Saw pod success
Dec 24 13:56:56.500: INFO: Pod "pod-configmaps-afd560fc-0e98-49e1-a7ca-2f47f5ac5217" satisfied condition "success or failure"
Dec 24 13:56:56.509: INFO: Trying to get logs from node iruya-node pod pod-configmaps-afd560fc-0e98-49e1-a7ca-2f47f5ac5217 container configmap-volume-test: 
STEP: delete the pod
Dec 24 13:56:56.784: INFO: Waiting for pod pod-configmaps-afd560fc-0e98-49e1-a7ca-2f47f5ac5217 to disappear
Dec 24 13:56:56.793: INFO: Pod pod-configmaps-afd560fc-0e98-49e1-a7ca-2f47f5ac5217 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:56:56.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2434" for this suite.
Dec 24 13:57:02.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:57:02.997: INFO: namespace configmap-2434 deletion completed in 6.199128562s

• [SLOW TEST:14.723 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
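The "Item mode set" variant differs from the earlier mappings sketch only in the per-item file mode; a sketch under the same assumptions, reusing that configmap:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-item-mode
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2
        path: path/to/data-2
        mode: 0400               # per-item mode; shows up as -r-------- in ls -l
EOF
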
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:57:02.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-89df50ec-dd39-445f-a6f8-6acc15653e88
STEP: Creating a pod to test consume configMaps
Dec 24 13:57:03.158: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-19b73f30-9b5f-4466-8480-5c8f38215ec5" in namespace "projected-2643" to be "success or failure"
Dec 24 13:57:03.188: INFO: Pod "pod-projected-configmaps-19b73f30-9b5f-4466-8480-5c8f38215ec5": Phase="Pending", Reason="", readiness=false. Elapsed: 29.552124ms
Dec 24 13:57:05.199: INFO: Pod "pod-projected-configmaps-19b73f30-9b5f-4466-8480-5c8f38215ec5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040854375s
Dec 24 13:57:07.206: INFO: Pod "pod-projected-configmaps-19b73f30-9b5f-4466-8480-5c8f38215ec5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047946728s
Dec 24 13:57:09.220: INFO: Pod "pod-projected-configmaps-19b73f30-9b5f-4466-8480-5c8f38215ec5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06200883s
Dec 24 13:57:11.232: INFO: Pod "pod-projected-configmaps-19b73f30-9b5f-4466-8480-5c8f38215ec5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073859155s
STEP: Saw pod success
Dec 24 13:57:11.232: INFO: Pod "pod-projected-configmaps-19b73f30-9b5f-4466-8480-5c8f38215ec5" satisfied condition "success or failure"
Dec 24 13:57:11.237: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-19b73f30-9b5f-4466-8480-5c8f38215ec5 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 24 13:57:11.301: INFO: Waiting for pod pod-projected-configmaps-19b73f30-9b5f-4466-8480-5c8f38215ec5 to disappear
Dec 24 13:57:11.314: INFO: Pod pod-projected-configmaps-19b73f30-9b5f-4466-8480-5c8f38215ec5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:57:11.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2643" for this suite.
Dec 24 13:57:17.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:57:17.584: INFO: namespace projected-2643 deletion completed in 6.262065778s

• [SLOW TEST:14.586 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
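A sketch of the projected variant: the same key-to-path mapping as the plain configMap tests, but delivered through a `projected` volume with a configMap source (names and image are illustrative):

kubectl create configmap projected-configmap-test-volume-map --from-literal=data-2=value-2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/path/to/data-2"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:                   # projected volumes can merge several sources
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-2
            path: path/to/data-2
EOF
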
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:57:17.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 24 13:57:17.666: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 24 13:57:17.709: INFO: Waiting for terminating namespaces to be deleted...
Dec 24 13:57:17.712: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 24 13:57:17.724: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 24 13:57:17.724: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 24 13:57:17.724: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 24 13:57:17.724: INFO: 	Container weave ready: true, restart count 0
Dec 24 13:57:17.724: INFO: 	Container weave-npc ready: true, restart count 0
Dec 24 13:57:17.724: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 24 13:57:17.735: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 24 13:57:17.735: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 24 13:57:17.735: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 24 13:57:17.735: INFO: 	Container coredns ready: true, restart count 0
Dec 24 13:57:17.735: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 24 13:57:17.735: INFO: 	Container etcd ready: true, restart count 0
Dec 24 13:57:17.735: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 24 13:57:17.735: INFO: 	Container weave ready: true, restart count 0
Dec 24 13:57:17.735: INFO: 	Container weave-npc ready: true, restart count 0
Dec 24 13:57:17.735: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 24 13:57:17.735: INFO: 	Container coredns ready: true, restart count 0
Dec 24 13:57:17.735: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 24 13:57:17.735: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 24 13:57:17.735: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 24 13:57:17.735: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 24 13:57:17.735: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 24 13:57:17.735: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e35342132782f3], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:57:18.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6159" for this suite.
Dec 24 13:57:24.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:57:24.954: INFO: namespace sched-pred-6159 deletion completed in 6.180374671s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.370 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
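A hypothetical pod equivalent to restricted-pod above: its nodeSelector matches no node label, so the scheduler emits the FailedScheduling event quoted in the log (selector key/value and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonempty              # no node carries this label, so scheduling fails
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get events --field-selector reason=FailedScheduling
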
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:57:24.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6b167b2d-34c1-4bbd-a551-fd512ecfa79e
STEP: Creating a pod to test consume configMaps
Dec 24 13:57:25.095: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4ce92b36-b446-484a-a84d-52f54e8c1019" in namespace "projected-1012" to be "success or failure"
Dec 24 13:57:25.107: INFO: Pod "pod-projected-configmaps-4ce92b36-b446-484a-a84d-52f54e8c1019": Phase="Pending", Reason="", readiness=false. Elapsed: 12.446073ms
Dec 24 13:57:27.118: INFO: Pod "pod-projected-configmaps-4ce92b36-b446-484a-a84d-52f54e8c1019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02317413s
Dec 24 13:57:29.134: INFO: Pod "pod-projected-configmaps-4ce92b36-b446-484a-a84d-52f54e8c1019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039685898s
Dec 24 13:57:31.146: INFO: Pod "pod-projected-configmaps-4ce92b36-b446-484a-a84d-52f54e8c1019": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051807289s
Dec 24 13:57:33.152: INFO: Pod "pod-projected-configmaps-4ce92b36-b446-484a-a84d-52f54e8c1019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057843807s
STEP: Saw pod success
Dec 24 13:57:33.152: INFO: Pod "pod-projected-configmaps-4ce92b36-b446-484a-a84d-52f54e8c1019" satisfied condition "success or failure"
Dec 24 13:57:33.156: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-4ce92b36-b446-484a-a84d-52f54e8c1019 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 24 13:57:33.211: INFO: Waiting for pod pod-projected-configmaps-4ce92b36-b446-484a-a84d-52f54e8c1019 to disappear
Dec 24 13:57:33.217: INFO: Pod pod-projected-configmaps-4ce92b36-b446-484a-a84d-52f54e8c1019 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:57:33.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1012" for this suite.
Dec 24 13:57:39.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:57:39.370: INFO: namespace projected-1012 deletion completed in 6.147708717s

• [SLOW TEST:14.415 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
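The pod shape behind the projected-ConfigMap spec above: a ConfigMap served through a projected volume, with the pod running as a non-root UID so the mounted files must still be readable. A hedged sketch; the names, UID, image, and command are assumptions:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var nonRootUID int64 = 1000 // assumption; the run does not show the UID

    var projectedPod = &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
        Spec: v1.PodSpec{
            SecurityContext: &v1.PodSecurityContext{RunAsUser: &nonRootUID},
            Volumes: []v1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: v1.VolumeSource{
                    Projected: &v1.ProjectedVolumeSource{
                        Sources: []v1.VolumeProjection{{
                            ConfigMap: &v1.ConfigMapProjection{
                                LocalObjectReference: v1.LocalObjectReference{Name: "projected-configmap-test-volume"},
                            },
                        }},
                    },
                },
            }},
            Containers: []v1.Container{{
                Name:         "projected-configmap-volume-test",
                Image:        "busybox",
                Command:      []string{"cat", "/etc/projected-cm/data-1"},
                VolumeMounts: []v1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-cm"}},
            }},
            RestartPolicy: v1.RestartPolicyNever,
        },
    }

    func main() { _ = projectedPod }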
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:57:39.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:57:50.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5765" for this suite.
Dec 24 13:58:12.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:58:12.735: INFO: namespace replication-controller-5765 deletion completed in 22.198999197s

• [SLOW TEST:33.365 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
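The Given/When/Then steps above map onto two objects: a bare pod carrying a 'name' label, then a ReplicationController whose selector matches that label. Because a matching pod already exists, the controller adopts it (takes ownership) instead of creating a second replica. A sketch with illustrative names and image:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var one int32 = 1

    // The orphan pod created first.
    var orphanPod = &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "pod-adoption",
            Labels: map[string]string{"name": "pod-adoption"},
        },
        Spec: v1.PodSpec{Containers: []v1.Container{{Name: "pod-adoption", Image: "nginx:1.14-alpine"}}},
    }

    // The RC whose selector matches the orphan's label.
    var rc = &v1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
        Spec: v1.ReplicationControllerSpec{
            Replicas: &one,
            Selector: map[string]string{"name": "pod-adoption"},
            Template: &v1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "pod-adoption"}},
                Spec:       orphanPod.Spec,
            },
        },
    }

    func main() { _, _ = orphanPod, rc }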
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:58:12.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 24 13:58:12.919: INFO: Waiting up to 5m0s for pod "pod-c5e69716-c476-47a3-af19-ea3db515f6f8" in namespace "emptydir-2155" to be "success or failure"
Dec 24 13:58:12.924: INFO: Pod "pod-c5e69716-c476-47a3-af19-ea3db515f6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.268152ms
Dec 24 13:58:14.931: INFO: Pod "pod-c5e69716-c476-47a3-af19-ea3db515f6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01208224s
Dec 24 13:58:16.941: INFO: Pod "pod-c5e69716-c476-47a3-af19-ea3db515f6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022837235s
Dec 24 13:58:18.948: INFO: Pod "pod-c5e69716-c476-47a3-af19-ea3db515f6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029489413s
Dec 24 13:58:20.954: INFO: Pod "pod-c5e69716-c476-47a3-af19-ea3db515f6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035751614s
Dec 24 13:58:22.962: INFO: Pod "pod-c5e69716-c476-47a3-af19-ea3db515f6f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.042898286s
STEP: Saw pod success
Dec 24 13:58:22.962: INFO: Pod "pod-c5e69716-c476-47a3-af19-ea3db515f6f8" satisfied condition "success or failure"
Dec 24 13:58:22.971: INFO: Trying to get logs from node iruya-node pod pod-c5e69716-c476-47a3-af19-ea3db515f6f8 container test-container: 
STEP: delete the pod
Dec 24 13:58:23.083: INFO: Waiting for pod pod-c5e69716-c476-47a3-af19-ea3db515f6f8 to disappear
Dec 24 13:58:23.093: INFO: Pod pod-c5e69716-c476-47a3-af19-ea3db515f6f8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:58:23.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2155" for this suite.
Dec 24 13:58:29.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:58:29.283: INFO: namespace emptydir-2155 deletion completed in 6.182632686s

• [SLOW TEST:16.547 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
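The '(non-root,0644,default)' triple in the spec name encodes the test matrix: run as a non-root user, expect file mode 0644, back the emptyDir with the node's default (disk) medium. A sketch of such a pod, assuming an illustrative UID, image, and command (the real suite uses a dedicated mount-test image):

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var nonRootUID int64 = 1001 // assumption

    var emptyDirPod = &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644"},
        Spec: v1.PodSpec{
            SecurityContext: &v1.PodSecurityContext{RunAsUser: &nonRootUID},
            Volumes: []v1.Volume{{
                Name:         "test-volume",
                VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumDefault}},
            }},
            Containers: []v1.Container{{
                Name:         "test-container",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "touch /test/f && chmod 0644 /test/f && stat -c %a /test/f"},
                VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
            }},
            RestartPolicy: v1.RestartPolicyNever,
        },
    }

    func main() { _ = emptyDirPod }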
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:58:29.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 13:58:29.537: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 24 13:58:29.600: INFO: Number of nodes with available pods: 0
Dec 24 13:58:29.600: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:58:30.619: INFO: Number of nodes with available pods: 0
Dec 24 13:58:30.619: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:58:31.713: INFO: Number of nodes with available pods: 0
Dec 24 13:58:31.713: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:58:32.623: INFO: Number of nodes with available pods: 0
Dec 24 13:58:32.623: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:58:33.621: INFO: Number of nodes with available pods: 0
Dec 24 13:58:33.621: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:58:34.627: INFO: Number of nodes with available pods: 0
Dec 24 13:58:34.627: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:58:36.587: INFO: Number of nodes with available pods: 0
Dec 24 13:58:36.587: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:58:37.120: INFO: Number of nodes with available pods: 0
Dec 24 13:58:37.120: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:58:37.621: INFO: Number of nodes with available pods: 0
Dec 24 13:58:37.621: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:58:38.657: INFO: Number of nodes with available pods: 0
Dec 24 13:58:38.657: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:58:39.619: INFO: Number of nodes with available pods: 0
Dec 24 13:58:39.619: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:58:40.613: INFO: Number of nodes with available pods: 2
Dec 24 13:58:40.613: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 24 13:58:40.680: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:40.681: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:41.697: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:41.697: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:42.702: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:42.702: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:43.700: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:43.700: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:44.693: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:44.693: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:45.742: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:45.742: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:46.698: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:46.698: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:47.694: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:47.694: INFO: Pod daemon-set-7bmkm is not available
Dec 24 13:58:47.694: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:48.696: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:48.696: INFO: Pod daemon-set-7bmkm is not available
Dec 24 13:58:48.696: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:49.696: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:49.696: INFO: Pod daemon-set-7bmkm is not available
Dec 24 13:58:49.696: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:50.693: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:50.693: INFO: Pod daemon-set-7bmkm is not available
Dec 24 13:58:50.693: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:51.696: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:51.696: INFO: Pod daemon-set-7bmkm is not available
Dec 24 13:58:51.696: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:52.696: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:52.696: INFO: Pod daemon-set-7bmkm is not available
Dec 24 13:58:52.696: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:53.693: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:53.693: INFO: Pod daemon-set-7bmkm is not available
Dec 24 13:58:53.693: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:54.695: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:54.695: INFO: Pod daemon-set-7bmkm is not available
Dec 24 13:58:54.695: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:55.694: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:55.694: INFO: Pod daemon-set-7bmkm is not available
Dec 24 13:58:55.694: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:56.692: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:56.692: INFO: Pod daemon-set-7bmkm is not available
Dec 24 13:58:56.692: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:57.705: INFO: Wrong image for pod: daemon-set-7bmkm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:57.705: INFO: Pod daemon-set-7bmkm is not available
Dec 24 13:58:57.705: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:58.705: INFO: Pod daemon-set-mvtv9 is not available
Dec 24 13:58:58.705: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:58:59.695: INFO: Pod daemon-set-mvtv9 is not available
Dec 24 13:58:59.695: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:59:00.697: INFO: Pod daemon-set-mvtv9 is not available
Dec 24 13:59:00.697: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:59:02.531: INFO: Pod daemon-set-mvtv9 is not available
Dec 24 13:59:02.531: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:59:03.841: INFO: Pod daemon-set-mvtv9 is not available
Dec 24 13:59:03.841: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:59:04.792: INFO: Pod daemon-set-mvtv9 is not available
Dec 24 13:59:04.792: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:59:05.697: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:59:06.701: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:59:07.697: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:59:08.759: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:59:09.698: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:59:09.698: INFO: Pod daemon-set-xmlrb is not available
Dec 24 13:59:10.761: INFO: Wrong image for pod: daemon-set-xmlrb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 24 13:59:10.761: INFO: Pod daemon-set-xmlrb is not available
Dec 24 13:59:11.701: INFO: Pod daemon-set-br8jh is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 24 13:59:11.721: INFO: Number of nodes with available pods: 1
Dec 24 13:59:11.721: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:59:12.736: INFO: Number of nodes with available pods: 1
Dec 24 13:59:12.736: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:59:13.741: INFO: Number of nodes with available pods: 1
Dec 24 13:59:13.741: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:59:14.736: INFO: Number of nodes with available pods: 1
Dec 24 13:59:14.736: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:59:15.739: INFO: Number of nodes with available pods: 1
Dec 24 13:59:15.739: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:59:16.735: INFO: Number of nodes with available pods: 1
Dec 24 13:59:16.735: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:59:17.735: INFO: Number of nodes with available pods: 1
Dec 24 13:59:17.735: INFO: Node iruya-node is running more than one daemon pod
Dec 24 13:59:18.736: INFO: Number of nodes with available pods: 2
Dec 24 13:59:18.736: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8370, will wait for the garbage collector to delete the pods
Dec 24 13:59:18.829: INFO: Deleting DaemonSet.extensions daemon-set took: 14.538217ms
Dec 24 13:59:19.230: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.69984ms
Dec 24 13:59:36.636: INFO: Number of nodes with available pods: 0
Dec 24 13:59:36.636: INFO: Number of running nodes: 0, number of available pods: 0
Dec 24 13:59:36.640: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8370/daemonsets","resourceVersion":"17897290"},"items":null}

Dec 24 13:59:36.699: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8370/pods","resourceVersion":"17897290"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:59:36.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8370" for this suite.
Dec 24 13:59:42.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 13:59:42.912: INFO: namespace daemonsets-8370 deletion completed in 6.18525117s

• [SLOW TEST:73.629 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
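The churn above (pods going not-available one at a time, replacements appearing, then both nodes converging) is the RollingUpdate strategy at work: the test patches the pod template's image from nginx:1.14-alpine to the redis test image, and the controller replaces daemon pods node by node. The DaemonSet shape, with illustrative labels and container name:

    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var dsLabels = map[string]string{"daemonset-name": "daemon-set"}

    var ds = &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector:       &metav1.LabelSelector{MatchLabels: dsLabels},
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
            Template: v1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: dsLabels},
                Spec: v1.PodSpec{Containers: []v1.Container{{
                    Name:  "app", // illustrative
                    Image: "docker.io/library/nginx:1.14-alpine", // later patched to the redis test image
                }}},
            },
        },
    }

    func main() { _ = ds }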
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 13:59:42.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-07431341-6953-4086-bfef-37864aee9087
STEP: Creating a pod to test consume secrets
Dec 24 13:59:43.039: INFO: Waiting up to 5m0s for pod "pod-secrets-2a29ee56-62f1-4d29-ac8f-9c3325b0824c" in namespace "secrets-8509" to be "success or failure"
Dec 24 13:59:43.055: INFO: Pod "pod-secrets-2a29ee56-62f1-4d29-ac8f-9c3325b0824c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.735971ms
Dec 24 13:59:45.064: INFO: Pod "pod-secrets-2a29ee56-62f1-4d29-ac8f-9c3325b0824c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024589886s
Dec 24 13:59:47.075: INFO: Pod "pod-secrets-2a29ee56-62f1-4d29-ac8f-9c3325b0824c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035590733s
Dec 24 13:59:49.114: INFO: Pod "pod-secrets-2a29ee56-62f1-4d29-ac8f-9c3325b0824c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074158528s
Dec 24 13:59:51.126: INFO: Pod "pod-secrets-2a29ee56-62f1-4d29-ac8f-9c3325b0824c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086245965s
Dec 24 13:59:53.139: INFO: Pod "pod-secrets-2a29ee56-62f1-4d29-ac8f-9c3325b0824c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.099948675s
STEP: Saw pod success
Dec 24 13:59:53.140: INFO: Pod "pod-secrets-2a29ee56-62f1-4d29-ac8f-9c3325b0824c" satisfied condition "success or failure"
Dec 24 13:59:53.146: INFO: Trying to get logs from node iruya-node pod pod-secrets-2a29ee56-62f1-4d29-ac8f-9c3325b0824c container secret-volume-test: 
STEP: delete the pod
Dec 24 13:59:53.964: INFO: Waiting for pod pod-secrets-2a29ee56-62f1-4d29-ac8f-9c3325b0824c to disappear
Dec 24 13:59:53.988: INFO: Pod pod-secrets-2a29ee56-62f1-4d29-ac8f-9c3325b0824c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 13:59:53.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8509" for this suite.
Dec 24 14:00:00.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:00:00.178: INFO: namespace secrets-8509 deletion completed in 6.164443383s

• [SLOW TEST:17.265 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
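'Mappings and Item Mode' means the secret is not mounted wholesale: an items list remaps a key to a new path and pins a per-file mode. A sketch, where the mode 0400, key, and paths are assumptions (the run does not print them):

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var itemMode int32 = 0400 // assumption

    var secretPod = &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
        Spec: v1.PodSpec{
            Volumes: []v1.Volume{{
                Name: "secret-volume",
                VolumeSource: v1.VolumeSource{Secret: &v1.SecretVolumeSource{
                    SecretName: "secret-test-map",
                    Items:      []v1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &itemMode}},
                }},
            }},
            Containers: []v1.Container{{
                Name:         "secret-volume-test",
                Image:        "busybox",
                Command:      []string{"cat", "/etc/secret-volume/new-path-data-1"},
                VolumeMounts: []v1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
            }},
            RestartPolicy: v1.RestartPolicyNever,
        },
    }

    func main() { _ = secretPod }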
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:00:00.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 24 14:00:16.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:16.423: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:18.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:18.492: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:20.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:20.439: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:22.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:22.461: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:24.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:24.432: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:26.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:26.453: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:28.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:28.437: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:30.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:30.432: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:32.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:32.439: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:34.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:34.434: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:36.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:36.447: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:38.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:38.440: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:40.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:40.434: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:42.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:42.433: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:44.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:44.469: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:46.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:46.496: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 24 14:00:48.424: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 24 14:00:48.456: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:00:48.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1692" for this suite.
Dec 24 14:01:10.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:01:10.730: INFO: namespace container-lifecycle-hook-1692 deletion completed in 22.164479662s

• [SLOW TEST:70.551 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
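The long disappear-poll above is expected: deleting a pod with a preStop exec hook gives the kubelet the termination grace period to run the hook before the container stops, and the test then asks the handler pod whether the hook's request arrived. The relevant container shape, sketched; the command and names are illustrative, and v1.Handler is the 1.15-era type (renamed LifecycleHandler in later releases):

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var hookPod = &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:  "pod-with-prestop-exec-hook",
                Image: "nginx:1.14-alpine", // illustrative
                Lifecycle: &v1.Lifecycle{
                    PreStop: &v1.Handler{
                        // Assumption: ping the hook-handler pod created in BeforeEach.
                        Exec: &v1.ExecAction{Command: []string{"sh", "-c", "wget -q -O- http://hook-handler:8080/echo?msg=prestop"}},
                    },
                },
            }},
        },
    }

    func main() { _ = hookPod }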
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:01:10.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 24 14:01:10.808: INFO: Waiting up to 5m0s for pod "pod-5555d8a0-122d-4c62-bc1b-8e14b3d2bfbe" in namespace "emptydir-2303" to be "success or failure"
Dec 24 14:01:10.872: INFO: Pod "pod-5555d8a0-122d-4c62-bc1b-8e14b3d2bfbe": Phase="Pending", Reason="", readiness=false. Elapsed: 63.432804ms
Dec 24 14:01:12.886: INFO: Pod "pod-5555d8a0-122d-4c62-bc1b-8e14b3d2bfbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077174141s
Dec 24 14:01:14.910: INFO: Pod "pod-5555d8a0-122d-4c62-bc1b-8e14b3d2bfbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101697952s
Dec 24 14:01:16.919: INFO: Pod "pod-5555d8a0-122d-4c62-bc1b-8e14b3d2bfbe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110213058s
Dec 24 14:01:18.929: INFO: Pod "pod-5555d8a0-122d-4c62-bc1b-8e14b3d2bfbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.120237132s
STEP: Saw pod success
Dec 24 14:01:18.929: INFO: Pod "pod-5555d8a0-122d-4c62-bc1b-8e14b3d2bfbe" satisfied condition "success or failure"
Dec 24 14:01:18.932: INFO: Trying to get logs from node iruya-node pod pod-5555d8a0-122d-4c62-bc1b-8e14b3d2bfbe container test-container: 
STEP: delete the pod
Dec 24 14:01:19.069: INFO: Waiting for pod pod-5555d8a0-122d-4c62-bc1b-8e14b3d2bfbe to disappear
Dec 24 14:01:19.075: INFO: Pod pod-5555d8a0-122d-4c62-bc1b-8e14b3d2bfbe no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:01:19.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2303" for this suite.
Dec 24 14:01:25.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:01:25.263: INFO: namespace emptydir-2303 deletion completed in 6.180900243s

• [SLOW TEST:14.533 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
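The (root,0666,tmpfs) case differs from the earlier emptyDir sketch in two knobs only: no RunAsUser (so the container runs as root) and a Memory medium, which makes the kubelet back the volume with tmpfs:

    package main

    import v1 "k8s.io/api/core/v1"

    // RAM-backed emptyDir; everything else matches the earlier sketch,
    // with the expected file mode changed to 0666.
    var tmpfsVolume = v1.VolumeSource{
        EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
    }

    func main() { _ = tmpfsVolume }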
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:01:25.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 24 14:01:25.461: INFO: Number of nodes with available pods: 0
Dec 24 14:01:25.461: INFO: Node iruya-node is running more than one daemon pod
Dec 24 14:01:26.488: INFO: Number of nodes with available pods: 0
Dec 24 14:01:26.488: INFO: Node iruya-node is running more than one daemon pod
Dec 24 14:01:27.549: INFO: Number of nodes with available pods: 0
Dec 24 14:01:27.549: INFO: Node iruya-node is running more than one daemon pod
Dec 24 14:01:28.491: INFO: Number of nodes with available pods: 0
Dec 24 14:01:28.491: INFO: Node iruya-node is running more than one daemon pod
Dec 24 14:01:29.487: INFO: Number of nodes with available pods: 0
Dec 24 14:01:29.487: INFO: Node iruya-node is running more than one daemon pod
Dec 24 14:01:31.848: INFO: Number of nodes with available pods: 0
Dec 24 14:01:31.848: INFO: Node iruya-node is running more than one daemon pod
Dec 24 14:01:32.495: INFO: Number of nodes with available pods: 0
Dec 24 14:01:32.495: INFO: Node iruya-node is running more than one daemon pod
Dec 24 14:01:33.480: INFO: Number of nodes with available pods: 0
Dec 24 14:01:33.480: INFO: Node iruya-node is running more than one daemon pod
Dec 24 14:01:34.484: INFO: Number of nodes with available pods: 0
Dec 24 14:01:34.484: INFO: Node iruya-node is running more than one daemon pod
Dec 24 14:01:35.480: INFO: Number of nodes with available pods: 1
Dec 24 14:01:35.480: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 14:01:36.493: INFO: Number of nodes with available pods: 2
Dec 24 14:01:36.493: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 24 14:01:36.583: INFO: Number of nodes with available pods: 1
Dec 24 14:01:36.583: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 14:01:37.694: INFO: Number of nodes with available pods: 1
Dec 24 14:01:37.694: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 14:01:38.619: INFO: Number of nodes with available pods: 1
Dec 24 14:01:38.619: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 14:01:39.595: INFO: Number of nodes with available pods: 1
Dec 24 14:01:39.595: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 14:01:40.610: INFO: Number of nodes with available pods: 1
Dec 24 14:01:40.610: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 14:01:41.598: INFO: Number of nodes with available pods: 1
Dec 24 14:01:41.598: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 14:01:42.806: INFO: Number of nodes with available pods: 1
Dec 24 14:01:42.806: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 14:01:43.605: INFO: Number of nodes with available pods: 1
Dec 24 14:01:43.605: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 14:01:44.616: INFO: Number of nodes with available pods: 1
Dec 24 14:01:44.616: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 14:01:45.604: INFO: Number of nodes with available pods: 2
Dec 24 14:01:45.604: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-792, will wait for the garbage collector to delete the pods
Dec 24 14:01:45.678: INFO: Deleting DaemonSet.extensions daemon-set took: 10.981253ms
Dec 24 14:01:45.978: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.370847ms
Dec 24 14:01:57.895: INFO: Number of nodes with available pods: 0
Dec 24 14:01:57.895: INFO: Number of running nodes: 0, number of available pods: 0
Dec 24 14:01:57.912: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-792/daemonsets","resourceVersion":"17897656"},"items":null}

Dec 24 14:01:57.919: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-792/pods","resourceVersion":"17897656"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:01:57.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-792" for this suite.
Dec 24 14:02:03.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:02:04.080: INFO: namespace daemonsets-792 deletion completed in 6.143322366s

• [SLOW TEST:38.815 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
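The "Set a daemon pod's phase to 'Failed'" step is fault injection through the pod status subresource; the DaemonSet controller notices the failed pod, deletes it, and creates a replacement, which is the revival the poll above waits for. A sketch against the 1.15-era client-go signature (later releases add context and options arguments):

    package main

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    // Mark one daemon pod Failed so the controller must recreate it.
    func failDaemonPod(c kubernetes.Interface, ns string, pod *v1.Pod) error {
        pod.Status.Phase = v1.PodFailed
        _, err := c.CoreV1().Pods(ns).UpdateStatus(pod)
        return err
    }

    func main() {}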
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:02:04.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-3ffe5bf4-7a53-41df-af0f-48066c8fbb21
STEP: Creating a pod to test consume configMaps
Dec 24 14:02:04.255: INFO: Waiting up to 5m0s for pod "pod-configmaps-b9507ae6-9cc0-4d65-8885-2db2a73d90fc" in namespace "configmap-9056" to be "success or failure"
Dec 24 14:02:04.260: INFO: Pod "pod-configmaps-b9507ae6-9cc0-4d65-8885-2db2a73d90fc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.223458ms
Dec 24 14:02:06.270: INFO: Pod "pod-configmaps-b9507ae6-9cc0-4d65-8885-2db2a73d90fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014456862s
Dec 24 14:02:08.283: INFO: Pod "pod-configmaps-b9507ae6-9cc0-4d65-8885-2db2a73d90fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027579992s
Dec 24 14:02:10.295: INFO: Pod "pod-configmaps-b9507ae6-9cc0-4d65-8885-2db2a73d90fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040404105s
Dec 24 14:02:12.340: INFO: Pod "pod-configmaps-b9507ae6-9cc0-4d65-8885-2db2a73d90fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.084542901s
STEP: Saw pod success
Dec 24 14:02:12.340: INFO: Pod "pod-configmaps-b9507ae6-9cc0-4d65-8885-2db2a73d90fc" satisfied condition "success or failure"
Dec 24 14:02:12.410: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b9507ae6-9cc0-4d65-8885-2db2a73d90fc container configmap-volume-test: 
STEP: delete the pod
Dec 24 14:02:12.492: INFO: Waiting for pod pod-configmaps-b9507ae6-9cc0-4d65-8885-2db2a73d90fc to disappear
Dec 24 14:02:12.497: INFO: Pod pod-configmaps-b9507ae6-9cc0-4d65-8885-2db2a73d90fc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:02:12.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9056" for this suite.
Dec 24 14:02:18.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:02:18.846: INFO: namespace configmap-9056 deletion completed in 6.338295882s

• [SLOW TEST:14.766 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
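Consuming one ConfigMap "in multiple volumes in the same pod" means two volume entries referencing the same map, mounted at two paths that must serve identical data. A sketch with illustrative names:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var cmRef = v1.LocalObjectReference{Name: "configmap-test-volume"}

    var cmPod = &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
        Spec: v1.PodSpec{
            Volumes: []v1.Volume{
                {Name: "cm-vol-1", VolumeSource: v1.VolumeSource{ConfigMap: &v1.ConfigMapVolumeSource{LocalObjectReference: cmRef}}},
                {Name: "cm-vol-2", VolumeSource: v1.VolumeSource{ConfigMap: &v1.ConfigMapVolumeSource{LocalObjectReference: cmRef}}},
            },
            Containers: []v1.Container{{
                Name:    "configmap-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
                VolumeMounts: []v1.VolumeMount{
                    {Name: "cm-vol-1", MountPath: "/etc/cm-1"},
                    {Name: "cm-vol-2", MountPath: "/etc/cm-2"},
                },
            }},
            RestartPolicy: v1.RestartPolicyNever,
        },
    }

    func main() { _ = cmPod }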
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:02:18.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 24 14:02:27.535: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:02:27.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1238" for this suite.
Dec 24 14:02:33.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:02:33.851: INFO: namespace container-runtime-1238 deletion completed in 6.275318934s

• [SLOW TEST:15.003 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
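The "Expected: &{} ..." assertion above is the point of the test: with TerminationMessagePolicy FallbackToLogsOnError, container logs are copied into the termination message only when the container fails, so a successful exit leaves the message empty. The container field in question (image and command are illustrative):

    package main

    import v1 "k8s.io/api/core/v1"

    var terminationContainer = v1.Container{
        Name:                     "termination-message-container",
        Image:                    "busybox",
        Command:                  []string{"true"}, // exits 0, so no log fallback
        TerminationMessagePolicy: v1.TerminationMessageFallbackToLogsOnError,
    }

    func main() { _ = terminationContainer }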
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:02:33.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 24 14:02:33.978: INFO: Waiting up to 5m0s for pod "downward-api-6d02c5ae-c0fb-495b-9f11-2a548ef89c1a" in namespace "downward-api-7622" to be "success or failure"
Dec 24 14:02:33.985: INFO: Pod "downward-api-6d02c5ae-c0fb-495b-9f11-2a548ef89c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.307236ms
Dec 24 14:02:38.425: INFO: Pod "downward-api-6d02c5ae-c0fb-495b-9f11-2a548ef89c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.447398766s
Dec 24 14:02:40.441: INFO: Pod "downward-api-6d02c5ae-c0fb-495b-9f11-2a548ef89c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.462985375s
Dec 24 14:02:42.451: INFO: Pod "downward-api-6d02c5ae-c0fb-495b-9f11-2a548ef89c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.473016726s
Dec 24 14:02:44.468: INFO: Pod "downward-api-6d02c5ae-c0fb-495b-9f11-2a548ef89c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.490165783s
Dec 24 14:02:46.498: INFO: Pod "downward-api-6d02c5ae-c0fb-495b-9f11-2a548ef89c1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.519853166s
STEP: Saw pod success
Dec 24 14:02:46.498: INFO: Pod "downward-api-6d02c5ae-c0fb-495b-9f11-2a548ef89c1a" satisfied condition "success or failure"
Dec 24 14:02:46.505: INFO: Trying to get logs from node iruya-node pod downward-api-6d02c5ae-c0fb-495b-9f11-2a548ef89c1a container dapi-container: 
STEP: delete the pod
Dec 24 14:02:46.664: INFO: Waiting for pod downward-api-6d02c5ae-c0fb-495b-9f11-2a548ef89c1a to disappear
Dec 24 14:02:46.679: INFO: Pod downward-api-6d02c5ae-c0fb-495b-9f11-2a548ef89c1a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:02:46.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7622" for this suite.
Dec 24 14:02:52.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:02:53.779: INFO: namespace downward-api-7622 deletion completed in 7.097586649s

• [SLOW TEST:19.928 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:02:53.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-c5ddef9b-0ef4-456a-8e59-728a81577978
STEP: Creating configMap with name cm-test-opt-upd-71137402-63b6-40a2-a166-79203f9793f4
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-c5ddef9b-0ef4-456a-8e59-728a81577978
STEP: Updating configmap cm-test-opt-upd-71137402-63b6-40a2-a166-79203f9793f4
STEP: Creating configMap with name cm-test-opt-create-b411e8d9-534e-4b31-8d35-5ccd1f1ada70
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:03:10.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4300" for this suite.
Dec 24 14:03:32.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:03:32.501: INFO: namespace projected-4300 deletion completed in 22.132499858s

• [SLOW TEST:38.722 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
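The three ConfigMaps above (-del, -upd, -create) are all mounted as optional projected sources: the pod starts even when a referenced map is missing, and the kubelet refreshes the volume as maps are deleted, updated, and created, which is what "waiting to observe update in volume" polls for. The shape of one such source, sketched:

    package main

    import v1 "k8s.io/api/core/v1"

    var optional = true

    var optionalSource = v1.VolumeProjection{
        ConfigMap: &v1.ConfigMapProjection{
            LocalObjectReference: v1.LocalObjectReference{Name: "cm-test-opt-del"},
            Optional:             &optional, // pod runs even if the map is absent
        },
    }

    func main() { _ = optionalSource }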
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:03:32.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 24 14:03:32.634: INFO: Waiting up to 5m0s for pod "pod-9311b9d6-5eca-45c3-8a6f-0e3b27b396ff" in namespace "emptydir-729" to be "success or failure"
Dec 24 14:03:32.639: INFO: Pod "pod-9311b9d6-5eca-45c3-8a6f-0e3b27b396ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.510289ms
Dec 24 14:03:34.649: INFO: Pod "pod-9311b9d6-5eca-45c3-8a6f-0e3b27b396ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01496857s
Dec 24 14:03:36.663: INFO: Pod "pod-9311b9d6-5eca-45c3-8a6f-0e3b27b396ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02920484s
Dec 24 14:03:38.680: INFO: Pod "pod-9311b9d6-5eca-45c3-8a6f-0e3b27b396ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046335907s
Dec 24 14:03:40.689: INFO: Pod "pod-9311b9d6-5eca-45c3-8a6f-0e3b27b396ff": Phase="Running", Reason="", readiness=true. Elapsed: 8.054729552s
Dec 24 14:03:42.697: INFO: Pod "pod-9311b9d6-5eca-45c3-8a6f-0e3b27b396ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063287791s
STEP: Saw pod success
Dec 24 14:03:42.698: INFO: Pod "pod-9311b9d6-5eca-45c3-8a6f-0e3b27b396ff" satisfied condition "success or failure"
Dec 24 14:03:42.701: INFO: Trying to get logs from node iruya-node pod pod-9311b9d6-5eca-45c3-8a6f-0e3b27b396ff container test-container: 
STEP: delete the pod
Dec 24 14:03:42.761: INFO: Waiting for pod pod-9311b9d6-5eca-45c3-8a6f-0e3b27b396ff to disappear
Dec 24 14:03:42.769: INFO: Pod pod-9311b9d6-5eca-45c3-8a6f-0e3b27b396ff no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:03:42.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-729" for this suite.
Dec 24 14:03:48.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:03:48.937: INFO: namespace emptydir-729 deletion completed in 6.162332001s

• [SLOW TEST:16.436 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:03:48.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-f63ef839-2fb7-48d1-b8a4-a3c8c07c3e97
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:03:49.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7103" for this suite.
Dec 24 14:03:55.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:03:55.178: INFO: namespace configmap-7103 deletion completed in 6.135380923s

• [SLOW TEST:6.240 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
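This is a negative test: ConfigMap data keys must be non-empty (and otherwise valid), so the apiserver rejects the create with a validation error, which is exactly the failure the spec expects. The invalid object, sketched:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var badConfigMap = &v1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
        Data:       map[string]string{"": "value-1"}, // invalid: empty key
    }

    func main() { _ = badConfigMap }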
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:03:55.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1224 14:04:25.935097       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 14:04:25.935: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:04:25.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4073" for this suite.
Dec 24 14:04:33.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:04:34.056: INFO: namespace gc-4073 deletion completed in 8.116435627s

• [SLOW TEST:38.878 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
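Note: the spec deletes a Deployment with PropagationPolicy=Orphan and then watches for 30 seconds to confirm the garbage collector leaves the owned ReplicaSet alone. A kubectl-level equivalent on a v1.15-era client (names illustrative):

kubectl create deployment gc-demo --image=nginx
kubectl get rs -l app=gc-demo                       # note the ReplicaSet the Deployment owns
kubectl delete deployment gc-demo --cascade=false   # orphan propagation (newer kubectl spells this --cascade=orphan)
kubectl get rs -l app=gc-demo                       # the ReplicaSet should survive, now ownerless
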
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:04:34.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Dec 24 14:04:35.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 24 14:04:35.960: INFO: stderr: ""
Dec 24 14:04:35.960: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:04:35.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4516" for this suite.
Dec 24 14:04:42.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:04:42.140: INFO: namespace kubectl-4516 deletion completed in 6.171883906s

• [SLOW TEST:8.084 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
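Note: the stdout captured above is wrapped in ANSI color escapes (\x1b[0;32m and friends). To run the same check by hand and strip the colors for easier matching (assuming a bash shell):

kubectl cluster-info | sed $'s/\x1b\[[0-9;]*m//g'
# expect a "Kubernetes master is running at https://..." line and an exit status of 0
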
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:04:42.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3668
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 24 14:04:42.242: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 24 14:05:20.508: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-3668 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 14:05:20.509: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 14:05:20.910: INFO: Waiting for endpoints: map[]
Dec 24 14:05:20.919: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-3668 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 14:05:20.919: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 14:05:21.307: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:05:21.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3668" for this suite.
Dec 24 14:05:45.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:05:45.476: INFO: namespace pod-network-test-3668 deletion completed in 24.158609575s

• [SLOW TEST:63.336 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
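Note: the framework probes pod-to-pod UDP by exec'ing into a host-network helper pod and curling the test webserver's /dial endpoint, which relays a UDP request to the target pod. Reconstructed from the ExecWithOptions lines above (pod IPs are per-run values):

kubectl exec -n pod-network-test-3668 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'"
# a successful dial returns JSON naming the target pod's hostname; the
# "Waiting for endpoints: map[]" lines mean no targets were left unanswered
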
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:05:45.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:05:53.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9087" for this suite.
Dec 24 14:06:39.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:06:39.920: INFO: namespace kubelet-test-9087 deletion completed in 46.225787754s

• [SLOW TEST:54.443 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
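Note: the spec runs a busybox pod whose command writes to stdout and asserts the text comes back through the logs API. By hand (names illustrative; the sleep is a crude stand-in for the framework's wait loop):

kubectl run logs-demo --image=busybox --restart=Never --command -- sh -c 'echo hello-from-kubelet'
sleep 15                      # give the container time to run to completion
kubectl logs logs-demo        # expect: hello-from-kubelet
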
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:06:39.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-9934
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9934
STEP: Deleting pre-stop pod
Dec 24 14:07:01.174: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:07:01.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9934" for this suite.
Dec 24 14:07:39.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:07:39.413: INFO: namespace prestop-9934 deletion completed in 38.203309875s

• [SLOW TEST:59.492 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
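Note: the "Saw:" JSON above is the server pod's accounting: the tester pod's preStop hook fired exactly once ("prestop": 1) before the pod died. A minimal preStop hook of the same shape, sketched below; the wget target is illustrative, whereas the real test reports to its own server pod:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "wget -q -O- http://prestop-server/prestop || true"]  # illustrative endpoint
EOF
kubectl delete pod prestop-demo    # deletion runs the preStop hook before SIGTERM is sent
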
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:07:39.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:08:10.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3473" for this suite.
Dec 24 14:08:16.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:08:16.354: INFO: namespace namespaces-3473 deletion completed in 6.190404031s
STEP: Destroying namespace "nsdeletetest-6057" for this suite.
Dec 24 14:08:16.358: INFO: Namespace nsdeletetest-6057 was already deleted
STEP: Destroying namespace "nsdeletetest-3163" for this suite.
Dec 24 14:08:22.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:08:22.502: INFO: namespace nsdeletetest-3163 deletion completed in 6.143681515s

• [SLOW TEST:43.087 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
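Note: the spec creates a pod in a throwaway namespace, deletes the namespace, waits for it to disappear, recreates it, and asserts it comes back empty. A hand-run sketch (names illustrative):

kubectl create namespace nsdelete-demo
kubectl run sleeper --image=busybox --restart=Never -n nsdelete-demo -- sleep 3600
kubectl delete namespace nsdelete-demo --wait=true   # blocks until the namespace and its pod are gone
kubectl create namespace nsdelete-demo
kubectl get pods -n nsdelete-demo                    # expect: No resources found
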
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:08:22.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 24 14:08:31.203: INFO: Successfully updated pod "labelsupdate16cc04b9-8680-4000-9e89-a4d059eb611a"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:08:33.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5272" for this suite.
Dec 24 14:08:55.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:08:55.571: INFO: namespace projected-5272 deletion completed in 22.1591792s

• [SLOW TEST:33.069 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
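Note: the spec mounts the pod's own labels through a projected downwardAPI volume, relabels the pod, and waits for the kubelet to rewrite the file ("Successfully updated pod" above). A sketch; names are illustrative, and the file refresh rides the kubelet sync period, so allow up to a minute or so:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    stage: before
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
kubectl label pod labels-demo stage=after --overwrite
kubectl exec labels-demo -- cat /etc/podinfo/labels   # eventually shows stage="after"
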
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:08:55.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-ee563f3c-fb45-4bac-a83d-4a59cee115b4
STEP: Creating a pod to test consume secrets
Dec 24 14:08:55.745: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-30be9574-ee81-42af-b30b-37ff62805bed" in namespace "projected-9547" to be "success or failure"
Dec 24 14:08:55.784: INFO: Pod "pod-projected-secrets-30be9574-ee81-42af-b30b-37ff62805bed": Phase="Pending", Reason="", readiness=false. Elapsed: 38.642985ms
Dec 24 14:08:57.796: INFO: Pod "pod-projected-secrets-30be9574-ee81-42af-b30b-37ff62805bed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050826065s
Dec 24 14:08:59.805: INFO: Pod "pod-projected-secrets-30be9574-ee81-42af-b30b-37ff62805bed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059429899s
Dec 24 14:09:01.813: INFO: Pod "pod-projected-secrets-30be9574-ee81-42af-b30b-37ff62805bed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067579634s
Dec 24 14:09:03.829: INFO: Pod "pod-projected-secrets-30be9574-ee81-42af-b30b-37ff62805bed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083463528s
Dec 24 14:09:05.839: INFO: Pod "pod-projected-secrets-30be9574-ee81-42af-b30b-37ff62805bed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093898756s
STEP: Saw pod success
Dec 24 14:09:05.839: INFO: Pod "pod-projected-secrets-30be9574-ee81-42af-b30b-37ff62805bed" satisfied condition "success or failure"
Dec 24 14:09:05.845: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-30be9574-ee81-42af-b30b-37ff62805bed container projected-secret-volume-test: 
STEP: delete the pod
Dec 24 14:09:05.925: INFO: Waiting for pod pod-projected-secrets-30be9574-ee81-42af-b30b-37ff62805bed to disappear
Dec 24 14:09:05.965: INFO: Pod pod-projected-secrets-30be9574-ee81-42af-b30b-37ff62805bed no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:09:05.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9547" for this suite.
Dec 24 14:09:12.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:09:12.152: INFO: namespace projected-9547 deletion completed in 6.172651158s

• [SLOW TEST:16.581 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
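Note: the spec projects a secret with a non-default file mode into a pod running as a non-root UID with an fsGroup, then has the container read the file back. A sketch under those assumptions; the names, UID/GID, and mode below are illustrative:

kubectl create secret generic mode-demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mode-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "stat -Lc '%a' /etc/secret/data-1 && cat /etc/secret/data-1"]
    volumeMounts:
    - name: sec
      mountPath: /etc/secret
  volumes:
  - name: sec
    projected:
      defaultMode: 0440        # group-readable so the fsGroup member can read it
      sources:
      - secret:
          name: mode-demo-secret
EOF
kubectl logs projected-secret-mode-demo   # prints the effective file mode and the secret payload
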
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:09:12.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 14:09:12.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2942'
Dec 24 14:09:14.251: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 24 14:09:14.251: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 24 14:09:14.278: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 24 14:09:14.305: INFO: scanned /root for discovery docs: 
Dec 24 14:09:14.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2942'
Dec 24 14:09:35.477: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 24 14:09:35.477: INFO: stdout: "Created e2e-test-nginx-rc-4de9ba90abc7aa3759a02546be5f0e2a\nScaling up e2e-test-nginx-rc-4de9ba90abc7aa3759a02546be5f0e2a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-4de9ba90abc7aa3759a02546be5f0e2a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-4de9ba90abc7aa3759a02546be5f0e2a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Dec 24 14:09:35.477: INFO: stdout: "Created e2e-test-nginx-rc-4de9ba90abc7aa3759a02546be5f0e2a\nScaling up e2e-test-nginx-rc-4de9ba90abc7aa3759a02546be5f0e2a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-4de9ba90abc7aa3759a02546be5f0e2a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-4de9ba90abc7aa3759a02546be5f0e2a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 24 14:09:35.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2942'
Dec 24 14:09:35.669: INFO: stderr: ""
Dec 24 14:09:35.669: INFO: stdout: "e2e-test-nginx-rc-4de9ba90abc7aa3759a02546be5f0e2a-wqskr e2e-test-nginx-rc-ck9m2 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 24 14:09:40.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2942'
Dec 24 14:09:40.829: INFO: stderr: ""
Dec 24 14:09:40.829: INFO: stdout: "e2e-test-nginx-rc-4de9ba90abc7aa3759a02546be5f0e2a-wqskr "
Dec 24 14:09:40.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4de9ba90abc7aa3759a02546be5f0e2a-wqskr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2942'
Dec 24 14:09:40.959: INFO: stderr: ""
Dec 24 14:09:40.959: INFO: stdout: "true"
Dec 24 14:09:40.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4de9ba90abc7aa3759a02546be5f0e2a-wqskr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2942'
Dec 24 14:09:41.077: INFO: stderr: ""
Dec 24 14:09:41.077: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 24 14:09:41.077: INFO: e2e-test-nginx-rc-4de9ba90abc7aa3759a02546be5f0e2a-wqskr is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Dec 24 14:09:41.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2942'
Dec 24 14:09:41.208: INFO: stderr: ""
Dec 24 14:09:41.208: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:09:41.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2942" for this suite.
Dec 24 14:10:03.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:10:03.378: INFO: namespace kubectl-2942 deletion completed in 22.159881388s

• [SLOW TEST:51.226 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
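Note: both commands this spec drives are deprecated, as the stderr lines above say: run --generator=run/v1 (which creates a ReplicationController rather than a Deployment) and rolling-update were removed in later releases in favor of Deployments and kubectl rollout. The v1.15-era invocation, reconstructed from the log (RC name illustrative):

kubectl run demo-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl rolling-update demo-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
# rolling-update to the same image forces a new RC (demo-rc-<hash>), scales it up
# as the old one scales down, then renames it back, exactly as the stdout above shows
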
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:10:03.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 24 14:10:03.488: INFO: Waiting up to 5m0s for pod "pod-70f5ffc4-7d8e-4467-aa67-cfd443714ede" in namespace "emptydir-9410" to be "success or failure"
Dec 24 14:10:03.502: INFO: Pod "pod-70f5ffc4-7d8e-4467-aa67-cfd443714ede": Phase="Pending", Reason="", readiness=false. Elapsed: 13.999848ms
Dec 24 14:10:05.522: INFO: Pod "pod-70f5ffc4-7d8e-4467-aa67-cfd443714ede": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033588013s
Dec 24 14:10:07.538: INFO: Pod "pod-70f5ffc4-7d8e-4467-aa67-cfd443714ede": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049901279s
Dec 24 14:10:09.554: INFO: Pod "pod-70f5ffc4-7d8e-4467-aa67-cfd443714ede": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06545814s
Dec 24 14:10:11.561: INFO: Pod "pod-70f5ffc4-7d8e-4467-aa67-cfd443714ede": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073161414s
STEP: Saw pod success
Dec 24 14:10:11.561: INFO: Pod "pod-70f5ffc4-7d8e-4467-aa67-cfd443714ede" satisfied condition "success or failure"
Dec 24 14:10:11.567: INFO: Trying to get logs from node iruya-node pod pod-70f5ffc4-7d8e-4467-aa67-cfd443714ede container test-container: 
STEP: delete the pod
Dec 24 14:10:11.624: INFO: Waiting for pod pod-70f5ffc4-7d8e-4467-aa67-cfd443714ede to disappear
Dec 24 14:10:11.634: INFO: Pod pod-70f5ffc4-7d8e-4467-aa67-cfd443714ede no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:10:11.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9410" for this suite.
Dec 24 14:10:17.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:10:17.838: INFO: namespace emptydir-9410 deletion completed in 6.20054127s

• [SLOW TEST:14.460 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
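Note: same family as the earlier EmptyDir spec, but on the Memory medium (tmpfs) with a non-root user writing through the 0777 mount. A loose hand-run approximation (the real test uses the mounttest image; everything named below is illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  securityContext:
    runAsUser: 1000            # non-root
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/volume && echo ok > /mnt/volume/f && cat /mnt/volume/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory           # tmpfs-backed
EOF
kubectl logs emptydir-tmpfs-demo   # expect a world-writable mount and a successful write as UID 1000
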
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:10:17.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 24 14:10:17.995: INFO: namespace kubectl-8658
Dec 24 14:10:17.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8658'
Dec 24 14:10:18.623: INFO: stderr: ""
Dec 24 14:10:18.623: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 24 14:10:19.637: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:10:19.637: INFO: Found 0 / 1
Dec 24 14:10:20.697: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:10:20.697: INFO: Found 0 / 1
Dec 24 14:10:21.665: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:10:21.665: INFO: Found 0 / 1
Dec 24 14:10:22.632: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:10:22.632: INFO: Found 0 / 1
Dec 24 14:10:23.636: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:10:23.637: INFO: Found 0 / 1
Dec 24 14:10:24.637: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:10:24.637: INFO: Found 0 / 1
Dec 24 14:10:25.637: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:10:25.637: INFO: Found 0 / 1
Dec 24 14:10:26.639: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:10:26.639: INFO: Found 1 / 1
Dec 24 14:10:26.639: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 24 14:10:26.642: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:10:26.642: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 24 14:10:26.642: INFO: wait on redis-master startup in kubectl-8658 
Dec 24 14:10:26.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mzqsc redis-master --namespace=kubectl-8658'
Dec 24 14:10:26.858: INFO: stderr: ""
Dec 24 14:10:26.858: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Dec 14:10:25.176 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Dec 14:10:25.176 # Server started, Redis version 3.2.12\n1:M 24 Dec 14:10:25.176 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Dec 14:10:25.177 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 24 14:10:26.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8658'
Dec 24 14:10:27.073: INFO: stderr: ""
Dec 24 14:10:27.073: INFO: stdout: "service/rm2 exposed\n"
Dec 24 14:10:27.113: INFO: Service rm2 in namespace kubectl-8658 found.
STEP: exposing service
Dec 24 14:10:29.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8658'
Dec 24 14:10:29.343: INFO: stderr: ""
Dec 24 14:10:29.343: INFO: stdout: "service/rm3 exposed\n"
Dec 24 14:10:29.349: INFO: Service rm3 in namespace kubectl-8658 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:10:31.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8658" for this suite.
Dec 24 14:10:55.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:10:55.529: INFO: namespace kubectl-8658 deletion completed in 24.161642278s

• [SLOW TEST:37.690 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
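Note: the two expose calls above are the heart of the spec: one service carved from the RC's selector, then a second carved from the first service. Reproduced from the log (namespace flags omitted):

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get endpoints rm2 rm3    # both should list the redis-master pod IP on 6379
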
SSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:10:55.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:11:44.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-419" for this suite.
Dec 24 14:11:50.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:11:50.754: INFO: namespace container-runtime-419 deletion completed in 6.204165245s

• [SLOW TEST:55.225 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
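Note: the three container names appear to encode the restart policy under test (rpa=Always, rpof=OnFailure, rpn=Never, going by the e2e naming), and the spec checks RestartCount, Phase, Ready, and State for each. A single-policy sketch:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never           # the spec also exercises Always and OnFailure
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 0"]
EOF
kubectl get pod terminate-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}{"\n"}'
# expect: Succeeded 0 (restartPolicy Never plus a zero exit code)
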
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:11:50.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 14:11:50.895: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68389482-4d05-4231-ae18-f46c0d408d0a" in namespace "projected-3394" to be "success or failure"
Dec 24 14:11:50.914: INFO: Pod "downwardapi-volume-68389482-4d05-4231-ae18-f46c0d408d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.700759ms
Dec 24 14:11:52.937: INFO: Pod "downwardapi-volume-68389482-4d05-4231-ae18-f46c0d408d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042031717s
Dec 24 14:11:54.956: INFO: Pod "downwardapi-volume-68389482-4d05-4231-ae18-f46c0d408d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060917701s
Dec 24 14:11:56.976: INFO: Pod "downwardapi-volume-68389482-4d05-4231-ae18-f46c0d408d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080336921s
Dec 24 14:11:58.983: INFO: Pod "downwardapi-volume-68389482-4d05-4231-ae18-f46c0d408d0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087984367s
STEP: Saw pod success
Dec 24 14:11:58.983: INFO: Pod "downwardapi-volume-68389482-4d05-4231-ae18-f46c0d408d0a" satisfied condition "success or failure"
Dec 24 14:11:58.987: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-68389482-4d05-4231-ae18-f46c0d408d0a container client-container: 
STEP: delete the pod
Dec 24 14:11:59.064: INFO: Waiting for pod downwardapi-volume-68389482-4d05-4231-ae18-f46c0d408d0a to disappear
Dec 24 14:11:59.076: INFO: Pod downwardapi-volume-68389482-4d05-4231-ae18-f46c0d408d0a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:11:59.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3394" for this suite.
Dec 24 14:12:05.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:12:05.274: INFO: namespace projected-3394 deletion completed in 6.191258117s

• [SLOW TEST:14.518 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
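Note: the spec projects the container's own requests.cpu into a file via a downwardAPI resourceFieldRef. A sketch; names are illustrative, and note the divisor defaults to 1, so fractional requests round up to a whole number in the file:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
EOF
kubectl logs cpu-request-demo    # expect "1": 250m over the default divisor of 1, rounded up
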
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:12:05.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 24 14:12:05.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9411'
Dec 24 14:12:05.748: INFO: stderr: ""
Dec 24 14:12:05.748: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 24 14:12:06.755: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:12:06.755: INFO: Found 0 / 1
Dec 24 14:12:07.763: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:12:07.764: INFO: Found 0 / 1
Dec 24 14:12:08.764: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:12:08.764: INFO: Found 0 / 1
Dec 24 14:12:09.758: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:12:09.758: INFO: Found 0 / 1
Dec 24 14:12:10.758: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:12:10.758: INFO: Found 0 / 1
Dec 24 14:12:11.759: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:12:11.759: INFO: Found 0 / 1
Dec 24 14:12:12.757: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:12:12.757: INFO: Found 1 / 1
Dec 24 14:12:12.757: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 24 14:12:12.764: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:12:12.764: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 24 14:12:12.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-wlfjp --namespace=kubectl-9411 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 24 14:12:13.052: INFO: stderr: ""
Dec 24 14:12:13.052: INFO: stdout: "pod/redis-master-wlfjp patched\n"
STEP: checking annotations
Dec 24 14:12:13.068: INFO: Selector matched 1 pods for map[app:redis]
Dec 24 14:12:13.068: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:12:13.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9411" for this suite.
Dec 24 14:12:35.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:12:35.311: INFO: namespace kubectl-9411 deletion completed in 22.235764417s

• [SLOW TEST:30.037 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
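Note: the patch itself is a one-line strategic-merge patch against the running pod; the spec then re-reads the pod to confirm the annotation stuck. From the log (the pod-name suffix varies per run):

kubectl patch pod redis-master-wlfjp -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod redis-master-wlfjp -o jsonpath='{.metadata.annotations.x}{"\n"}'   # expect: y
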
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:12:35.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Dec 24 14:12:35.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 24 14:12:35.615: INFO: stderr: ""
Dec 24 14:12:35.615: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:12:35.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4403" for this suite.
Dec 24 14:12:41.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:12:41.807: INFO: namespace kubectl-4403 deletion completed in 6.182724617s

• [SLOW TEST:6.495 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
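Note: the assertion reduces to "the core v1 group/version appears in discovery". By hand:

kubectl api-versions | grep -x v1 && echo "core v1 is served"
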
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:12:41.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8ba7d17f-4531-4e1b-af22-87f9477dc1ba
STEP: Creating a pod to test consume secrets
Dec 24 14:12:42.024: INFO: Waiting up to 5m0s for pod "pod-secrets-cdd37e39-d08f-411b-bc75-d2385cca8e44" in namespace "secrets-7232" to be "success or failure"
Dec 24 14:12:42.098: INFO: Pod "pod-secrets-cdd37e39-d08f-411b-bc75-d2385cca8e44": Phase="Pending", Reason="", readiness=false. Elapsed: 73.482192ms
Dec 24 14:12:44.110: INFO: Pod "pod-secrets-cdd37e39-d08f-411b-bc75-d2385cca8e44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08494329s
Dec 24 14:12:46.120: INFO: Pod "pod-secrets-cdd37e39-d08f-411b-bc75-d2385cca8e44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09569971s
Dec 24 14:12:48.144: INFO: Pod "pod-secrets-cdd37e39-d08f-411b-bc75-d2385cca8e44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119225681s
Dec 24 14:12:50.151: INFO: Pod "pod-secrets-cdd37e39-d08f-411b-bc75-d2385cca8e44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.126534421s
STEP: Saw pod success
Dec 24 14:12:50.151: INFO: Pod "pod-secrets-cdd37e39-d08f-411b-bc75-d2385cca8e44" satisfied condition "success or failure"
Dec 24 14:12:50.156: INFO: Trying to get logs from node iruya-node pod pod-secrets-cdd37e39-d08f-411b-bc75-d2385cca8e44 container secret-volume-test: 
STEP: delete the pod
Dec 24 14:12:50.295: INFO: Waiting for pod pod-secrets-cdd37e39-d08f-411b-bc75-d2385cca8e44 to disappear
Dec 24 14:12:50.303: INFO: Pod pod-secrets-cdd37e39-d08f-411b-bc75-d2385cca8e44 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:12:50.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7232" for this suite.
Dec 24 14:12:56.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:12:56.485: INFO: namespace secrets-7232 deletion completed in 6.17297464s

• [SLOW TEST:14.678 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
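Note: unlike the projected variant earlier, this is a plain secret volume with defaultMode set and no fsGroup. A sketch (names and mode illustrative):

kubectl create secret generic defaultmode-demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -Lc '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: defaultmode-demo-secret
      defaultMode: 0400
EOF
kubectl logs secret-defaultmode-demo    # expect: 400
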
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:12:56.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 14:13:04.813: INFO: Waiting up to 5m0s for pod "client-envvars-c89f8e69-5b02-4bc5-bd34-5643c5bb4678" in namespace "pods-5867" to be "success or failure"
Dec 24 14:13:04.895: INFO: Pod "client-envvars-c89f8e69-5b02-4bc5-bd34-5643c5bb4678": Phase="Pending", Reason="", readiness=false. Elapsed: 81.374125ms
Dec 24 14:13:06.932: INFO: Pod "client-envvars-c89f8e69-5b02-4bc5-bd34-5643c5bb4678": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118035094s
Dec 24 14:13:08.941: INFO: Pod "client-envvars-c89f8e69-5b02-4bc5-bd34-5643c5bb4678": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127194777s
Dec 24 14:13:10.962: INFO: Pod "client-envvars-c89f8e69-5b02-4bc5-bd34-5643c5bb4678": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148073988s
Dec 24 14:13:12.990: INFO: Pod "client-envvars-c89f8e69-5b02-4bc5-bd34-5643c5bb4678": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.176692621s
STEP: Saw pod success
Dec 24 14:13:12.990: INFO: Pod "client-envvars-c89f8e69-5b02-4bc5-bd34-5643c5bb4678" satisfied condition "success or failure"
Dec 24 14:13:12.999: INFO: Trying to get logs from node iruya-node pod client-envvars-c89f8e69-5b02-4bc5-bd34-5643c5bb4678 container env3cont: 
STEP: delete the pod
Dec 24 14:13:13.073: INFO: Waiting for pod client-envvars-c89f8e69-5b02-4bc5-bd34-5643c5bb4678 to disappear
Dec 24 14:13:13.084: INFO: Pod client-envvars-c89f8e69-5b02-4bc5-bd34-5643c5bb4678 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:13:13.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5867" for this suite.
Dec 24 14:13:59.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:13:59.289: INFO: namespace pods-5867 deletion completed in 46.20081683s

• [SLOW TEST:62.804 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
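Note: the kubelet injects FOO_SERVICE_HOST / FOO_SERVICE_PORT variables for every service that already exists when a pod starts, which is why the spec creates its server pod and service first and only then runs the client that dumps its environment. A sketch (dashes in service names become underscores; the sleep stands in for the framework's wait):

kubectl create service clusterip demo-svc --tcp=80:80
kubectl run envcheck --image=busybox --restart=Never --command -- sh -c 'env | grep ^DEMO_SVC_'
sleep 15
kubectl logs envcheck    # expect DEMO_SVC_SERVICE_HOST and DEMO_SVC_SERVICE_PORT, among others
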
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:13:59.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-f21f6ee2-e6fe-4876-a257-1fb2bad196ca
STEP: Creating a pod to test consume configMaps
Dec 24 14:13:59.617: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3aebc2c2-621d-48c3-9bb0-42e73c549990" in namespace "projected-1313" to be "success or failure"
Dec 24 14:13:59.633: INFO: Pod "pod-projected-configmaps-3aebc2c2-621d-48c3-9bb0-42e73c549990": Phase="Pending", Reason="", readiness=false. Elapsed: 16.135254ms
Dec 24 14:14:01.718: INFO: Pod "pod-projected-configmaps-3aebc2c2-621d-48c3-9bb0-42e73c549990": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100783052s
Dec 24 14:14:03.743: INFO: Pod "pod-projected-configmaps-3aebc2c2-621d-48c3-9bb0-42e73c549990": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126585785s
Dec 24 14:14:05.751: INFO: Pod "pod-projected-configmaps-3aebc2c2-621d-48c3-9bb0-42e73c549990": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134321722s
Dec 24 14:14:07.779: INFO: Pod "pod-projected-configmaps-3aebc2c2-621d-48c3-9bb0-42e73c549990": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.161954498s
STEP: Saw pod success
Dec 24 14:14:07.779: INFO: Pod "pod-projected-configmaps-3aebc2c2-621d-48c3-9bb0-42e73c549990" satisfied condition "success or failure"
Dec 24 14:14:07.784: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-3aebc2c2-621d-48c3-9bb0-42e73c549990 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 24 14:14:07.841: INFO: Waiting for pod pod-projected-configmaps-3aebc2c2-621d-48c3-9bb0-42e73c549990 to disappear
Dec 24 14:14:07.849: INFO: Pod pod-projected-configmaps-3aebc2c2-621d-48c3-9bb0-42e73c549990 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:14:07.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1313" for this suite.
Dec 24 14:14:13.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:14:14.048: INFO: namespace projected-1313 deletion completed in 6.189233292s

• [SLOW TEST:14.758 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
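
"With mappings" in the spec name refers to the projected volume's items list, which remaps ConfigMap keys onto chosen file paths; "as non-root" means the pod runs under a non-zero UID and must still be able to read the projected files. A sketch of an equivalent pod, all names hypothetical:

    kubectl create configmap demo-config --from-literal=data-1=value-1
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-demo
    spec:
      securityContext:
        runAsUser: 1000                 # non-root UID for all containers
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/projected/renamed-key; id -u"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: demo-config
              items:
              - key: data-1             # key in the ConfigMap
                path: renamed-key       # file it is mapped to
    EOF

------------------------------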
SSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:14:14.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 24 14:14:22.240: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-86354b29-52b1-4026-a3e0-86818054824d,GenerateName:,Namespace:events-3734,SelfLink:/api/v1/namespaces/events-3734/pods/send-events-86354b29-52b1-4026-a3e0-86818054824d,UID:f02879dc-bc45-4c8b-8c94-18c212a45eb4,ResourceVersion:17899486,Generation:0,CreationTimestamp:2019-12-24 14:14:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 137966429,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9bvxg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9bvxg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-9bvxg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0032dd280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0032dd2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:14:14 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:14:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:14:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:14:14 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-24 14:14:14 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-24 14:14:19 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://af967b91dc204ffbb27df922b1a09488d12e20cb867507a63d4ff266bef93844}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 24 14:14:24.252: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 24 14:14:26.261: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:14:26.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3734" for this suite.
Dec 24 14:15:08.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:15:08.512: INFO: namespace events-3734 deletion completed in 42.206247546s

• [SLOW TEST:54.463 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
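
What this spec verifies: scheduling a pod produces an Event from the scheduler (reason Scheduled), after which the kubelet records its own events (Pulled, Created, Started). Both can be listed with field selectors; a sketch using the pod name from the log above (the namespace has since been deleted, so substitute live names):

    kubectl get events -n events-3734 \
      --field-selector involvedObject.kind=Pod,involvedObject.name=send-events-86354b29-52b1-4026-a3e0-86818054824d
    # narrow to the scheduler's event only:
    kubectl get events -n events-3734 --field-selector reason=Scheduled

------------------------------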
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:15:08.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 14:15:08.646: INFO: Waiting up to 5m0s for pod "downwardapi-volume-424249b6-bf7f-4d28-b845-63c03199831c" in namespace "projected-953" to be "success or failure"
Dec 24 14:15:08.654: INFO: Pod "downwardapi-volume-424249b6-bf7f-4d28-b845-63c03199831c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26239ms
Dec 24 14:15:10.670: INFO: Pod "downwardapi-volume-424249b6-bf7f-4d28-b845-63c03199831c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024792953s
Dec 24 14:15:12.693: INFO: Pod "downwardapi-volume-424249b6-bf7f-4d28-b845-63c03199831c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047694572s
Dec 24 14:15:14.715: INFO: Pod "downwardapi-volume-424249b6-bf7f-4d28-b845-63c03199831c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068877079s
Dec 24 14:15:16.723: INFO: Pod "downwardapi-volume-424249b6-bf7f-4d28-b845-63c03199831c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077466888s
STEP: Saw pod success
Dec 24 14:15:16.723: INFO: Pod "downwardapi-volume-424249b6-bf7f-4d28-b845-63c03199831c" satisfied condition "success or failure"
Dec 24 14:15:16.731: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-424249b6-bf7f-4d28-b845-63c03199831c container client-container: 
STEP: delete the pod
Dec 24 14:15:16.788: INFO: Waiting for pod downwardapi-volume-424249b6-bf7f-4d28-b845-63c03199831c to disappear
Dec 24 14:15:16.794: INFO: Pod downwardapi-volume-424249b6-bf7f-4d28-b845-63c03199831c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:15:16.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-953" for this suite.
Dec 24 14:15:22.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:15:22.956: INFO: namespace projected-953 deletion completed in 6.155150288s

• [SLOW TEST:14.444 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
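
The property under test: when a container declares no memory limit, a downward-API resourceFieldRef for limits.memory does not fail; it falls back to the node's allocatable memory. A sketch of a pod exposing this through a projected downwardAPI volume (names hypothetical):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-defaults-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        # no resources.limits set, so mem_limit below resolves to the
        # node's allocatable memory rather than a container limit
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: mem_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.memory
    EOF

------------------------------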
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:15:22.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Dec 24 14:15:23.103: INFO: Waiting up to 5m0s for pod "var-expansion-4b100d7f-e6ef-41db-bfd3-318252572701" in namespace "var-expansion-502" to be "success or failure"
Dec 24 14:15:23.137: INFO: Pod "var-expansion-4b100d7f-e6ef-41db-bfd3-318252572701": Phase="Pending", Reason="", readiness=false. Elapsed: 34.092344ms
Dec 24 14:15:25.145: INFO: Pod "var-expansion-4b100d7f-e6ef-41db-bfd3-318252572701": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041703644s
Dec 24 14:15:27.152: INFO: Pod "var-expansion-4b100d7f-e6ef-41db-bfd3-318252572701": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048733738s
Dec 24 14:15:29.157: INFO: Pod "var-expansion-4b100d7f-e6ef-41db-bfd3-318252572701": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053977486s
Dec 24 14:15:31.164: INFO: Pod "var-expansion-4b100d7f-e6ef-41db-bfd3-318252572701": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061472627s
STEP: Saw pod success
Dec 24 14:15:31.164: INFO: Pod "var-expansion-4b100d7f-e6ef-41db-bfd3-318252572701" satisfied condition "success or failure"
Dec 24 14:15:31.169: INFO: Trying to get logs from node iruya-node pod var-expansion-4b100d7f-e6ef-41db-bfd3-318252572701 container dapi-container: 
STEP: delete the pod
Dec 24 14:15:31.304: INFO: Waiting for pod var-expansion-4b100d7f-e6ef-41db-bfd3-318252572701 to disappear
Dec 24 14:15:31.316: INFO: Pod var-expansion-4b100d7f-e6ef-41db-bfd3-318252572701 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:15:31.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-502" for this suite.
Dec 24 14:15:37.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:15:37.514: INFO: namespace var-expansion-502 deletion completed in 6.193826339s

• [SLOW TEST:14.558 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
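
The substitution exercised here is kubelet-side $(VAR) expansion: references in command and args are resolved against the container's env entries before the process starts, with no shell involved (write $$(VAR) to keep a literal). A minimal sketch, names hypothetical:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29
        env:
        - name: MESSAGE
          value: hello world
        command: ["sh", "-c"]
        args: ["echo $(MESSAGE)"]   # expanded by the kubelet; prints "hello world"
    EOF

------------------------------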
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:15:37.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Dec 24 14:15:37.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-65'
Dec 24 14:15:37.920: INFO: stderr: ""
Dec 24 14:15:37.920: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 24 14:15:37.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-65'
Dec 24 14:15:38.108: INFO: stderr: ""
Dec 24 14:15:38.108: INFO: stdout: "update-demo-nautilus-h2g5v update-demo-nautilus-v56qv "
Dec 24 14:15:38.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h2g5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-65'
Dec 24 14:15:38.218: INFO: stderr: ""
Dec 24 14:15:38.218: INFO: stdout: ""
Dec 24 14:15:38.218: INFO: update-demo-nautilus-h2g5v is created but not running
Dec 24 14:15:43.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-65'
Dec 24 14:15:44.830: INFO: stderr: ""
Dec 24 14:15:44.830: INFO: stdout: "update-demo-nautilus-h2g5v update-demo-nautilus-v56qv "
Dec 24 14:15:44.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h2g5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-65'
Dec 24 14:15:45.456: INFO: stderr: ""
Dec 24 14:15:45.456: INFO: stdout: ""
Dec 24 14:15:45.456: INFO: update-demo-nautilus-h2g5v is created but not running
Dec 24 14:15:50.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-65'
Dec 24 14:15:50.675: INFO: stderr: ""
Dec 24 14:15:50.675: INFO: stdout: "update-demo-nautilus-h2g5v update-demo-nautilus-v56qv "
Dec 24 14:15:50.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h2g5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-65'
Dec 24 14:15:50.826: INFO: stderr: ""
Dec 24 14:15:50.826: INFO: stdout: "true"
Dec 24 14:15:50.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h2g5v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-65'
Dec 24 14:15:50.951: INFO: stderr: ""
Dec 24 14:15:50.951: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 14:15:50.951: INFO: validating pod update-demo-nautilus-h2g5v
Dec 24 14:15:50.974: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 14:15:50.975: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 14:15:50.975: INFO: update-demo-nautilus-h2g5v is verified up and running
Dec 24 14:15:50.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v56qv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-65'
Dec 24 14:15:51.106: INFO: stderr: ""
Dec 24 14:15:51.106: INFO: stdout: "true"
Dec 24 14:15:51.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v56qv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-65'
Dec 24 14:15:51.198: INFO: stderr: ""
Dec 24 14:15:51.199: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 14:15:51.199: INFO: validating pod update-demo-nautilus-v56qv
Dec 24 14:15:51.215: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 14:15:51.215: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 24 14:15:51.215: INFO: update-demo-nautilus-v56qv is verified up and running
STEP: rolling-update to new replication controller
Dec 24 14:15:51.226: INFO: scanned /root for discovery docs: 
Dec 24 14:15:51.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-65'
Dec 24 14:16:22.557: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 24 14:16:22.558: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 24 14:16:22.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-65'
Dec 24 14:16:22.718: INFO: stderr: ""
Dec 24 14:16:22.719: INFO: stdout: "update-demo-kitten-jvc9l update-demo-kitten-s6hzn "
Dec 24 14:16:22.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jvc9l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-65'
Dec 24 14:16:22.847: INFO: stderr: ""
Dec 24 14:16:22.847: INFO: stdout: "true"
Dec 24 14:16:22.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jvc9l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-65'
Dec 24 14:16:22.941: INFO: stderr: ""
Dec 24 14:16:22.942: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 24 14:16:22.942: INFO: validating pod update-demo-kitten-jvc9l
Dec 24 14:16:22.963: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 24 14:16:22.963: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 24 14:16:22.963: INFO: update-demo-kitten-jvc9l is verified up and running
Dec 24 14:16:22.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-s6hzn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-65'
Dec 24 14:16:23.066: INFO: stderr: ""
Dec 24 14:16:23.066: INFO: stdout: "true"
Dec 24 14:16:23.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-s6hzn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-65'
Dec 24 14:16:23.163: INFO: stderr: ""
Dec 24 14:16:23.163: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 24 14:16:23.163: INFO: validating pod update-demo-kitten-s6hzn
Dec 24 14:16:23.175: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 24 14:16:23.175: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 24 14:16:23.175: INFO: update-demo-kitten-s6hzn is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:16:23.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-65" for this suite.
Dec 24 14:16:53.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:16:53.282: INFO: namespace kubectl-65 deletion completed in 30.099926902s

• [SLOW TEST:75.767 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
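
As the stderr above says, rolling-update against replication controllers was already deprecated when this ran; the same nautilus-to-kitten rollout is expressed today with a Deployment. A sketch (kubectl create deployment names the container after the image basename, here presumably nautilus; check with kubectl get deployment -o yaml if unsure):

    kubectl create deployment update-demo --image=gcr.io/kubernetes-e2e-test-images/nautilus:1.0
    kubectl scale deployment update-demo --replicas=2
    kubectl set image deployment/update-demo nautilus=gcr.io/kubernetes-e2e-test-images/kitten:1.0
    kubectl rollout status deployment/update-demo
    # and to roll back:
    kubectl rollout undo deployment/update-demo

------------------------------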
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:16:53.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 14:16:53.378: INFO: Creating ReplicaSet my-hostname-basic-df913486-6f66-4ca9-ab18-1bc81633eb11
Dec 24 14:16:53.390: INFO: Pod name my-hostname-basic-df913486-6f66-4ca9-ab18-1bc81633eb11: Found 0 pods out of 1
Dec 24 14:16:58.407: INFO: Pod name my-hostname-basic-df913486-6f66-4ca9-ab18-1bc81633eb11: Found 1 pods out of 1
Dec 24 14:16:58.407: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-df913486-6f66-4ca9-ab18-1bc81633eb11" is running
Dec 24 14:17:02.419: INFO: Pod "my-hostname-basic-df913486-6f66-4ca9-ab18-1bc81633eb11-rm9xh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 14:16:53 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 14:16:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-df913486-6f66-4ca9-ab18-1bc81633eb11]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 14:16:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-df913486-6f66-4ca9-ab18-1bc81633eb11]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 14:16:53 +0000 UTC Reason: Message:}])
Dec 24 14:17:02.419: INFO: Trying to dial the pod
Dec 24 14:17:07.463: INFO: Controller my-hostname-basic-df913486-6f66-4ca9-ab18-1bc81633eb11: Got expected result from replica 1 [my-hostname-basic-df913486-6f66-4ca9-ab18-1bc81633eb11-rm9xh]: "my-hostname-basic-df913486-6f66-4ca9-ab18-1bc81633eb11-rm9xh", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:17:07.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3348" for this suite.
Dec 24 14:17:13.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:17:13.666: INFO: namespace replicaset-3348 deletion completed in 6.195415964s

• [SLOW TEST:20.384 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
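
"Serve a basic image on each replica" means every replica runs serve-hostname, which answers HTTP with its own pod name, so dialing each replica and comparing the reply against the pod name proves each one is individually reachable. A sketch of an equivalent ReplicaSet (the assumption that serve-hostname listens on 9376 is worth verifying for other image versions):

    kubectl create -f - <<'EOF'
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: my-hostname-basic       # hypothetical name
    spec:
      replicas: 2
      selector:
        matchLabels:
          name: my-hostname-basic
      template:
        metadata:
          labels:
            name: my-hostname-basic
        spec:
          containers:
          - name: my-hostname-basic
            image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
            ports:
            - containerPort: 9376
    EOF

------------------------------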
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:17:13.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 14:17:13.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2215'
Dec 24 14:17:14.115: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 24 14:17:14.115: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Dec 24 14:17:14.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2215'
Dec 24 14:17:14.368: INFO: stderr: ""
Dec 24 14:17:14.368: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:17:14.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2215" for this suite.
Dec 24 14:17:20.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:17:20.577: INFO: namespace kubectl-2215 deletion completed in 6.200389899s

• [SLOW TEST:6.909 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
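
The deprecation warning in the stderr is the interesting part: on this v1.15 cluster a bare kubectl run still goes through the deployment/apps.v1 generator, behaviour that was removed later (from v1.18 on, kubectl run creates only pods). Modern equivalents of the logged command:

    # what the logged command did implicitly:
    kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
    # or, for a bare pod:
    kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never

------------------------------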
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:17:20.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 14:17:20.786: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Dec 24 14:17:24.393: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:17:25.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7222" for this suite.
Dec 24 14:17:33.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:17:34.067: INFO: namespace replication-controller-7222 deletion completed in 8.609393525s

• [SLOW TEST:13.490 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
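
The mechanism here: when a ReplicationController cannot create pods because a ResourceQuota is exhausted, the controller surfaces a status condition of type ReplicaFailure rather than failing silently, and clears it once the RC is scaled back within quota. A hand-run sketch (names hypothetical):

    kubectl create quota condition-test --hard=pods=2
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: condition-test
    spec:
      replicas: 3                   # one more than the quota allows
      selector:
        name: condition-test
      template:
        metadata:
          labels:
            name: condition-test
        spec:
          containers:
          - name: nginx
            image: docker.io/library/nginx:1.14-alpine
    EOF
    kubectl get rc condition-test \
      -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'
    kubectl scale rc condition-test --replicas=2   # the condition is removed once within quota

------------------------------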
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:17:34.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Dec 24 14:17:34.153: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix777028753/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:17:34.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5644" for this suite.
Dec 24 14:17:40.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:17:40.541: INFO: namespace kubectl-5644 deletion completed in 6.255231012s

• [SLOW TEST:6.473 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
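
--unix-socket makes kubectl proxy listen on a filesystem socket instead of a TCP port, so access can be restricted with ordinary file permissions. The spec simply fetches /api/ through the socket; by hand:

    kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
    curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
    # expected: the APIVersions object, e.g. {"kind":"APIVersions","versions":["v1"],...}
    kill %1

------------------------------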
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:17:40.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 24 14:17:49.397: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f9465b33-367e-451c-b3b1-26630d0cc4e9"
Dec 24 14:17:49.397: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f9465b33-367e-451c-b3b1-26630d0cc4e9" in namespace "pods-8371" to be "terminated due to deadline exceeded"
Dec 24 14:17:49.416: INFO: Pod "pod-update-activedeadlineseconds-f9465b33-367e-451c-b3b1-26630d0cc4e9": Phase="Running", Reason="", readiness=true. Elapsed: 19.087687ms
Dec 24 14:17:51.432: INFO: Pod "pod-update-activedeadlineseconds-f9465b33-367e-451c-b3b1-26630d0cc4e9": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.035776816s
Dec 24 14:17:51.433: INFO: Pod "pod-update-activedeadlineseconds-f9465b33-367e-451c-b3b1-26630d0cc4e9" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:17:51.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8371" for this suite.
Dec 24 14:17:57.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:17:57.679: INFO: namespace pods-8371 deletion completed in 6.168844162s

• [SLOW TEST:17.138 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
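
spec.activeDeadlineSeconds is one of the few pod-spec fields that may be changed on a running pod; once the deadline elapses the kubelet kills the pod, which ends Failed with reason DeadlineExceeded, matching the phase transition logged above. A sketch of the update step (pod name hypothetical):

    kubectl patch pod deadline-demo --type=merge \
      -p '{"spec":{"activeDeadlineSeconds":5}}'
    # a few seconds later:
    kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}'
    # expected: Failed/DeadlineExceeded

------------------------------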
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:17:57.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Dec 24 14:17:57.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-118'
Dec 24 14:17:58.071: INFO: stderr: ""
Dec 24 14:17:58.071: INFO: stdout: "pod/pause created\n"
Dec 24 14:17:58.071: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 24 14:17:58.072: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-118" to be "running and ready"
Dec 24 14:17:58.183: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 111.129606ms
Dec 24 14:18:00.195: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123235041s
Dec 24 14:18:02.207: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134783086s
Dec 24 14:18:04.214: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142549852s
Dec 24 14:18:06.223: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.150833319s
Dec 24 14:18:06.223: INFO: Pod "pause" satisfied condition "running and ready"
Dec 24 14:18:06.223: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 24 14:18:06.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-118'
Dec 24 14:18:06.387: INFO: stderr: ""
Dec 24 14:18:06.387: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 24 14:18:06.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-118'
Dec 24 14:18:06.522: INFO: stderr: ""
Dec 24 14:18:06.522: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 24 14:18:06.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-118'
Dec 24 14:18:06.699: INFO: stderr: ""
Dec 24 14:18:06.699: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 24 14:18:06.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-118'
Dec 24 14:18:06.903: INFO: stderr: ""
Dec 24 14:18:06.903: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Dec 24 14:18:06.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-118'
Dec 24 14:18:07.113: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 14:18:07.113: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 24 14:18:07.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-118'
Dec 24 14:18:07.343: INFO: stderr: "No resources found.\n"
Dec 24 14:18:07.343: INFO: stdout: ""
Dec 24 14:18:07.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-118 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 24 14:18:07.422: INFO: stderr: ""
Dec 24 14:18:07.422: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:18:07.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-118" for this suite.
Dec 24 14:18:15.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:18:15.587: INFO: namespace kubectl-118 deletion completed in 8.158103178s

• [SLOW TEST:17.908 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:18:15.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 24 14:18:15.717: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4174,SelfLink:/api/v1/namespaces/watch-4174/configmaps/e2e-watch-test-watch-closed,UID:1e2ec224-2762-42ff-97ed-9aa611f5b5f1,ResourceVersion:17900172,Generation:0,CreationTimestamp:2019-12-24 14:18:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 24 14:18:15.717: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4174,SelfLink:/api/v1/namespaces/watch-4174/configmaps/e2e-watch-test-watch-closed,UID:1e2ec224-2762-42ff-97ed-9aa611f5b5f1,ResourceVersion:17900173,Generation:0,CreationTimestamp:2019-12-24 14:18:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 24 14:18:15.749: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4174,SelfLink:/api/v1/namespaces/watch-4174/configmaps/e2e-watch-test-watch-closed,UID:1e2ec224-2762-42ff-97ed-9aa611f5b5f1,ResourceVersion:17900174,Generation:0,CreationTimestamp:2019-12-24 14:18:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 24 14:18:15.749: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4174,SelfLink:/api/v1/namespaces/watch-4174/configmaps/e2e-watch-test-watch-closed,UID:1e2ec224-2762-42ff-97ed-9aa611f5b5f1,ResourceVersion:17900175,Generation:0,CreationTimestamp:2019-12-24 14:18:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:18:15.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4174" for this suite.
Dec 24 14:18:21.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:18:21.941: INFO: namespace watch-4174 deletion completed in 6.181504095s

• [SLOW TEST:6.352 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
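
The contract tested: a watch can be resumed by passing the resourceVersion of the last event a previous watch delivered, and the server replays only changes made after that point (while the version stays within the server's retention window). Through kubectl proxy this is plain HTTP; 17900173 below is the version logged above, so substitute a namespace and version you have observed:

    kubectl proxy --port=8001 &
    curl -sN 'http://127.0.0.1:8001/api/v1/namespaces/watch-4174/configmaps?watch=true&resourceVersion=17900173'
    # streams one JSON watch event per line: {"type":"MODIFIED","object":{...}} ...

------------------------------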
SSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:18:21.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:18:27.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5077" for this suite.
Dec 24 14:18:33.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:18:33.770: INFO: namespace watch-5077 deletion completed in 6.279740248s

• [SLOW TEST:11.829 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
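
The guarantee checked here is ordering, not mere delivery: watchers started from the same resource version must observe the identical event sequence. A rough way to see this by hand, assuming something is concurrently mutating configmaps in the namespace and that jq and coreutils timeout are available:

    kubectl proxy --port=8001 &
    RV=$(kubectl get configmaps -o jsonpath='{.metadata.resourceVersion}')   # list resource version
    URL="http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}"
    timeout 10 curl -sN "$URL" | jq -r '.object.metadata.resourceVersion' > watch-a.txt & A=$!
    timeout 10 curl -sN "$URL" | jq -r '.object.metadata.resourceVersion' > watch-b.txt & B=$!
    wait $A $B
    diff watch-a.txt watch-b.txt && echo 'both watchers saw the same order'
    kill %1   # stop the proxy

------------------------------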
SSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:18:33.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 14:18:33.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:18:42.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-878" for this suite.
Dec 24 14:19:26.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:19:26.649: INFO: namespace pods-878 deletion completed in 44.243505961s

• [SLOW TEST:52.879 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
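
kubectl exec normally upgrades its connection via SPDY; this spec drives the same exec subresource over a WebSocket instead. The everyday equivalent plus the underlying URL the test hits (pod name hypothetical):

    kubectl exec websocket-demo -- /bin/sh -c 'echo remote execution works'
    # underlying API endpoint (upgraded to WebSocket or SPDY):
    #   GET /api/v1/namespaces/<ns>/pods/websocket-demo/exec?command=echo&command=hi&stdout=true

------------------------------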
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:19:26.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 24 14:19:26.977: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5757,SelfLink:/api/v1/namespaces/watch-5757/configmaps/e2e-watch-test-resource-version,UID:fd138dbb-be97-42b9-bf37-41fbd9a906d4,ResourceVersion:17900417,Generation:0,CreationTimestamp:2019-12-24 14:19:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 24 14:19:26.977: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5757,SelfLink:/api/v1/namespaces/watch-5757/configmaps/e2e-watch-test-resource-version,UID:fd138dbb-be97-42b9-bf37-41fbd9a906d4,ResourceVersion:17900418,Generation:0,CreationTimestamp:2019-12-24 14:19:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:19:26.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5757" for this suite.
Dec 24 14:19:32.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:19:33.116: INFO: namespace watch-5757 deletion completed in 6.134832921s

• [SLOW TEST:6.467 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
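
Same resume semantics as the Watchers spec above, but the starting version is captured from an object rather than from a previous watch: read metadata.resourceVersion after the first update, open the watch there, and only the later MODIFIED and DELETED events arrive. Sketch (configmap name hypothetical):

    RV=$(kubectl get configmap e2e-watch-test -o jsonpath='{.metadata.resourceVersion}')
    kubectl proxy --port=8001 &
    curl -sN "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}"

------------------------------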
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:19:33.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-29753d19-c44d-45c1-b92e-f9916f99c546
STEP: Creating a pod to test consume configMaps
Dec 24 14:19:33.243: INFO: Waiting up to 5m0s for pod "pod-configmaps-e7aa1aa3-a96a-46f5-b624-cbff36beb698" in namespace "configmap-2196" to be "success or failure"
Dec 24 14:19:33.259: INFO: Pod "pod-configmaps-e7aa1aa3-a96a-46f5-b624-cbff36beb698": Phase="Pending", Reason="", readiness=false. Elapsed: 16.547272ms
Dec 24 14:19:35.274: INFO: Pod "pod-configmaps-e7aa1aa3-a96a-46f5-b624-cbff36beb698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030993271s
Dec 24 14:19:37.285: INFO: Pod "pod-configmaps-e7aa1aa3-a96a-46f5-b624-cbff36beb698": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042065166s
Dec 24 14:19:39.293: INFO: Pod "pod-configmaps-e7aa1aa3-a96a-46f5-b624-cbff36beb698": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050480126s
Dec 24 14:19:41.302: INFO: Pod "pod-configmaps-e7aa1aa3-a96a-46f5-b624-cbff36beb698": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059216729s
STEP: Saw pod success
Dec 24 14:19:41.302: INFO: Pod "pod-configmaps-e7aa1aa3-a96a-46f5-b624-cbff36beb698" satisfied condition "success or failure"
Dec 24 14:19:41.307: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e7aa1aa3-a96a-46f5-b624-cbff36beb698 container configmap-volume-test: 
STEP: delete the pod
Dec 24 14:19:41.552: INFO: Waiting for pod pod-configmaps-e7aa1aa3-a96a-46f5-b624-cbff36beb698 to disappear
Dec 24 14:19:41.561: INFO: Pod pod-configmaps-e7aa1aa3-a96a-46f5-b624-cbff36beb698 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:19:41.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2196" for this suite.
Dec 24 14:19:47.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:19:47.784: INFO: namespace configmap-2196 deletion completed in 6.212934964s

• [SLOW TEST:14.667 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
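
What this test builds is, in essence, a pod that mounts a ConfigMap volume while running under a non-root UID. A hedged sketch with the typed Go API — the image, UID, names, and namespace are illustrative (the suite uses its own mounttest image and generated names), and the clientset is assumed to be built from the kubeconfig as in the watch sketch above:

// Assumed imports: context; corev1 "k8s.io/api/core/v1";
// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"; "k8s.io/client-go/kubernetes".
func createNonRootConfigMapPod(ctx context.Context, cs *kubernetes.Clientset) error {
	runAsUser := int64(1000) // any non-zero UID exercises the "as non-root" path
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &runAsUser},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // illustrative stand-in for the suite's mounttest image
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/* && id -u"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	_, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	return err
}

------------------------------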
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:19:47.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 24 14:19:47.969: INFO: Waiting up to 5m0s for pod "pod-1e807995-ba26-4036-aaeb-97afb5f569eb" in namespace "emptydir-5411" to be "success or failure"
Dec 24 14:19:47.996: INFO: Pod "pod-1e807995-ba26-4036-aaeb-97afb5f569eb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.759612ms
Dec 24 14:19:50.004: INFO: Pod "pod-1e807995-ba26-4036-aaeb-97afb5f569eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035369798s
Dec 24 14:19:52.021: INFO: Pod "pod-1e807995-ba26-4036-aaeb-97afb5f569eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051728442s
Dec 24 14:19:54.040: INFO: Pod "pod-1e807995-ba26-4036-aaeb-97afb5f569eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070782537s
Dec 24 14:19:56.052: INFO: Pod "pod-1e807995-ba26-4036-aaeb-97afb5f569eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083329338s
STEP: Saw pod success
Dec 24 14:19:56.052: INFO: Pod "pod-1e807995-ba26-4036-aaeb-97afb5f569eb" satisfied condition "success or failure"
Dec 24 14:19:56.057: INFO: Trying to get logs from node iruya-node pod pod-1e807995-ba26-4036-aaeb-97afb5f569eb container test-container: 
STEP: delete the pod
Dec 24 14:19:56.098: INFO: Waiting for pod pod-1e807995-ba26-4036-aaeb-97afb5f569eb to disappear
Dec 24 14:19:56.138: INFO: Pod pod-1e807995-ba26-4036-aaeb-97afb5f569eb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:19:56.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5411" for this suite.
Dec 24 14:20:02.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:20:02.343: INFO: namespace emptydir-5411 deletion completed in 6.200598079s

• [SLOW TEST:14.559 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
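
The (root,0644,tmpfs) variant asks for memory-backed emptyDir storage and checks that a root-owned file created with mode 0644 has the expected permissions and filesystem type. A sketch under the same assumptions as before; the shell command approximates what the suite's mounttest binary verifies:

// Assumed imports: context; corev1 "k8s.io/api/core/v1";
// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"; "k8s.io/client-go/kubernetes".
func createTmpfsEmptyDirPod(ctx context.Context, cs *kubernetes.Clientset) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative
				// Create a file as root with mode 0644 and show perms and fs type.
				Command: []string{"sh", "-c",
					"echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	_, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	return err
}

------------------------------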
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:20:02.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 24 14:20:02.442: INFO: Waiting up to 5m0s for pod "downward-api-80123803-5d4c-4247-b717-aa930c33039f" in namespace "downward-api-2162" to be "success or failure"
Dec 24 14:20:02.451: INFO: Pod "downward-api-80123803-5d4c-4247-b717-aa930c33039f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.131062ms
Dec 24 14:20:04.467: INFO: Pod "downward-api-80123803-5d4c-4247-b717-aa930c33039f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024933104s
Dec 24 14:20:06.482: INFO: Pod "downward-api-80123803-5d4c-4247-b717-aa930c33039f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040175656s
Dec 24 14:20:08.498: INFO: Pod "downward-api-80123803-5d4c-4247-b717-aa930c33039f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055438172s
Dec 24 14:20:10.511: INFO: Pod "downward-api-80123803-5d4c-4247-b717-aa930c33039f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068323506s
STEP: Saw pod success
Dec 24 14:20:10.511: INFO: Pod "downward-api-80123803-5d4c-4247-b717-aa930c33039f" satisfied condition "success or failure"
Dec 24 14:20:10.514: INFO: Trying to get logs from node iruya-node pod downward-api-80123803-5d4c-4247-b717-aa930c33039f container dapi-container: 
STEP: delete the pod
Dec 24 14:20:10.690: INFO: Waiting for pod downward-api-80123803-5d4c-4247-b717-aa930c33039f to disappear
Dec 24 14:20:10.758: INFO: Pod downward-api-80123803-5d4c-4247-b717-aa930c33039f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:20:10.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2162" for this suite.
Dec 24 14:20:16.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:20:16.950: INFO: namespace downward-api-2162 deletion completed in 6.182222861s

• [SLOW TEST:14.606 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
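
The wrinkle in this Downward API test is that the container declares no resource limits at all, so the resourceFieldRef values fall back to the node's allocatable capacity. An illustrative sketch (function name, namespace, and image are assumptions; the mechanism is the env valueFrom shown):

// Assumed imports: context; corev1 "k8s.io/api/core/v1";
// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"; "k8s.io/client-go/kubernetes".
func createDownwardAPIDefaultsPod(ctx context.Context, cs *kubernetes.Clientset) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"},
				// No Resources set: the kubelet substitutes node allocatable
				// values for limits.cpu and limits.memory.
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	_, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	return err
}

------------------------------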
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:20:16.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 14:20:17.042: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 24.180366ms)
Dec 24 14:20:17.050: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.593718ms)
Dec 24 14:20:17.055: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.582244ms)
Dec 24 14:20:17.059: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.414656ms)
Dec 24 14:20:17.072: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.410892ms)
Dec 24 14:20:17.078: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.842068ms)
Dec 24 14:20:17.081: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.126849ms)
Dec 24 14:20:17.084: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.003765ms)
Dec 24 14:20:17.087: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.879805ms)
Dec 24 14:20:17.090: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.285433ms)
Dec 24 14:20:17.094: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.541422ms)
Dec 24 14:20:17.097: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.235838ms)
Dec 24 14:20:17.100: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.435134ms)
Dec 24 14:20:17.104: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.086739ms)
Dec 24 14:20:17.107: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.451471ms)
Dec 24 14:20:17.111: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.649006ms)
Dec 24 14:20:17.114: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.520419ms)
Dec 24 14:20:17.117: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.130202ms)
Dec 24 14:20:17.122: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.165264ms)
Dec 24 14:20:17.125: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.745664ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:20:17.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8153" for this suite.
Dec 24 14:20:23.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:20:23.266: INFO: namespace proxy-8153 deletion completed in 6.136313758s

• [SLOW TEST:6.316 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
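
Each numbered line in the proxy test above is one GET against the node proxy subresource: the apiserver forwards the request to the kubelet on the explicitly named port (10250) and returns its /logs/ directory listing. A sketch of the same request via client-go, assuming a clientset built as in the earlier watch sketch; DoRaw taking a context is the current signature:

// Assumed imports: context; "k8s.io/client-go/kubernetes".
func kubeletLogsViaProxy(ctx context.Context, cs *kubernetes.Clientset) ([]byte, error) {
	// Equivalent to GET /api/v1/nodes/iruya-node:10250/proxy/logs/ as seen
	// in the log: node name plus an explicit kubelet port, proxied by the
	// apiserver.
	return cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("iruya-node:10250").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(ctx)
}

------------------------------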
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:20:23.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3811
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 24 14:20:23.358: INFO: Found 0 stateful pods, waiting for 3
Dec 24 14:20:33.368: INFO: Found 2 stateful pods, waiting for 3
Dec 24 14:20:43.368: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 14:20:43.368: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 14:20:43.368: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 24 14:20:53.377: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 14:20:53.377: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 14:20:53.377: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 14:20:53.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3811 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 14:20:55.825: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 24 14:20:55.825: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 14:20:55.825: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 24 14:21:05.912: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 24 14:21:15.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3811 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:21:16.327: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 24 14:21:16.327: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 24 14:21:16.327: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 24 14:21:26.361: INFO: Waiting for StatefulSet statefulset-3811/ss2 to complete update
Dec 24 14:21:26.361: INFO: Waiting for Pod statefulset-3811/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 14:21:26.361: INFO: Waiting for Pod statefulset-3811/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 14:21:26.361: INFO: Waiting for Pod statefulset-3811/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 14:21:36.380: INFO: Waiting for StatefulSet statefulset-3811/ss2 to complete update
Dec 24 14:21:36.380: INFO: Waiting for Pod statefulset-3811/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 14:21:36.380: INFO: Waiting for Pod statefulset-3811/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 14:21:46.379: INFO: Waiting for StatefulSet statefulset-3811/ss2 to complete update
Dec 24 14:21:46.379: INFO: Waiting for Pod statefulset-3811/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 14:21:46.379: INFO: Waiting for Pod statefulset-3811/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 14:21:56.374: INFO: Waiting for StatefulSet statefulset-3811/ss2 to complete update
Dec 24 14:21:56.375: INFO: Waiting for Pod statefulset-3811/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 14:22:06.375: INFO: Waiting for StatefulSet statefulset-3811/ss2 to complete update
Dec 24 14:22:06.375: INFO: Waiting for Pod statefulset-3811/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 14:22:16.378: INFO: Waiting for StatefulSet statefulset-3811/ss2 to complete update
Dec 24 14:22:16.378: INFO: Waiting for Pod statefulset-3811/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Dec 24 14:22:27.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3811 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 14:22:27.749: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 24 14:22:27.749: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 14:22:27.749: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 14:22:37.853: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 24 14:22:47.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3811 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:22:48.607: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 24 14:22:48.607: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 24 14:22:48.607: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 24 14:22:58.667: INFO: Waiting for StatefulSet statefulset-3811/ss2 to complete update
Dec 24 14:22:58.667: INFO: Waiting for Pod statefulset-3811/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 14:22:58.667: INFO: Waiting for Pod statefulset-3811/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 14:23:08.788: INFO: Waiting for StatefulSet statefulset-3811/ss2 to complete update
Dec 24 14:23:08.788: INFO: Waiting for Pod statefulset-3811/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 14:23:08.789: INFO: Waiting for Pod statefulset-3811/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 14:23:18.686: INFO: Waiting for StatefulSet statefulset-3811/ss2 to complete update
Dec 24 14:23:18.686: INFO: Waiting for Pod statefulset-3811/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 14:23:28.681: INFO: Waiting for StatefulSet statefulset-3811/ss2 to complete update
Dec 24 14:23:28.681: INFO: Waiting for Pod statefulset-3811/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 24 14:23:38.742: INFO: Waiting for StatefulSet statefulset-3811/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 24 14:23:48.682: INFO: Deleting all statefulset in ns statefulset-3811
Dec 24 14:23:48.688: INFO: Scaling statefulset ss2 to 0
Dec 24 14:24:18.730: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 14:24:18.734: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:24:18.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3811" for this suite.
Dec 24 14:24:26.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:24:26.949: INFO: namespace statefulset-3811 deletion completed in 8.159493272s

• [SLOW TEST:243.682 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
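
The rolling update test boils down to editing the StatefulSet's pod template and letting the controller replace pods in reverse ordinal order, then making the opposite edit to roll back. A sketch with names taken from the log and error handling trimmed to the essentials:

// Assumed imports: context; metav1 "k8s.io/apimachinery/pkg/apis/meta/v1";
// "k8s.io/client-go/kubernetes".
func rollStatefulSetImage(ctx context.Context, cs *kubernetes.Clientset, image string) error {
	ssClient := cs.AppsV1().StatefulSets("statefulset-3811")
	ss, err := ssClient.Get(ctx, "ss2", metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Changing the pod template creates a new controller revision; with the
	// default RollingUpdate strategy the controller replaces pods in reverse
	// ordinal order (ss2-2, ss2-1, ss2-0), matching the log above.
	ss.Spec.Template.Spec.Containers[0].Image = image
	_, err = ssClient.Update(ctx, ss, metav1.UpdateOptions{})
	return err
}

Calling this with docker.io/library/nginx:1.15-alpine performs the update step, and calling it again with docker.io/library/nginx:1.14-alpine performs the rollback the same way.

------------------------------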
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:24:26.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 14:24:27.056: INFO: Creating deployment "test-recreate-deployment"
Dec 24 14:24:27.062: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 24 14:24:27.095: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Dec 24 14:24:29.115: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 24 14:24:29.118: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712794267, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712794267, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712794267, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712794267, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 14:24:31.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712794267, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712794267, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712794267, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712794267, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 14:24:33.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712794267, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712794267, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712794267, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712794267, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 14:24:35.129: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 24 14:24:35.147: INFO: Updating deployment test-recreate-deployment
Dec 24 14:24:35.147: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 24 14:24:35.536: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-484,SelfLink:/apis/apps/v1/namespaces/deployment-484/deployments/test-recreate-deployment,UID:8a3f5e9f-83be-4f42-ae84-54c16ce37ab0,ResourceVersion:17901315,Generation:2,CreationTimestamp:2019-12-24 14:24:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-24 14:24:35 +0000 UTC 2019-12-24 14:24:35 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-24 14:24:35 +0000 UTC 2019-12-24 14:24:27 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 24 14:24:35.545: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-484,SelfLink:/apis/apps/v1/namespaces/deployment-484/replicasets/test-recreate-deployment-5c8c9cc69d,UID:799aa829-4b2a-4182-b564-e2e752810201,ResourceVersion:17901312,Generation:1,CreationTimestamp:2019-12-24 14:24:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8a3f5e9f-83be-4f42-ae84-54c16ce37ab0 0xc0021d6a37 0xc0021d6a38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 24 14:24:35.545: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 24 14:24:35.546: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-484,SelfLink:/apis/apps/v1/namespaces/deployment-484/replicasets/test-recreate-deployment-6df85df6b9,UID:c527b537-c163-46b4-b2a3-41636a59c011,ResourceVersion:17901303,Generation:2,CreationTimestamp:2019-12-24 14:24:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8a3f5e9f-83be-4f42-ae84-54c16ce37ab0 0xc0021d6b07 0xc0021d6b08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 24 14:24:35.551: INFO: Pod "test-recreate-deployment-5c8c9cc69d-76fs5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-76fs5,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-484,SelfLink:/api/v1/namespaces/deployment-484/pods/test-recreate-deployment-5c8c9cc69d-76fs5,UID:a7b9f08a-86b1-4d0f-a022-034a6636060a,ResourceVersion:17901311,Generation:0,CreationTimestamp:2019-12-24 14:24:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 799aa829-4b2a-4182-b564-e2e752810201 0xc0021d73e7 0xc0021d73e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dkhv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dkhv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7dkhv true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d7460} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d7480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:24:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:24:35.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-484" for this suite.
Dec 24 14:24:43.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:24:43.695: INFO: namespace deployment-484 deletion completed in 8.126631261s

• [SLOW TEST:16.746 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
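
A Recreate deployment differs from the default RollingUpdate in exactly the property this test checks: the old ReplicaSet is scaled to zero before the new one starts any pods, so old and new pods never run together. A sketch of such a deployment — the labels and images mirror the object dumps above, while the function name and namespace are illustrative:

// Assumed imports: context; appsv1 "k8s.io/api/apps/v1";
// corev1 "k8s.io/api/core/v1"; metav1 "k8s.io/apimachinery/pkg/apis/meta/v1";
// "k8s.io/client-go/kubernetes".
func createRecreateDeployment(ctx context.Context, cs *kubernetes.Clientset) error {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: scale the old ReplicaSet to zero before the new one
			// starts, rather than rolling pods over gradually.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	_, err := cs.AppsV1().Deployments("default").Create(ctx, d, metav1.CreateOptions{})
	return err
}

------------------------------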
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:24:43.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 14:24:44.099: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e7f2bea-d8bf-4a82-af88-df04c6e59c3f" in namespace "projected-8919" to be "success or failure"
Dec 24 14:24:44.126: INFO: Pod "downwardapi-volume-3e7f2bea-d8bf-4a82-af88-df04c6e59c3f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.047729ms
Dec 24 14:24:46.137: INFO: Pod "downwardapi-volume-3e7f2bea-d8bf-4a82-af88-df04c6e59c3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038228502s
Dec 24 14:24:48.179: INFO: Pod "downwardapi-volume-3e7f2bea-d8bf-4a82-af88-df04c6e59c3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080456844s
Dec 24 14:24:50.244: INFO: Pod "downwardapi-volume-3e7f2bea-d8bf-4a82-af88-df04c6e59c3f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145125263s
Dec 24 14:24:52.254: INFO: Pod "downwardapi-volume-3e7f2bea-d8bf-4a82-af88-df04c6e59c3f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155529054s
Dec 24 14:24:54.264: INFO: Pod "downwardapi-volume-3e7f2bea-d8bf-4a82-af88-df04c6e59c3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.165753645s
STEP: Saw pod success
Dec 24 14:24:54.265: INFO: Pod "downwardapi-volume-3e7f2bea-d8bf-4a82-af88-df04c6e59c3f" satisfied condition "success or failure"
Dec 24 14:24:54.271: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3e7f2bea-d8bf-4a82-af88-df04c6e59c3f container client-container: 
STEP: delete the pod
Dec 24 14:24:54.371: INFO: Waiting for pod downwardapi-volume-3e7f2bea-d8bf-4a82-af88-df04c6e59c3f to disappear
Dec 24 14:24:54.377: INFO: Pod downwardapi-volume-3e7f2bea-d8bf-4a82-af88-df04c6e59c3f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:24:54.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8919" for this suite.
Dec 24 14:25:00.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:25:00.525: INFO: namespace projected-8919 deletion completed in 6.141843295s

• [SLOW TEST:16.830 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
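
DefaultMode on a projected downward API volume sets the permission bits on every file the volume projects. A sketch — the mode value and file path are illustrative, since the test's actual values do not appear in this log:

// Assumed imports: context; corev1 "k8s.io/api/core/v1";
// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"; "k8s.io/client-go/kubernetes".
func createProjectedDownwardAPIPod(ctx context.Context, cs *kubernetes.Clientset) error {
	defaultMode := int32(0400) // hypothetical mode; it applies to every projected file
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode,
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // illustrative
				Command:      []string{"sh", "-c", "stat -c '%a' /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	_, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	return err
}

------------------------------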
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:25:00.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-71b1dd42-40da-49e4-9a17-4a4c853c1019
STEP: Creating a pod to test consume configMaps
Dec 24 14:25:00.710: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2edfd676-6dd5-4395-918c-ffb6cd7211aa" in namespace "projected-7384" to be "success or failure"
Dec 24 14:25:00.769: INFO: Pod "pod-projected-configmaps-2edfd676-6dd5-4395-918c-ffb6cd7211aa": Phase="Pending", Reason="", readiness=false. Elapsed: 58.837457ms
Dec 24 14:25:02.863: INFO: Pod "pod-projected-configmaps-2edfd676-6dd5-4395-918c-ffb6cd7211aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15308603s
Dec 24 14:25:04.937: INFO: Pod "pod-projected-configmaps-2edfd676-6dd5-4395-918c-ffb6cd7211aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227089999s
Dec 24 14:25:07.811: INFO: Pod "pod-projected-configmaps-2edfd676-6dd5-4395-918c-ffb6cd7211aa": Phase="Pending", Reason="", readiness=false. Elapsed: 7.10096242s
Dec 24 14:25:09.825: INFO: Pod "pod-projected-configmaps-2edfd676-6dd5-4395-918c-ffb6cd7211aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.115164452s
STEP: Saw pod success
Dec 24 14:25:09.825: INFO: Pod "pod-projected-configmaps-2edfd676-6dd5-4395-918c-ffb6cd7211aa" satisfied condition "success or failure"
Dec 24 14:25:09.830: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2edfd676-6dd5-4395-918c-ffb6cd7211aa container projected-configmap-volume-test: 
STEP: delete the pod
Dec 24 14:25:09.922: INFO: Waiting for pod pod-projected-configmaps-2edfd676-6dd5-4395-918c-ffb6cd7211aa to disappear
Dec 24 14:25:09.933: INFO: Pod pod-projected-configmaps-2edfd676-6dd5-4395-918c-ffb6cd7211aa no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:25:09.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7384" for this suite.
Dec 24 14:25:16.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:25:16.165: INFO: namespace projected-7384 deletion completed in 6.170997684s

• [SLOW TEST:15.640 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
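
Here the same ConfigMap is projected into two separate volumes, mounted at two paths inside a single container. A sketch under the same assumptions as the earlier blocks; the key name "key", mount paths, and namespace are illustrative:

// Assumed imports: context; corev1 "k8s.io/api/core/v1";
// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"; "k8s.io/client-go/kubernetes".
func createMultiVolumeProjectedConfigMapPod(ctx context.Context, cs *kubernetes.Clientset) error {
	cmProjection := func() corev1.VolumeSource {
		return corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test"},
					},
				}},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			// The same ConfigMap backs two independent volumes...
			Volumes: []corev1.Volume{
				{Name: "projected-configmap-volume-1", VolumeSource: cmProjection()},
				{Name: "projected-configmap-volume-2", VolumeSource: cmProjection()},
			},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "diff /etc/cm-1/key /etc/cm-2/key && echo same"},
				// ...mounted at two paths in the same container.
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-configmap-volume-1", MountPath: "/etc/cm-1"},
					{Name: "projected-configmap-volume-2", MountPath: "/etc/cm-2"},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	_, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	return err
}

------------------------------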
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:25:16.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5231.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5231.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5231.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5231.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 24 14:25:30.384: INFO: File wheezy_udp@dns-test-service-3.dns-5231.svc.cluster.local from pod  dns-5231/dns-test-b9d60328-ce47-4ddb-83ca-7889552e9d2d contains '' instead of 'foo.example.com.'
Dec 24 14:25:30.402: INFO: File jessie_udp@dns-test-service-3.dns-5231.svc.cluster.local from pod  dns-5231/dns-test-b9d60328-ce47-4ddb-83ca-7889552e9d2d contains '' instead of 'foo.example.com.'
Dec 24 14:25:30.402: INFO: Lookups using dns-5231/dns-test-b9d60328-ce47-4ddb-83ca-7889552e9d2d failed for: [wheezy_udp@dns-test-service-3.dns-5231.svc.cluster.local jessie_udp@dns-test-service-3.dns-5231.svc.cluster.local]

Dec 24 14:25:35.428: INFO: DNS probes using dns-test-b9d60328-ce47-4ddb-83ca-7889552e9d2d succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5231.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5231.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5231.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5231.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 24 14:25:47.748: INFO: File wheezy_udp@dns-test-service-3.dns-5231.svc.cluster.local from pod  dns-5231/dns-test-a6be944c-0773-4efa-8323-db28a32fe5a2 contains '' instead of 'bar.example.com.'
Dec 24 14:25:47.758: INFO: File jessie_udp@dns-test-service-3.dns-5231.svc.cluster.local from pod  dns-5231/dns-test-a6be944c-0773-4efa-8323-db28a32fe5a2 contains '' instead of 'bar.example.com.'
Dec 24 14:25:47.758: INFO: Lookups using dns-5231/dns-test-a6be944c-0773-4efa-8323-db28a32fe5a2 failed for: [wheezy_udp@dns-test-service-3.dns-5231.svc.cluster.local jessie_udp@dns-test-service-3.dns-5231.svc.cluster.local]

Dec 24 14:25:52.771: INFO: File wheezy_udp@dns-test-service-3.dns-5231.svc.cluster.local from pod  dns-5231/dns-test-a6be944c-0773-4efa-8323-db28a32fe5a2 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 24 14:25:52.779: INFO: File jessie_udp@dns-test-service-3.dns-5231.svc.cluster.local from pod  dns-5231/dns-test-a6be944c-0773-4efa-8323-db28a32fe5a2 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 24 14:25:52.779: INFO: Lookups using dns-5231/dns-test-a6be944c-0773-4efa-8323-db28a32fe5a2 failed for: [wheezy_udp@dns-test-service-3.dns-5231.svc.cluster.local jessie_udp@dns-test-service-3.dns-5231.svc.cluster.local]

Dec 24 14:25:57.777: INFO: File wheezy_udp@dns-test-service-3.dns-5231.svc.cluster.local from pod  dns-5231/dns-test-a6be944c-0773-4efa-8323-db28a32fe5a2 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 24 14:25:57.789: INFO: File jessie_udp@dns-test-service-3.dns-5231.svc.cluster.local from pod  dns-5231/dns-test-a6be944c-0773-4efa-8323-db28a32fe5a2 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 24 14:25:57.789: INFO: Lookups using dns-5231/dns-test-a6be944c-0773-4efa-8323-db28a32fe5a2 failed for: [wheezy_udp@dns-test-service-3.dns-5231.svc.cluster.local jessie_udp@dns-test-service-3.dns-5231.svc.cluster.local]

Dec 24 14:26:02.822: INFO: DNS probes using dns-test-a6be944c-0773-4efa-8323-db28a32fe5a2 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5231.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5231.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5231.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5231.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 24 14:26:17.223: INFO: File wheezy_udp@dns-test-service-3.dns-5231.svc.cluster.local from pod  dns-5231/dns-test-06cc8813-3607-4151-b504-09a683069144 contains '' instead of '10.110.95.124'
Dec 24 14:26:17.230: INFO: File jessie_udp@dns-test-service-3.dns-5231.svc.cluster.local from pod  dns-5231/dns-test-06cc8813-3607-4151-b504-09a683069144 contains '' instead of '10.110.95.124'
Dec 24 14:26:17.230: INFO: Lookups using dns-5231/dns-test-06cc8813-3607-4151-b504-09a683069144 failed for: [wheezy_udp@dns-test-service-3.dns-5231.svc.cluster.local jessie_udp@dns-test-service-3.dns-5231.svc.cluster.local]

Dec 24 14:26:22.247: INFO: DNS probes using dns-test-06cc8813-3607-4151-b504-09a683069144 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:26:22.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5231" for this suite.
Dec 24 14:26:30.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:26:30.748: INFO: namespace dns-5231 deletion completed in 8.20920487s

• [SLOW TEST:74.583 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
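
An ExternalName service is pure DNS: cluster DNS answers the service name with a CNAME to spec.externalName, which is why the probes above first expect foo.example.com., then bar.example.com. after the update, and finally an A record for 10.110.95.124 once the service becomes ClusterIP. A sketch of the initial service (namespace assumed):

// Assumed imports: context; corev1 "k8s.io/api/core/v1";
// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"; "k8s.io/client-go/kubernetes".
func createExternalNameService(ctx context.Context, cs *kubernetes.Clientset) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com", // the CNAME target returned for the service name
		},
	}
	// Updating Spec.ExternalName re-points the CNAME; switching Type to
	// ClusterIP makes the same name resolve to an A record instead, which is
	// exactly the progression the probe pods observe above.
	_, err := cs.CoreV1().Services("default").Create(ctx, svc, metav1.CreateOptions{})
	return err
}

------------------------------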
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:26:30.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 24 14:26:30.851: INFO: Waiting up to 5m0s for pod "downward-api-2744434b-f5cd-4c47-aefc-cf3022fe8ed3" in namespace "downward-api-9650" to be "success or failure"
Dec 24 14:26:30.925: INFO: Pod "downward-api-2744434b-f5cd-4c47-aefc-cf3022fe8ed3": Phase="Pending", Reason="", readiness=false. Elapsed: 73.935607ms
Dec 24 14:26:32.932: INFO: Pod "downward-api-2744434b-f5cd-4c47-aefc-cf3022fe8ed3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081117845s
Dec 24 14:26:34.943: INFO: Pod "downward-api-2744434b-f5cd-4c47-aefc-cf3022fe8ed3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092650456s
Dec 24 14:26:36.996: INFO: Pod "downward-api-2744434b-f5cd-4c47-aefc-cf3022fe8ed3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145021909s
Dec 24 14:26:39.012: INFO: Pod "downward-api-2744434b-f5cd-4c47-aefc-cf3022fe8ed3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.161419606s
STEP: Saw pod success
Dec 24 14:26:39.012: INFO: Pod "downward-api-2744434b-f5cd-4c47-aefc-cf3022fe8ed3" satisfied condition "success or failure"
Dec 24 14:26:39.017: INFO: Trying to get logs from node iruya-node pod downward-api-2744434b-f5cd-4c47-aefc-cf3022fe8ed3 container dapi-container: 
STEP: delete the pod
Dec 24 14:26:39.201: INFO: Waiting for pod downward-api-2744434b-f5cd-4c47-aefc-cf3022fe8ed3 to disappear
Dec 24 14:26:39.208: INFO: Pod downward-api-2744434b-f5cd-4c47-aefc-cf3022fe8ed3 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:26:39.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9650" for this suite.
Dec 24 14:26:45.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:26:45.411: INFO: namespace downward-api-9650 deletion completed in 6.198620801s

• [SLOW TEST:14.661 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
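
These env vars come from fieldRef selectors on pod metadata and status. A sketch covering this test and the host IP test that follows — env var names and the image are illustrative, and the fieldEnv helper is a local convenience, not part of any API:

// Assumed imports: context; corev1 "k8s.io/api/core/v1";
// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"; "k8s.io/client-go/kubernetes".
func createDownwardAPIEnvPod(ctx context.Context, cs *kubernetes.Clientset) error {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{Name: name, ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
		}}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "env | grep -E 'POD_|HOST_'"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
					// status.hostIP is the subject of the next test in the log.
					fieldEnv("HOST_IP", "status.hostIP"),
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	_, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	return err
}

------------------------------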
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:26:45.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 24 14:26:45.511: INFO: Waiting up to 5m0s for pod "downward-api-35764a94-22a2-46b3-9c46-2db8e74c4f84" in namespace "downward-api-7523" to be "success or failure"
Dec 24 14:26:45.544: INFO: Pod "downward-api-35764a94-22a2-46b3-9c46-2db8e74c4f84": Phase="Pending", Reason="", readiness=false. Elapsed: 33.342117ms
Dec 24 14:26:47.558: INFO: Pod "downward-api-35764a94-22a2-46b3-9c46-2db8e74c4f84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047800471s
Dec 24 14:26:49.569: INFO: Pod "downward-api-35764a94-22a2-46b3-9c46-2db8e74c4f84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058731426s
Dec 24 14:26:51.578: INFO: Pod "downward-api-35764a94-22a2-46b3-9c46-2db8e74c4f84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06694831s
Dec 24 14:26:53.595: INFO: Pod "downward-api-35764a94-22a2-46b3-9c46-2db8e74c4f84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.084337581s
STEP: Saw pod success
Dec 24 14:26:53.595: INFO: Pod "downward-api-35764a94-22a2-46b3-9c46-2db8e74c4f84" satisfied condition "success or failure"
Dec 24 14:26:53.599: INFO: Trying to get logs from node iruya-node pod downward-api-35764a94-22a2-46b3-9c46-2db8e74c4f84 container dapi-container: 
STEP: delete the pod
Dec 24 14:26:53.792: INFO: Waiting for pod downward-api-35764a94-22a2-46b3-9c46-2db8e74c4f84 to disappear
Dec 24 14:26:53.884: INFO: Pod downward-api-35764a94-22a2-46b3-9c46-2db8e74c4f84 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:26:53.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7523" for this suite.
Dec 24 14:26:59.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:27:00.048: INFO: namespace downward-api-7523 deletion completed in 6.158125727s

• [SLOW TEST:14.637 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
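
Editor's note: this variant differs from the previous test only in the fieldRef path, status.hostIP, which resolves to the IP of the node the pod was scheduled on. A self-contained sketch of just that env var; the HOST_IP name is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// status.hostIP is filled in by the kubelet once the pod is bound to a node.
	env := corev1.EnvVar{
		Name: "HOST_IP", // illustrative name
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
		},
	}
	out, _ := json.Marshal(env)
	fmt.Println(string(out)) // prints the JSON form of the env var
}
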
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:27:00.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:27:12.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7009" for this suite.
Dec 24 14:27:18.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:27:18.461: INFO: namespace kubelet-test-7009 deletion completed in 6.196241256s

• [SLOW TEST:18.412 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
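
Editor's note: the test schedules a busybox container whose command always fails and asserts that the kubelet records a terminated reason in the container status. A hedged sketch of such a pod; the pod name is illustrative, and RestartPolicy Never is just one way to surface a terminated state.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A command that always exits non-zero. With RestartPolicy Never the
	// kubelet leaves the container in a terminated state, and a test can
	// assert on Status.ContainerStatuses[0].State.Terminated.Reason
	// (typically "Error" for a non-zero exit).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
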
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:27:18.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Dec 24 14:27:26.681: INFO: Pod pod-hostip-e3af3b24-37d3-4278-8970-95c26ac096e5 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:27:26.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7675" for this suite.
Dec 24 14:27:49.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:27:49.179: INFO: namespace pods-7675 deletion completed in 22.490942919s

• [SLOW TEST:30.718 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
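
Editor's note: a minimal client-go sketch of the same check, polling until status.hostIP is populated. The pod name and namespace are illustrative, and the context-taking Get signature assumes a recent client-go; the v1.15-era client this log was produced with took only a name and options.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite itself uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll until the kubelet has reported status.hostIP for the pod.
	for i := 0; i < 30; i++ {
		pod, err := clientset.CoreV1().Pods("default").Get(context.TODO(), "pod-hostip", metav1.GetOptions{})
		if err == nil && pod.Status.HostIP != "" {
			fmt.Println("hostIP:", pod.Status.HostIP)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("hostIP never reported")
}
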
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:27:49.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1224 14:28:30.175502       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 14:28:30.175: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:28:30.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5277" for this suite.
Dec 24 14:28:50.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:28:50.377: INFO: namespace gc-5277 deletion completed in 20.196835296s

• [SLOW TEST:61.197 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
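
Editor's note: "delete options say so" means the delete request carries PropagationPolicy: Orphan, so the RC is removed while its pods survive, and the 30-second wait above confirms the garbage collector leaves them alone. A sketch with a recent client-go; the RC name and namespace are illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Orphan propagation: delete the controller, keep its pods running.
	orphan := metav1.DeletePropagationOrphan
	err = clientset.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "my-rc", metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
	fmt.Println("orphaning delete issued")
}
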
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:28:50.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 14:29:16.616: INFO: Container started at 2019-12-24 14:28:57 +0000 UTC, pod became ready at 2019-12-24 14:29:16 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:29:16.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1345" for this suite.
Dec 24 14:29:38.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:29:38.840: INFO: namespace container-probe-1345 deletion completed in 22.216350635s

• [SLOW TEST:48.462 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
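
Editor's note: the assertion rests on readiness-probe semantics: the kubelet must not mark the container Ready before initialDelaySeconds has elapsed, and a failing readiness probe never restarts the container (restarts are liveness behaviour). A sketch of such a probe; the exec command and timing values are illustrative, and the ProbeHandler field name assumes a recent k8s.io/api (older vintages embed it as Handler).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	probe := corev1.Probe{
		// The embedded field is named ProbeHandler in recent k8s.io/api releases.
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
		},
		InitialDelaySeconds: 30, // illustrative; the "not ready before initial delay" gate
		PeriodSeconds:       5,
	}
	out, _ := json.MarshalIndent(probe, "", "  ")
	fmt.Println(string(out))
}
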
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:29:38.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6271
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 24 14:29:38.943: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 24 14:30:19.785: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-6271 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 14:30:19.786: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 14:30:20.213: INFO: Waiting for endpoints: map[]
Dec 24 14:30:20.226: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-6271 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 14:30:20.226: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 14:30:20.726: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:30:20.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6271" for this suite.
Dec 24 14:30:44.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:30:44.905: INFO: namespace pod-network-test-6271 deletion completed in 24.165621819s

• [SLOW TEST:66.064 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
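
Editor's note: the curl above asks one test pod's webserver to dial another pod over HTTP and report which hostname answered. A sketch of the same request in Go; the IPs are copied from this run and are only reachable from inside the pod network, and the "responses" JSON shape follows the test webserver's convention.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Ask the dialer pod at 10.44.0.2 to reach the target pod 10.32.0.4:8080
	// over HTTP and report who answered (addresses taken from this run).
	url := "http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// The test webserver replies with JSON such as {"responses":["<hostname>"]}.
	var body struct {
		Responses []string `json:"responses"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		panic(err)
	}
	fmt.Println("answered by:", body.Responses)
}
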
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:30:44.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 24 14:30:45.037: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 24 14:30:50.045: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:30:51.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5922" for this suite.
Dec 24 14:30:57.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:30:57.384: INFO: namespace replication-controller-5922 deletion completed in 6.163552279s

• [SLOW TEST:12.479 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
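
Editor's note: "released" means the RC drops its controller ownerReference once a pod's labels no longer match the selector. A sketch of the triggering relabel with a recent client-go; the pod name, namespace, and replacement label value are illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Relabel the pod so the RC selector no longer matches; the controller
	// then releases it by removing its controller ownerReference.
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)
	pod, err := clientset.CoreV1().Pods("default").Patch(
		context.TODO(), "pod-release-abc12", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ownerReferences after release:", pod.OwnerReferences)
}
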
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:30:57.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-ed26572c-27fd-48f7-ba03-fa26b0c0a205
Dec 24 14:30:57.570: INFO: Pod name my-hostname-basic-ed26572c-27fd-48f7-ba03-fa26b0c0a205: Found 0 pods out of 1
Dec 24 14:31:02.607: INFO: Pod name my-hostname-basic-ed26572c-27fd-48f7-ba03-fa26b0c0a205: Found 1 pods out of 1
Dec 24 14:31:02.607: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ed26572c-27fd-48f7-ba03-fa26b0c0a205" are running
Dec 24 14:31:08.638: INFO: Pod "my-hostname-basic-ed26572c-27fd-48f7-ba03-fa26b0c0a205-89hns" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 14:30:59 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 14:30:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ed26572c-27fd-48f7-ba03-fa26b0c0a205]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 14:30:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ed26572c-27fd-48f7-ba03-fa26b0c0a205]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-24 14:30:57 +0000 UTC Reason: Message:}])
Dec 24 14:31:08.638: INFO: Trying to dial the pod
Dec 24 14:31:13.682: INFO: Controller my-hostname-basic-ed26572c-27fd-48f7-ba03-fa26b0c0a205: Got expected result from replica 1 [my-hostname-basic-ed26572c-27fd-48f7-ba03-fa26b0c0a205-89hns]: "my-hostname-basic-ed26572c-27fd-48f7-ba03-fa26b0c0a205-89hns", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:31:13.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8693" for this suite.
Dec 24 14:31:19.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:31:19.965: INFO: namespace replication-controller-8693 deletion completed in 6.274252668s

• [SLOW TEST:22.580 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
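
Editor's note: the public image here is a serve-hostname-style server that answers on :9376 with its own pod name, which is how the dial at 14:31:13 verifies each replica individually. A sketch of an equivalent RC; the image reference and names are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",
						Image: "k8s.gcr.io/serve_hostname:v1.4", // illustrative image reference
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}
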
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:31:19.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7027
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-7027
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7027
Dec 24 14:31:20.263: INFO: Found 0 stateful pods, waiting for 1
Dec 24 14:31:30.277: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 24 14:31:30.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 14:31:32.677: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 24 14:31:32.678: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 14:31:32.678: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 14:31:32.689: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 24 14:31:42.697: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 24 14:31:42.697: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 14:31:42.769: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 24 14:31:42.770: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  }]
Dec 24 14:31:42.770: INFO: ss-1              Pending         []
Dec 24 14:31:42.770: INFO: 
Dec 24 14:31:42.770: INFO: StatefulSet ss has not reached scale 3, at 2
Dec 24 14:31:44.551: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.937489349s
Dec 24 14:31:45.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.156192628s
Dec 24 14:31:46.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.898054045s
Dec 24 14:31:47.856: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.884302883s
Dec 24 14:31:50.243: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.850717673s
Dec 24 14:31:51.261: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.464126082s
Dec 24 14:31:52.400: INFO: Verifying statefulset ss doesn't scale past 3 for another 446.458182ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7027
Dec 24 14:31:53.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:31:54.241: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 24 14:31:54.241: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 24 14:31:54.241: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 24 14:31:54.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:31:54.688: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 24 14:31:54.688: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 24 14:31:54.688: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 24 14:31:54.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:31:55.157: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 24 14:31:55.157: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 24 14:31:55.157: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 24 14:31:55.221: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 14:31:55.221: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 14:31:55.221: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
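
Editor's note: "burst scaling" is StatefulSet podManagementPolicy: Parallel, which lets the controller create and delete pods without waiting for each predecessor to become Ready; that is why the scale-up above proceeded while ss-0 was unhealthy. A sketch of such a spec; the labels and nginx image echo this run, the rest is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	labels := map[string]string{"app": "ss"} // illustrative selector labels
	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // headless service created above
			// Parallel disables the default ordered, one-at-a-time behaviour.
			PodManagementPolicy: appsv1.ParallelPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
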
STEP: Scale down will not halt with unhealthy stateful pod
Dec 24 14:31:55.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 14:31:55.845: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 24 14:31:55.845: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 14:31:55.845: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 14:31:55.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 14:31:56.241: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 24 14:31:56.241: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 14:31:56.241: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 14:31:56.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 24 14:31:56.804: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 24 14:31:56.804: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 24 14:31:56.804: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 24 14:31:56.804: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 14:31:56.812: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 24 14:32:06.837: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 24 14:32:06.837: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 24 14:32:06.837: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 24 14:32:06.908: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 24 14:32:06.908: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  }]
Dec 24 14:32:06.908: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:06.908: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:06.908: INFO: 
Dec 24 14:32:06.908: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 24 14:32:09.318: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 24 14:32:09.318: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  }]
Dec 24 14:32:09.318: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:09.318: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:09.318: INFO: 
Dec 24 14:32:09.318: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 24 14:32:10.329: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 24 14:32:10.329: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  }]
Dec 24 14:32:10.330: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:10.330: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:10.330: INFO: 
Dec 24 14:32:10.330: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 24 14:32:11.696: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 24 14:32:11.696: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  }]
Dec 24 14:32:11.696: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:11.696: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:11.696: INFO: 
Dec 24 14:32:11.696: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 24 14:32:12.704: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 24 14:32:12.705: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  }]
Dec 24 14:32:12.705: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:12.705: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:12.705: INFO: 
Dec 24 14:32:12.705: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 24 14:32:13.714: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 24 14:32:13.714: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  }]
Dec 24 14:32:13.714: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:13.714: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:13.714: INFO: 
Dec 24 14:32:13.714: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 24 14:32:14.721: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 24 14:32:14.721: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  }]
Dec 24 14:32:14.721: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:14.721: INFO: 
Dec 24 14:32:14.721: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 24 14:32:15.736: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 24 14:32:15.736: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  }]
Dec 24 14:32:15.736: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:15.736: INFO: 
Dec 24 14:32:15.736: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 24 14:32:16.754: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 24 14:32:16.754: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:20 +0000 UTC  }]
Dec 24 14:32:16.754: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:31:42 +0000 UTC  }]
Dec 24 14:32:16.754: INFO: 
Dec 24 14:32:16.754: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7027
Dec 24 14:32:17.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:32:18.084: INFO: rc: 1
Dec 24 14:32:18.085: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00278ed50 exit status 1   true [0xc00035bdd8 0xc00035be88 0xc00035bf50] [0xc00035bdd8 0xc00035be88 0xc00035bf50] [0xc00035be70 0xc00035bed8] [0xba6c50 0xba6c50] 0xc0030f7740 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Dec 24 14:32:28.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:32:28.263: INFO: rc: 1
Dec 24 14:32:28.263: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0012ea150 exit status 1   true [0xc000186420 0xc000186540 0xc0001865b8] [0xc000186420 0xc000186540 0xc0001865b8] [0xc000186508 0xc000186578] [0xba6c50 0xba6c50] 0xc0030d6d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 24 14:32:38 - 14:35:42: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' retried every 10s (19 attempts), each returning rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found
Dec 24 14:35:52.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:35:52.809: INFO: rc: 1
Dec 24 14:35:52.810: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002c84090 exit status 1   true [0xc002ff4008 0xc002ff4020 0xc002ff4038] [0xc002ff4008 0xc002ff4020 0xc002ff4038] [0xc002ff4018 0xc002ff4030] [0xba6c50 0xba6c50] 0xc002bf2240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 24 14:36:02.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:36:03.030: INFO: rc: 1
Dec 24 14:36:03.031: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00051b800 exit status 1   true [0xc00035b258 0xc00035b340 0xc00035b3d8] [0xc00035b258 0xc00035b340 0xc00035b3d8] [0xc00035b2a0 0xc00035b3c0] [0xba6c50 0xba6c50] 0xc0032cd020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 24 14:36:13.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:36:13.275: INFO: rc: 1
Dec 24 14:36:13.276: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000702c30 exit status 1   true [0xc0007097c8 0xc0007098c8 0xc000709988] [0xc0007097c8 0xc0007098c8 0xc000709988] [0xc000709870 0xc000709978] [0xba6c50 0xba6c50] 0xc002252a80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 24 14:36:23.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:36:23.408: INFO: rc: 1
Dec 24 14:36:23.408: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000702d80 exit status 1   true [0xc0007099a0 0xc000709a00 0xc000709b58] [0xc0007099a0 0xc000709a00 0xc000709b58] [0xc0007099d0 0xc000709b38] [0xba6c50 0xba6c50] 0xc002253500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 24 14:36:33.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:36:33.652: INFO: rc: 1
Dec 24 14:36:33.653: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000702e40 exit status 1   true [0xc000709b60 0xc000709b88 0xc000709c20] [0xc000709b60 0xc000709b88 0xc000709c20] [0xc000709b80 0xc000709bd0] [0xba6c50 0xba6c50] 0xc0030f6360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 24 14:36:43.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:36:43.901: INFO: rc: 1
Dec 24 14:36:43.901: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000702f00 exit status 1   true [0xc000709c80 0xc000709cc8 0xc000709e20] [0xc000709c80 0xc000709cc8 0xc000709e20] [0xc000709cb8 0xc000709df8] [0xba6c50 0xba6c50] 0xc0030f6660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 24 14:36:53.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:36:54.032: INFO: rc: 1
Dec 24 14:36:54.032: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002f980c0 exit status 1   true [0xc002082000 0xc002082040 0xc002082080] [0xc002082000 0xc002082040 0xc002082080] [0xc002082038 0xc002082068] [0xba6c50 0xba6c50] 0xc00262c480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 24 14:37:04.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:37:04.170: INFO: rc: 1
Dec 24 14:37:04.170: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002f98180 exit status 1   true [0xc002082088 0xc0020820b8 0xc0020820e0] [0xc002082088 0xc0020820b8 0xc0020820e0] [0xc002082098 0xc0020820d8] [0xba6c50 0xba6c50] 0xc00262c960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 24 14:37:14.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:37:14.266: INFO: rc: 1
Dec 24 14:37:14.267: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00051b980 exit status 1   true [0xc00035b3f8 0xc00035b550 0xc00035b6d8] [0xc00035b3f8 0xc00035b550 0xc00035b6d8] [0xc00035b520 0xc00035b6b8] [0xba6c50 0xba6c50] 0xc0032cd320 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 24 14:37:24.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 24 14:37:24.458: INFO: rc: 1
Dec 24 14:37:24.458: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
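
The retry loop above is the e2e framework's RunHostCmd helper re-running the same kubectl exec until the pod exists or the overall timeout expires. A rough shell equivalent, with the attempt count assumed, would be:

for i in $(seq 1 20); do
  kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- \
    /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true' && break
  sleep 10   # matches the 10s retry interval in the log
done

The '|| true' inside the pod command means the exec reports success as soon as ss-0 exists, so every attempt here failed with NotFound until the framework gave up and moved on to scaling down.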
Dec 24 14:37:24.458: INFO: Scaling statefulset ss to 0
Dec 24 14:37:24.479: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 24 14:37:24.482: INFO: Deleting all statefulset in ns statefulset-7027
Dec 24 14:37:24.486: INFO: Scaling statefulset ss to 0
Dec 24 14:37:24.494: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 14:37:24.497: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:37:24.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7027" for this suite.
Dec 24 14:37:30.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:37:30.690: INFO: namespace statefulset-7027 deletion completed in 6.151132471s

• [SLOW TEST:370.724 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:37:30.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1691.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1691.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1691.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1691.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1691.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1691.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1691.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1691.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1691.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1691.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1691.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1691.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1691.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 148.51.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.51.148_udp@PTR;check="$$(dig +tcp +noall +answer +search 148.51.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.51.148_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1691.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1691.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1691.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1691.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1691.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1691.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1691.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1691.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1691.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1691.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1691.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1691.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1691.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 148.51.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.51.148_udp@PTR;check="$$(dig +tcp +noall +answer +search 148.51.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.51.148_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 24 14:37:43.073: INFO: Unable to read wheezy_udp@dns-test-service.dns-1691.svc.cluster.local from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.081: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1691.svc.cluster.local from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.087: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1691.svc.cluster.local from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.091: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1691.svc.cluster.local from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.096: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-1691.svc.cluster.local from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.108: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-1691.svc.cluster.local from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.113: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.118: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.125: INFO: Unable to read 10.97.51.148_udp@PTR from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.132: INFO: Unable to read 10.97.51.148_tcp@PTR from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.136: INFO: Unable to read jessie_udp@dns-test-service.dns-1691.svc.cluster.local from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.141: INFO: Unable to read jessie_tcp@dns-test-service.dns-1691.svc.cluster.local from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.146: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1691.svc.cluster.local from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.151: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1691.svc.cluster.local from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.160: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-1691.svc.cluster.local from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.172: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-1691.svc.cluster.local from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.176: INFO: Unable to read jessie_udp@PodARecord from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.179: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.182: INFO: Unable to read 10.97.51.148_udp@PTR from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.186: INFO: Unable to read 10.97.51.148_tcp@PTR from pod dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc: the server could not find the requested resource (get pods dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc)
Dec 24 14:37:43.186: INFO: Lookups using dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc failed for: [wheezy_udp@dns-test-service.dns-1691.svc.cluster.local wheezy_tcp@dns-test-service.dns-1691.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1691.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1691.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-1691.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-1691.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.97.51.148_udp@PTR 10.97.51.148_tcp@PTR jessie_udp@dns-test-service.dns-1691.svc.cluster.local jessie_tcp@dns-test-service.dns-1691.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1691.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1691.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-1691.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-1691.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.97.51.148_udp@PTR 10.97.51.148_tcp@PTR]

Dec 24 14:37:48.282: INFO: DNS probes using dns-1691/dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc succeeded
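
The earlier "Unable to read" entries are expected while the probe pod is still writing its result files; the probes pass once every record resolves. To spot-check a single record by hand from the probe pod (container name assumed; the probe pod runs several querier containers), something like:

kubectl --kubeconfig=/root/.kube/config exec -n dns-1691 \
  dns-test-8fad430d-aad3-41df-9689-a71b95c5f7bc -c jessie-querier -- \
  dig +notcp +noall +answer +search dns-test-service.dns-1691.svc.cluster.local A

A non-empty answer section is exactly the test -n "$check" condition the probe scripts above are built on.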

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:37:48.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1691" for this suite.
Dec 24 14:37:54.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:37:54.838: INFO: namespace dns-1691 deletion completed in 6.129367074s

• [SLOW TEST:24.147 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:37:54.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8718
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 24 14:37:55.008: INFO: Found 0 stateful pods, waiting for 3
Dec 24 14:38:05.018: INFO: Found 2 stateful pods, waiting for 3
Dec 24 14:38:15.017: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 14:38:15.017: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 14:38:15.017: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 24 14:38:25.024: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 14:38:25.024: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 14:38:25.024: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 24 14:38:25.060: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 24 14:38:35.143: INFO: Updating stateful set ss2
Dec 24 14:38:35.189: INFO: Waiting for Pod statefulset-8718/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 14:38:45.223: INFO: Waiting for Pod statefulset-8718/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 24 14:38:55.404: INFO: Found 2 stateful pods, waiting for 3
Dec 24 14:39:05.415: INFO: Found 2 stateful pods, waiting for 3
Dec 24 14:39:15.416: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 14:39:15.416: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 24 14:39:15.416: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 24 14:39:15.447: INFO: Updating stateful set ss2
Dec 24 14:39:15.564: INFO: Waiting for Pod statefulset-8718/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 14:39:25.770: INFO: Updating stateful set ss2
Dec 24 14:39:25.829: INFO: Waiting for StatefulSet statefulset-8718/ss2 to complete update
Dec 24 14:39:25.829: INFO: Waiting for Pod statefulset-8718/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 24 14:39:35.841: INFO: Waiting for StatefulSet statefulset-8718/ss2 to complete update
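
The canary and phased behaviour verified here hinges on spec.updateStrategy.rollingUpdate.partition: only pods with an ordinal greater than or equal to the partition are moved to the new revision. A sketch of staging the same roll-out by hand, with partition values chosen for this three-replica set:

# Canary: hold ss2-0 and ss2-1 at the old revision, update only ss2-2
kubectl --kubeconfig=/root/.kube/config -n statefulset-8718 patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
# Phased roll-out: lower the partition stepwise to 0 to update the remaining pods
kubectl --kubeconfig=/root/.kube/config -n statefulset-8718 patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'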
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 24 14:39:45.850: INFO: Deleting all statefulset in ns statefulset-8718
Dec 24 14:39:45.857: INFO: Scaling statefulset ss2 to 0
Dec 24 14:40:25.898: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 14:40:25.902: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:40:25.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8718" for this suite.
Dec 24 14:40:33.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:40:34.152: INFO: namespace statefulset-8718 deletion completed in 8.189094728s

• [SLOW TEST:159.313 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:40:34.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 24 14:40:34.236: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 24 14:40:34.243: INFO: Waiting for terminating namespaces to be deleted...
Dec 24 14:40:34.245: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 24 14:40:34.266: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 24 14:40:34.266: INFO: 	Container weave ready: true, restart count 0
Dec 24 14:40:34.266: INFO: 	Container weave-npc ready: true, restart count 0
Dec 24 14:40:34.266: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 24 14:40:34.266: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 24 14:40:34.266: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 24 14:40:34.280: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 24 14:40:34.280: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 24 14:40:34.280: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 24 14:40:34.280: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 24 14:40:34.280: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 24 14:40:34.280: INFO: 	Container coredns ready: true, restart count 0
Dec 24 14:40:34.280: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 24 14:40:34.280: INFO: 	Container etcd ready: true, restart count 0
Dec 24 14:40:34.280: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 24 14:40:34.280: INFO: 	Container weave ready: true, restart count 0
Dec 24 14:40:34.280: INFO: 	Container weave-npc ready: true, restart count 0
Dec 24 14:40:34.280: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 24 14:40:34.280: INFO: 	Container coredns ready: true, restart count 0
Dec 24 14:40:34.280: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 24 14:40:34.280: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 24 14:40:34.280: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 24 14:40:34.280: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Dec 24 14:40:34.466: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 24 14:40:34.466: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 24 14:40:34.466: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 24 14:40:34.466: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Dec 24 14:40:34.466: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Dec 24 14:40:34.466: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 24 14:40:34.466: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Dec 24 14:40:34.466: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 24 14:40:34.466: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Dec 24 14:40:34.466: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
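
The filler pods created next are sized from each node's allocatable CPU minus the requests tallied above, so that one more pod cannot fit anywhere. Allocatable CPU can be read directly:

kubectl --kubeconfig=/root/.kube/config get node iruya-node \
  -o jsonpath='{.status.allocatable.cpu}'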
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-839d9d69-f597-4119-92ce-44880b44d836.15e3559eadd9355f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1952/filler-pod-839d9d69-f597-4119-92ce-44880b44d836 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-839d9d69-f597-4119-92ce-44880b44d836.15e3559fc9779e2e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-839d9d69-f597-4119-92ce-44880b44d836.15e355a0bc9d6ac8], Reason = [Created], Message = [Created container filler-pod-839d9d69-f597-4119-92ce-44880b44d836]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-839d9d69-f597-4119-92ce-44880b44d836.15e355a0e2314655], Reason = [Started], Message = [Started container filler-pod-839d9d69-f597-4119-92ce-44880b44d836]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e3e0a994-7762-4953-8a52-1f595bcddf2e.15e3559eaccef49e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1952/filler-pod-e3e0a994-7762-4953-8a52-1f595bcddf2e to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e3e0a994-7762-4953-8a52-1f595bcddf2e.15e3559fc353bb20], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e3e0a994-7762-4953-8a52-1f595bcddf2e.15e355a084698d2d], Reason = [Created], Message = [Created container filler-pod-e3e0a994-7762-4953-8a52-1f595bcddf2e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e3e0a994-7762-4953-8a52-1f595bcddf2e.15e355a0a76d6cb3], Reason = [Started], Message = [Started container filler-pod-e3e0a994-7762-4953-8a52-1f595bcddf2e]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e355a1052f8226], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
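
That Warning is the scheduler's verdict when no node can satisfy a pod's CPU request. A minimal pod that would reproduce it on this saturated two-node cluster (the request value is an assumption; anything above the remaining allocatable CPU works):

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config -n sched-pred-1952 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod   # name taken from the event above
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "4"         # assumed: larger than any node's free CPU
EOF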
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:40:45.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1952" for this suite.
Dec 24 14:40:52.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:40:53.894: INFO: namespace sched-pred-1952 deletion completed in 8.164510698s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:19.741 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:40:53.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 24 14:40:54.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-874'
Dec 24 14:40:54.619: INFO: stderr: ""
Dec 24 14:40:54.619: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
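
The manifest piped to 'kubectl create -f -' is not shown in the log; a ReplicationController consistent with the name, selector, replica count, container name, and image it reports would look roughly like this (the container port is an assumption):

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-874
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80   # assumed; not visible in the log
EOF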
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 24 14:40:54.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-874'
Dec 24 14:40:55.674: INFO: stderr: ""
Dec 24 14:40:55.674: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Dec 24 14:41:00.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-874'
Dec 24 14:41:00.893: INFO: stderr: ""
Dec 24 14:41:00.893: INFO: stdout: "update-demo-nautilus-4zcp9 update-demo-nautilus-rt8gl "
Dec 24 14:41:00.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4zcp9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-874'
Dec 24 14:41:01.094: INFO: stderr: ""
Dec 24 14:41:01.094: INFO: stdout: ""
Dec 24 14:41:01.094: INFO: update-demo-nautilus-4zcp9 is created but not running
Dec 24 14:41:06.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-874'
Dec 24 14:41:06.222: INFO: stderr: ""
Dec 24 14:41:06.222: INFO: stdout: "update-demo-nautilus-4zcp9 update-demo-nautilus-rt8gl "
Dec 24 14:41:06.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4zcp9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-874'
Dec 24 14:41:06.383: INFO: stderr: ""
Dec 24 14:41:06.384: INFO: stdout: "true"
Dec 24 14:41:06.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4zcp9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-874'
Dec 24 14:41:06.547: INFO: stderr: ""
Dec 24 14:41:06.547: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 14:41:06.547: INFO: validating pod update-demo-nautilus-4zcp9
Dec 24 14:41:06.570: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 14:41:06.571: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 24 14:41:06.571: INFO: update-demo-nautilus-4zcp9 is verified up and running
Dec 24 14:41:06.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rt8gl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-874'
Dec 24 14:41:06.749: INFO: stderr: ""
Dec 24 14:41:06.749: INFO: stdout: "true"
Dec 24 14:41:06.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rt8gl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-874'
Dec 24 14:41:07.024: INFO: stderr: ""
Dec 24 14:41:07.024: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 24 14:41:07.024: INFO: validating pod update-demo-nautilus-rt8gl
Dec 24 14:41:07.054: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 24 14:41:07.054: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 24 14:41:07.054: INFO: update-demo-nautilus-rt8gl is verified up and running
STEP: using delete to clean up resources
Dec 24 14:41:07.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-874'
Dec 24 14:41:07.209: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 14:41:07.209: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 24 14:41:07.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-874'
Dec 24 14:41:07.323: INFO: stderr: "No resources found.\n"
Dec 24 14:41:07.323: INFO: stdout: ""
Dec 24 14:41:07.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-874 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 24 14:41:07.411: INFO: stderr: ""
Dec 24 14:41:07.411: INFO: stdout: "update-demo-nautilus-4zcp9\nupdate-demo-nautilus-rt8gl\n"
Dec 24 14:41:07.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-874'
Dec 24 14:41:08.106: INFO: stderr: "No resources found.\n"
Dec 24 14:41:08.106: INFO: stdout: ""
Dec 24 14:41:08.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-874 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 24 14:41:08.384: INFO: stderr: ""
Dec 24 14:41:08.385: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:41:08.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-874" for this suite.
Dec 24 14:41:31.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:41:31.239: INFO: namespace kubectl-874 deletion completed in 22.839461563s

• [SLOW TEST:37.345 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:41:31.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-c3bf70c2-67f1-4ac8-933f-b61f42b34cbb
STEP: Creating a pod to test consume secrets
Dec 24 14:41:31.324: INFO: Waiting up to 5m0s for pod "pod-secrets-ef1547ef-ab04-46fc-a673-55fffb5afb53" in namespace "secrets-501" to be "success or failure"
Dec 24 14:41:31.372: INFO: Pod "pod-secrets-ef1547ef-ab04-46fc-a673-55fffb5afb53": Phase="Pending", Reason="", readiness=false. Elapsed: 47.648212ms
Dec 24 14:41:33.379: INFO: Pod "pod-secrets-ef1547ef-ab04-46fc-a673-55fffb5afb53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054821005s
Dec 24 14:41:35.387: INFO: Pod "pod-secrets-ef1547ef-ab04-46fc-a673-55fffb5afb53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062444602s
Dec 24 14:41:37.397: INFO: Pod "pod-secrets-ef1547ef-ab04-46fc-a673-55fffb5afb53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072894651s
Dec 24 14:41:39.412: INFO: Pod "pod-secrets-ef1547ef-ab04-46fc-a673-55fffb5afb53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087257734s
STEP: Saw pod success
Dec 24 14:41:39.412: INFO: Pod "pod-secrets-ef1547ef-ab04-46fc-a673-55fffb5afb53" satisfied condition "success or failure"
Dec 24 14:41:39.415: INFO: Trying to get logs from node iruya-node pod pod-secrets-ef1547ef-ab04-46fc-a673-55fffb5afb53 container secret-volume-test: 
STEP: delete the pod
Dec 24 14:41:39.520: INFO: Waiting for pod pod-secrets-ef1547ef-ab04-46fc-a673-55fffb5afb53 to disappear
Dec 24 14:41:39.527: INFO: Pod pod-secrets-ef1547ef-ab04-46fc-a673-55fffb5afb53 no longer exists
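
The "with mappings" variant mounts the secret through an items list that remaps a data key onto a custom file path inside the volume. A sketch of such a pod (the secret and container names come from the log; the key, path, image, and pod name are assumptions):

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config -n secrets-501 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-c3bf70c2-67f1-4ac8-933f-b61f42b34cbb
      items:
      - key: data-1                  # assumed key
        path: new-path-data-1        # the remapping under test
EOF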
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:41:39.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-501" for this suite.
Dec 24 14:41:45.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:41:45.735: INFO: namespace secrets-501 deletion completed in 6.194733704s

• [SLOW TEST:14.495 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:41:45.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 14:41:45.871: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab419978-4c4c-4e6d-8e4c-d0a2aa68b2e3" in namespace "downward-api-5782" to be "success or failure"
Dec 24 14:41:45.886: INFO: Pod "downwardapi-volume-ab419978-4c4c-4e6d-8e4c-d0a2aa68b2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 15.030366ms
Dec 24 14:41:47.911: INFO: Pod "downwardapi-volume-ab419978-4c4c-4e6d-8e4c-d0a2aa68b2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03993782s
Dec 24 14:41:49.924: INFO: Pod "downwardapi-volume-ab419978-4c4c-4e6d-8e4c-d0a2aa68b2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053110446s
Dec 24 14:41:51.994: INFO: Pod "downwardapi-volume-ab419978-4c4c-4e6d-8e4c-d0a2aa68b2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122743407s
Dec 24 14:41:54.035: INFO: Pod "downwardapi-volume-ab419978-4c4c-4e6d-8e4c-d0a2aa68b2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163382155s
Dec 24 14:41:56.043: INFO: Pod "downwardapi-volume-ab419978-4c4c-4e6d-8e4c-d0a2aa68b2e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.172230698s
STEP: Saw pod success
Dec 24 14:41:56.044: INFO: Pod "downwardapi-volume-ab419978-4c4c-4e6d-8e4c-d0a2aa68b2e3" satisfied condition "success or failure"
Dec 24 14:41:56.047: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ab419978-4c4c-4e6d-8e4c-d0a2aa68b2e3 container client-container: 
STEP: delete the pod
Dec 24 14:41:56.120: INFO: Waiting for pod downwardapi-volume-ab419978-4c4c-4e6d-8e4c-d0a2aa68b2e3 to disappear
Dec 24 14:41:56.131: INFO: Pod downwardapi-volume-ab419978-4c4c-4e6d-8e4c-d0a2aa68b2e3 no longer exists
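
When a container declares no CPU limit, a downwardAPI volume item referencing limits.cpu reports the node's allocatable CPU instead, which is the behaviour asserted here. A sketch of the relevant pod shape (container name from the log; file path, image, and pod name assumed):

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config -n downward-api-5782 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu       # no limit set, so node allocatable is reported
EOF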
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:41:56.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5782" for this suite.
Dec 24 14:42:02.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:42:02.309: INFO: namespace downward-api-5782 deletion completed in 6.167271486s

• [SLOW TEST:16.574 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:42:02.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-74ba4729-6ef2-475a-9ae6-2c5a611838cc
STEP: Creating secret with name s-test-opt-upd-107b884c-060c-4cbf-b9dd-1d590a266671
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-74ba4729-6ef2-475a-9ae6-2c5a611838cc
STEP: Updating secret s-test-opt-upd-107b884c-060c-4cbf-b9dd-1d590a266671
STEP: Creating secret with name s-test-opt-create-008164c6-eb0b-453d-ba8a-aaab02364177
STEP: waiting to observe update in volume
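
Marking each projected source optional lets the pod start and keep running even when a referenced secret is absent, and the kubelet reflects later creates, updates, and deletes into the mounted files, which is what the wait above observes. The projected volume under test looks roughly like this (secret names from the log; everything else assumed):

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config -n projected-7781 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
spec:
  containers:
  - name: projected-secret-volume-test  # assumed container name
    image: busybox                      # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del-74ba4729-6ef2-475a-9ae6-2c5a611838cc
          optional: true   # deleted while the pod runs
      - secret:
          name: s-test-opt-upd-107b884c-060c-4cbf-b9dd-1d590a266671
          optional: true   # updated while the pod runs
      - secret:
          name: s-test-opt-create-008164c6-eb0b-453d-ba8a-aaab02364177
          optional: true   # created only after the pod starts
EOF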
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:42:18.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7781" for this suite.
Dec 24 14:42:42.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:42:43.023: INFO: namespace projected-7781 deletion completed in 24.244996119s

• [SLOW TEST:40.713 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:42:43.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5162, will wait for the garbage collector to delete the pods
Dec 24 14:42:53.197: INFO: Deleting Job.batch foo took: 12.342687ms
Dec 24 14:42:53.497: INFO: Terminating Job.batch foo pods took: 300.441088ms
STEP: Ensuring job was deleted
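
Deleting the Job removes only the Job object up front and leaves its pods to the garbage collector, which is why the test then polls for the pods to disappear. A manual equivalent (the job-name label is the convention the Job controller applies to its pods):

kubectl --kubeconfig=/root/.kube/config -n job-5162 delete job foo
kubectl --kubeconfig=/root/.kube/config -n job-5162 wait --for=delete pod \
  -l job-name=foo --timeout=120s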
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:43:36.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5162" for this suite.
Dec 24 14:43:42.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:43:42.920: INFO: namespace job-5162 deletion completed in 6.198352193s

• [SLOW TEST:59.898 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
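The Job test first checks that the controller keeps .status.active equal to .spec.parallelism, then deletes the Job and waits for the garbage collector to remove its pods. A sketch of both pieces; the parallelism value and image are assumptions, since the log does not show them:

package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// A parallel Job like the e2e "foo" job: while pods run, the job
	// controller keeps .status.active == .spec.parallelism.
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "foo"},
		Spec: batchv1.JobSpec{
			Parallelism: int32Ptr(2), // assumed value
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "c",
						Image:   "busybox", // assumed image
						Command: []string{"sleep", "3600"},
					}},
				},
			},
		},
	}

	// Background propagation returns from the delete immediately and leaves
	// the garbage collector to remove the job's pods, which is what the
	// "will wait for the garbage collector to delete the pods" step observes.
	policy := metav1.DeletePropagationBackground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	fmt.Println(job.Name, *opts.PropagationPolicy)
}
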
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:43:42.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 24 14:43:43.057: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:43:57.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5037" for this suite.
Dec 24 14:44:03.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:44:04.027: INFO: namespace init-container-5037 deletion completed in 6.196369383s

• [SLOW TEST:21.106 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
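With RestartPolicy=Never, the kubelet runs each entry of spec.initContainers to completion, in order, before starting the app containers; if an init container fails, the whole pod fails. A minimal pod shape for this test; names and images are placeholders:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Init containers run sequentially and must each exit 0 before the
	// "run" container is started; with RestartPolicy=Never a failure is
	// terminal for the pod.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run", Image: "busybox", Command: []string{"true"}},
			},
		},
	}
	fmt.Println(len(pod.Spec.InitContainers), "init containers")
}
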
SSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:44:04.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 24 14:44:12.819: INFO: Successfully updated pod "annotationupdate202b91f7-2cbd-4a28-804d-eca1fc7ed73d"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:44:14.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8657" for this suite.
Dec 24 14:44:36.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:44:37.092: INFO: namespace downward-api-8657 deletion completed in 22.133916056s

• [SLOW TEST:33.065 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
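The annotation update is visible inside the container because downward API volume files, unlike downward API environment variables, are re-projected by the kubelet on its sync period. A sketch of the volume item this test exercises:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A downward API volume file bound to metadata.annotations: updating
	// the live pod's annotations changes the contents of the mounted file.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "annotations",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
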
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:44:37.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7159/configmap-test-1a90cb72-5638-40eb-9933-7933328053f2
STEP: Creating a pod to test consume configMaps
Dec 24 14:44:37.186: INFO: Waiting up to 5m0s for pod "pod-configmaps-e4c99e46-31d1-44c0-8853-aedde27bc314" in namespace "configmap-7159" to be "success or failure"
Dec 24 14:44:37.190: INFO: Pod "pod-configmaps-e4c99e46-31d1-44c0-8853-aedde27bc314": Phase="Pending", Reason="", readiness=false. Elapsed: 3.548655ms
Dec 24 14:44:39.198: INFO: Pod "pod-configmaps-e4c99e46-31d1-44c0-8853-aedde27bc314": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011310551s
Dec 24 14:44:41.205: INFO: Pod "pod-configmaps-e4c99e46-31d1-44c0-8853-aedde27bc314": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018531732s
Dec 24 14:44:43.219: INFO: Pod "pod-configmaps-e4c99e46-31d1-44c0-8853-aedde27bc314": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032634242s
Dec 24 14:44:45.230: INFO: Pod "pod-configmaps-e4c99e46-31d1-44c0-8853-aedde27bc314": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043177329s
STEP: Saw pod success
Dec 24 14:44:45.230: INFO: Pod "pod-configmaps-e4c99e46-31d1-44c0-8853-aedde27bc314" satisfied condition "success or failure"
Dec 24 14:44:45.234: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e4c99e46-31d1-44c0-8853-aedde27bc314 container env-test: 
STEP: delete the pod
Dec 24 14:44:45.317: INFO: Waiting for pod pod-configmaps-e4c99e46-31d1-44c0-8853-aedde27bc314 to disappear
Dec 24 14:44:45.328: INFO: Pod pod-configmaps-e4c99e46-31d1-44c0-8853-aedde27bc314 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:44:45.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7159" for this suite.
Dec 24 14:44:51.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:44:51.930: INFO: namespace configmap-7159 deletion completed in 6.591405567s

• [SLOW TEST:14.837 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
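"Consumable via the environment" means the container's env list resolves individual ConfigMap keys at pod start. A sketch of one such variable; the key and variable names are hypothetical:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Resolve a single ConfigMap key into an environment variable. Env vars
	// are captured once at container start, so later ConfigMap edits are
	// not reflected (contrast with the volume-based tests in this suite).
	env := corev1.EnvVar{
		Name: "CONFIG_DATA_1",
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
				Key:                  "data-1",
			},
		},
	}
	fmt.Printf("%+v\n", env)
}
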
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:44:51.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-1a6280ce-b904-4a92-bcce-fb28b7904f3e
STEP: Creating a pod to test consume secrets
Dec 24 14:44:52.094: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c0a943f4-8304-4534-8ee6-780a45392efa" in namespace "projected-5075" to be "success or failure"
Dec 24 14:44:52.102: INFO: Pod "pod-projected-secrets-c0a943f4-8304-4534-8ee6-780a45392efa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.950038ms
Dec 24 14:44:54.111: INFO: Pod "pod-projected-secrets-c0a943f4-8304-4534-8ee6-780a45392efa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016333787s
Dec 24 14:44:56.123: INFO: Pod "pod-projected-secrets-c0a943f4-8304-4534-8ee6-780a45392efa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028455571s
Dec 24 14:44:58.138: INFO: Pod "pod-projected-secrets-c0a943f4-8304-4534-8ee6-780a45392efa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043317496s
Dec 24 14:45:00.154: INFO: Pod "pod-projected-secrets-c0a943f4-8304-4534-8ee6-780a45392efa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059253328s
STEP: Saw pod success
Dec 24 14:45:00.154: INFO: Pod "pod-projected-secrets-c0a943f4-8304-4534-8ee6-780a45392efa" satisfied condition "success or failure"
Dec 24 14:45:00.157: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c0a943f4-8304-4534-8ee6-780a45392efa container projected-secret-volume-test: 
STEP: delete the pod
Dec 24 14:45:00.289: INFO: Waiting for pod pod-projected-secrets-c0a943f4-8304-4534-8ee6-780a45392efa to disappear
Dec 24 14:45:00.297: INFO: Pod pod-projected-secrets-c0a943f4-8304-4534-8ee6-780a45392efa no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:45:00.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5075" for this suite.
Dec 24 14:45:06.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:45:06.608: INFO: namespace projected-5075 deletion completed in 6.260627237s

• [SLOW TEST:14.678 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:45:06.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Dec 24 14:45:06.768: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:45:06.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-319" for this suite.
Dec 24 14:45:12.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:45:13.093: INFO: namespace kubectl-319 deletion completed in 6.16140472s

• [SLOW TEST:6.483 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
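With -p 0 the proxy binds an ephemeral port and reports it on stdout, which is why the test runs the command asynchronously and parses its output before curling /api/. A rough equivalent in Go, assuming kubectl is on PATH and prints its usual "Starting to serve on 127.0.0.1:<port>" banner:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Start the proxy on a random port and read the bound address from the
	// first line of stdout.
	cmd := exec.Command("kubectl", "proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	line, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		panic(err)
	}
	addr := strings.TrimSpace(line[strings.LastIndex(line, " ")+1:])
	fmt.Printf("proxy listening on %s\n", addr) // e.g. 127.0.0.1:<port>
	_ = cmd.Process.Kill()
}
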
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:45:13.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 24 14:45:21.761: INFO: Successfully updated pod "pod-update-b72e6034-7cee-49d7-ad0f-e897a3ac7a40"
STEP: verifying the updated pod is in kubernetes
Dec 24 14:45:21.772: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:45:21.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6279" for this suite.
Dec 24 14:45:43.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:45:43.982: INFO: namespace pods-6279 deletion completed in 22.205508223s

• [SLOW TEST:30.889 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
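The "updating the pod" step is a get-modify-update round trip against the API server. A sketch using the context-free client-go signatures of the pre-1.17 era (matching the v1.15 cluster in this run); namespace and pod name are hypothetical:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods := cs.CoreV1().Pods("default")
	pod, err := pods.Get("pod-update-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated" // mutate, then write back
	if _, err := pods.Update(pod); err != nil {
		panic(err) // a conflict here means: re-get and retry
	}
	fmt.Println("pod updated")
}
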
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:45:43.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-03a2f49d-0044-4a1f-a97d-215703a44472
STEP: Creating a pod to test consume configMaps
Dec 24 14:45:44.074: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf10ca23-206b-4b01-999d-62fc0871628c" in namespace "configmap-6677" to be "success or failure"
Dec 24 14:45:44.078: INFO: Pod "pod-configmaps-bf10ca23-206b-4b01-999d-62fc0871628c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.381335ms
Dec 24 14:45:46.086: INFO: Pod "pod-configmaps-bf10ca23-206b-4b01-999d-62fc0871628c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011822553s
Dec 24 14:45:48.096: INFO: Pod "pod-configmaps-bf10ca23-206b-4b01-999d-62fc0871628c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021875195s
Dec 24 14:45:50.106: INFO: Pod "pod-configmaps-bf10ca23-206b-4b01-999d-62fc0871628c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032050017s
Dec 24 14:45:52.123: INFO: Pod "pod-configmaps-bf10ca23-206b-4b01-999d-62fc0871628c": Phase="Running", Reason="", readiness=true. Elapsed: 8.048716228s
Dec 24 14:45:54.133: INFO: Pod "pod-configmaps-bf10ca23-206b-4b01-999d-62fc0871628c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058760519s
STEP: Saw pod success
Dec 24 14:45:54.133: INFO: Pod "pod-configmaps-bf10ca23-206b-4b01-999d-62fc0871628c" satisfied condition "success or failure"
Dec 24 14:45:54.136: INFO: Trying to get logs from node iruya-node pod pod-configmaps-bf10ca23-206b-4b01-999d-62fc0871628c container configmap-volume-test: 
STEP: delete the pod
Dec 24 14:45:54.295: INFO: Waiting for pod pod-configmaps-bf10ca23-206b-4b01-999d-62fc0871628c to disappear
Dec 24 14:45:54.325: INFO: Pod pod-configmaps-bf10ca23-206b-4b01-999d-62fc0871628c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:45:54.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6677" for this suite.
Dec 24 14:46:00.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:46:00.596: INFO: namespace configmap-6677 deletion completed in 6.251484169s

• [SLOW TEST:16.613 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
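DefaultMode sets the permission bits on every file the kubelet projects from the ConfigMap into the volume. A sketch; the mode below is an assumption, since the log does not show which value the test chose:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// DefaultMode is an octal file mode applied to all projected files.
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
				DefaultMode:          int32Ptr(0400), // assumed value
			},
		},
	}
	fmt.Printf("%o\n", *vol.VolumeSource.ConfigMap.DefaultMode)
}
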
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:46:00.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 14:46:00.675: INFO: Waiting up to 5m0s for pod "downwardapi-volume-62d7035e-486e-47fc-b2fa-3f8aa17b5707" in namespace "downward-api-3847" to be "success or failure"
Dec 24 14:46:00.773: INFO: Pod "downwardapi-volume-62d7035e-486e-47fc-b2fa-3f8aa17b5707": Phase="Pending", Reason="", readiness=false. Elapsed: 97.998833ms
Dec 24 14:46:02.779: INFO: Pod "downwardapi-volume-62d7035e-486e-47fc-b2fa-3f8aa17b5707": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104177379s
Dec 24 14:46:04.789: INFO: Pod "downwardapi-volume-62d7035e-486e-47fc-b2fa-3f8aa17b5707": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114333074s
Dec 24 14:46:06.815: INFO: Pod "downwardapi-volume-62d7035e-486e-47fc-b2fa-3f8aa17b5707": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140055985s
Dec 24 14:46:08.828: INFO: Pod "downwardapi-volume-62d7035e-486e-47fc-b2fa-3f8aa17b5707": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.152938291s
STEP: Saw pod success
Dec 24 14:46:08.828: INFO: Pod "downwardapi-volume-62d7035e-486e-47fc-b2fa-3f8aa17b5707" satisfied condition "success or failure"
Dec 24 14:46:08.833: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-62d7035e-486e-47fc-b2fa-3f8aa17b5707 container client-container: 
STEP: delete the pod
Dec 24 14:46:08.908: INFO: Waiting for pod downwardapi-volume-62d7035e-486e-47fc-b2fa-3f8aa17b5707 to disappear
Dec 24 14:46:08.913: INFO: Pod downwardapi-volume-62d7035e-486e-47fc-b2fa-3f8aa17b5707 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:46:08.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3847" for this suite.
Dec 24 14:46:14.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:46:15.063: INFO: namespace downward-api-3847 deletion completed in 6.14564346s

• [SLOW TEST:14.466 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
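The memory limit reaches the container through a resourceFieldRef item in the downward API volume; the test then reads the projected file back via the client-container logs. Sketch of the item involved:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Project limits.memory of a named container into a file. The value is
	// written in base units (bytes) unless a Divisor is set on the selector.
	item := corev1.DownwardAPIVolumeFile{
		Path: "memory_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "limits.memory",
		},
	}
	fmt.Printf("%+v\n", item)
}
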
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:46:15.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Dec 24 14:46:15.289: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9787" to be "success or failure"
Dec 24 14:46:15.307: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.147427ms
Dec 24 14:46:17.320: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030032408s
Dec 24 14:46:19.328: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038154617s
Dec 24 14:46:21.335: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045120294s
Dec 24 14:46:23.345: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05496668s
Dec 24 14:46:25.358: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068509977s
STEP: Saw pod success
Dec 24 14:46:25.358: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 24 14:46:25.406: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 24 14:46:25.487: INFO: Waiting for pod pod-host-path-test to disappear
Dec 24 14:46:25.493: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:46:25.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9787" for this suite.
Dec 24 14:46:31.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:46:31.744: INFO: namespace hostpath-9787 deletion completed in 6.245879886s

• [SLOW TEST:16.680 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
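"Correct mode" here refers to the permission bits on the hostPath mount that test-container-1 stats and prints. The volume shape being exercised, with the host path as an assumption:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A hostPath volume: the kubelet bind-mounts a directory from the node
	// into the container; the test then checks the mode bits it ends up with.
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/tmp"}, // assumed path
		},
	}
	fmt.Printf("%+v\n", vol)
}
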
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:46:31.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Dec 24 14:46:31.918: INFO: Waiting up to 5m0s for pod "client-containers-ea817fed-9db5-4b1a-9913-b7b4ba80affa" in namespace "containers-1323" to be "success or failure"
Dec 24 14:46:31.927: INFO: Pod "client-containers-ea817fed-9db5-4b1a-9913-b7b4ba80affa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.501772ms
Dec 24 14:46:33.947: INFO: Pod "client-containers-ea817fed-9db5-4b1a-9913-b7b4ba80affa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028909048s
Dec 24 14:46:35.958: INFO: Pod "client-containers-ea817fed-9db5-4b1a-9913-b7b4ba80affa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040147758s
Dec 24 14:46:37.966: INFO: Pod "client-containers-ea817fed-9db5-4b1a-9913-b7b4ba80affa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047609354s
Dec 24 14:46:39.979: INFO: Pod "client-containers-ea817fed-9db5-4b1a-9913-b7b4ba80affa": Phase="Running", Reason="", readiness=true. Elapsed: 8.060800944s
Dec 24 14:46:41.986: INFO: Pod "client-containers-ea817fed-9db5-4b1a-9913-b7b4ba80affa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067784979s
STEP: Saw pod success
Dec 24 14:46:41.986: INFO: Pod "client-containers-ea817fed-9db5-4b1a-9913-b7b4ba80affa" satisfied condition "success or failure"
Dec 24 14:46:41.989: INFO: Trying to get logs from node iruya-node pod client-containers-ea817fed-9db5-4b1a-9913-b7b4ba80affa container test-container: 
STEP: delete the pod
Dec 24 14:46:42.040: INFO: Waiting for pod client-containers-ea817fed-9db5-4b1a-9913-b7b4ba80affa to disappear
Dec 24 14:46:42.049: INFO: Pod client-containers-ea817fed-9db5-4b1a-9913-b7b4ba80affa no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:46:42.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1323" for this suite.
Dec 24 14:46:48.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:46:48.193: INFO: namespace containers-1323 deletion completed in 6.136519092s

• [SLOW TEST:16.449 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
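Overriding the image's default arguments maps onto the container's Args field: Args replaces the image's CMD, while leaving Command unset keeps the image's ENTRYPOINT. Sketch; image and argument values are placeholders:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Args replaces the image CMD; the unset Command keeps the image
	// ENTRYPOINT, which then receives these arguments.
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox",
		Args:  []string{"override", "arguments"},
	}
	fmt.Println(c.Args)
}
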
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:46:48.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 14:46:48.293: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 24 14:46:53.313: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 24 14:46:57.364: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 24 14:46:57.436: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8441,SelfLink:/apis/apps/v1/namespaces/deployment-8441/deployments/test-cleanup-deployment,UID:b3a882ff-0b30-45bc-acc0-a8ecc161bf7c,ResourceVersion:17904718,Generation:1,CreationTimestamp:2019-12-24 14:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 24 14:46:57.445: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8441,SelfLink:/apis/apps/v1/namespaces/deployment-8441/replicasets/test-cleanup-deployment-55bbcbc84c,UID:53ef66cc-1f13-4ae4-bdbe-0231d4fc203d,ResourceVersion:17904720,Generation:1,CreationTimestamp:2019-12-24 14:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b3a882ff-0b30-45bc-acc0-a8ecc161bf7c 0xc00330b327 0xc00330b328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 24 14:46:57.445: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Dec 24 14:46:57.445: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-8441,SelfLink:/apis/apps/v1/namespaces/deployment-8441/replicasets/test-cleanup-controller,UID:47c8360a-0e22-4bbf-bd72-58802af93ad6,ResourceVersion:17904719,Generation:1,CreationTimestamp:2019-12-24 14:46:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b3a882ff-0b30-45bc-acc0-a8ecc161bf7c 0xc00330b23f 0xc00330b250}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 24 14:46:57.477: INFO: Pod "test-cleanup-controller-8zqhn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-8zqhn,GenerateName:test-cleanup-controller-,Namespace:deployment-8441,SelfLink:/api/v1/namespaces/deployment-8441/pods/test-cleanup-controller-8zqhn,UID:56d9b6d5-b5f0-4bed-aca4-7262f64207b0,ResourceVersion:17904715,Generation:0,CreationTimestamp:2019-12-24 14:46:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 47c8360a-0e22-4bbf-bd72-58802af93ad6 0xc0032ddbe7 0xc0032ddbe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4zmgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4zmgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4zmgc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0032ddc60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0032ddc80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:46:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:46:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:46:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:46:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-24 14:46:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-24 14:46:55 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2955534fad02ad4eb4abe02dd68950f4719cc4c236db6ca1d3f672d3c3c63b34}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 24 14:46:57.478: INFO: Pod "test-cleanup-deployment-55bbcbc84c-9z26b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-9z26b,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8441,SelfLink:/api/v1/namespaces/deployment-8441/pods/test-cleanup-deployment-55bbcbc84c-9z26b,UID:63358984-e878-4026-9678-9643495923d4,ResourceVersion:17904727,Generation:0,CreationTimestamp:2019-12-24 14:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 53ef66cc-1f13-4ae4-bdbe-0231d4fc203d 0xc0032ddd67 0xc0032ddd68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4zmgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4zmgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-4zmgc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0032dde00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0032dde20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:46:57 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:46:57.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8441" for this suite.
Dec 24 14:47:03.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:47:03.675: INFO: namespace deployment-8441 deletion completed in 6.116406338s

• [SLOW TEST:15.481 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
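The cleanup behavior asserted here is driven by RevisionHistoryLimit, visible as RevisionHistoryLimit:*0 in the Deployment dump above: with zero retained revisions, the deployment controller deletes an old ReplicaSet as soon as it is fully scaled down. A minimal spec fragment:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// RevisionHistoryLimit=0: no old ReplicaSets are kept around for
	// rollback; they are garbage-collected once scaled to zero.
	spec := appsv1.DeploymentSpec{
		Replicas:             int32Ptr(1),
		RevisionHistoryLimit: int32Ptr(0),
	}
	fmt.Println(*spec.RevisionHistoryLimit)
}
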
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:47:03.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 24 14:47:03.858: INFO: Waiting up to 5m0s for pod "pod-9490b1d1-120e-4aa8-903e-953370c04824" in namespace "emptydir-6010" to be "success or failure"
Dec 24 14:47:03.905: INFO: Pod "pod-9490b1d1-120e-4aa8-903e-953370c04824": Phase="Pending", Reason="", readiness=false. Elapsed: 46.662588ms
Dec 24 14:47:05.923: INFO: Pod "pod-9490b1d1-120e-4aa8-903e-953370c04824": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064120159s
Dec 24 14:47:07.932: INFO: Pod "pod-9490b1d1-120e-4aa8-903e-953370c04824": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073580507s
Dec 24 14:47:09.943: INFO: Pod "pod-9490b1d1-120e-4aa8-903e-953370c04824": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08412316s
Dec 24 14:47:11.953: INFO: Pod "pod-9490b1d1-120e-4aa8-903e-953370c04824": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094027073s
Dec 24 14:47:13.967: INFO: Pod "pod-9490b1d1-120e-4aa8-903e-953370c04824": Phase="Pending", Reason="", readiness=false. Elapsed: 10.108522364s
Dec 24 14:47:15.986: INFO: Pod "pod-9490b1d1-120e-4aa8-903e-953370c04824": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.127390449s
STEP: Saw pod success
Dec 24 14:47:15.986: INFO: Pod "pod-9490b1d1-120e-4aa8-903e-953370c04824" satisfied condition "success or failure"
Dec 24 14:47:15.994: INFO: Trying to get logs from node iruya-node pod pod-9490b1d1-120e-4aa8-903e-953370c04824 container test-container: 
STEP: delete the pod
Dec 24 14:47:16.399: INFO: Waiting for pod pod-9490b1d1-120e-4aa8-903e-953370c04824 to disappear
Dec 24 14:47:16.414: INFO: Pod pod-9490b1d1-120e-4aa8-903e-953370c04824 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:47:16.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6010" for this suite.
Dec 24 14:47:22.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:47:22.576: INFO: namespace emptydir-6010 deletion completed in 6.155417256s

• [SLOW TEST:18.901 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
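The (root,0666,default) variant writes a file as root with mode 0666 into an emptyDir backed by the node's default storage medium. The volume itself is just:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// An emptyDir with the default medium (node disk; Medium would be
	// "Memory" for the tmpfs-backed variants of this test). It is created
	// empty when the pod is scheduled and removed with the pod.
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{},
		},
	}
	fmt.Printf("%+v\n", vol)
}
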
SSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:47:22.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-c3215cd3-1295-4942-b6eb-838ab6497fa2
STEP: Creating secret with name s-test-opt-upd-141c4a4f-89bd-4aa9-99cd-75b409bdcab0
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c3215cd3-1295-4942-b6eb-838ab6497fa2
STEP: Updating secret s-test-opt-upd-141c4a4f-89bd-4aa9-99cd-75b409bdcab0
STEP: Creating secret with name s-test-opt-create-d32e6aba-40db-4b0e-af3f-9e99bcbfb57a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:47:37.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5394" for this suite.
Dec 24 14:47:59.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:47:59.255: INFO: namespace secrets-5394 deletion completed in 22.149896691s

• [SLOW TEST:36.678 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:47:59.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 14:47:59.377: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9120bade-797d-413d-98c4-07dabd25f9a0" in namespace "projected-1423" to be "success or failure"
Dec 24 14:47:59.384: INFO: Pod "downwardapi-volume-9120bade-797d-413d-98c4-07dabd25f9a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.851158ms
Dec 24 14:48:01.438: INFO: Pod "downwardapi-volume-9120bade-797d-413d-98c4-07dabd25f9a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060936367s
Dec 24 14:48:03.447: INFO: Pod "downwardapi-volume-9120bade-797d-413d-98c4-07dabd25f9a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069964319s
Dec 24 14:48:05.457: INFO: Pod "downwardapi-volume-9120bade-797d-413d-98c4-07dabd25f9a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079726133s
Dec 24 14:48:07.467: INFO: Pod "downwardapi-volume-9120bade-797d-413d-98c4-07dabd25f9a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089559922s
Dec 24 14:48:09.477: INFO: Pod "downwardapi-volume-9120bade-797d-413d-98c4-07dabd25f9a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.099469597s
STEP: Saw pod success
Dec 24 14:48:09.477: INFO: Pod "downwardapi-volume-9120bade-797d-413d-98c4-07dabd25f9a0" satisfied condition "success or failure"
Dec 24 14:48:09.484: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9120bade-797d-413d-98c4-07dabd25f9a0 container client-container: 
STEP: delete the pod
Dec 24 14:48:09.563: INFO: Waiting for pod downwardapi-volume-9120bade-797d-413d-98c4-07dabd25f9a0 to disappear
Dec 24 14:48:09.571: INFO: Pod downwardapi-volume-9120bade-797d-413d-98c4-07dabd25f9a0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:48:09.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1423" for this suite.
Dec 24 14:48:15.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:48:15.748: INFO: namespace projected-1423 deletion completed in 6.168126099s

• [SLOW TEST:16.493 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:48:15.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 24 14:48:22.252: INFO: 0 pods remaining
Dec 24 14:48:22.252: INFO: 0 pods have nil DeletionTimestamp
Dec 24 14:48:22.252: INFO: 
STEP: Gathering metrics
W1224 14:48:23.161333       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 14:48:23.161: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:48:23.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6607" for this suite.
Dec 24 14:48:33.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:48:33.496: INFO: namespace gc-6607 deletion completed in 10.331649692s

• [SLOW TEST:17.748 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
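"Keep the rc around until all its pods are deleted" is foreground cascading deletion: the RC gets a deletionTimestamp plus the foregroundDeletion finalizer, and only disappears once the garbage collector has removed its dependents, which matches the "0 pods remaining" countdown above. The delete options that request it:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Foreground propagation blocks the owner's removal on the garbage
	// collector finishing with its dependents (the RC's pods, here).
	policy := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	fmt.Println(*opts.PropagationPolicy)
}
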
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:48:33.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Dec 24 14:48:33.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1584'
Dec 24 14:48:36.041: INFO: stderr: ""
Dec 24 14:48:36.041: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Dec 24 14:48:37.055: INFO: Selector matched 1 pod for map[app:redis]
Dec 24 14:48:37.055: INFO: Found 0 / 1
Dec 24 14:48:38.049: INFO: Selector matched 1 pod for map[app:redis]
Dec 24 14:48:38.049: INFO: Found 0 / 1
Dec 24 14:48:39.069: INFO: Selector matched 1 pod for map[app:redis]
Dec 24 14:48:39.069: INFO: Found 0 / 1
Dec 24 14:48:40.057: INFO: Selector matched 1 pod for map[app:redis]
Dec 24 14:48:40.057: INFO: Found 0 / 1
Dec 24 14:48:41.048: INFO: Selector matched 1 pod for map[app:redis]
Dec 24 14:48:41.048: INFO: Found 0 / 1
Dec 24 14:48:42.050: INFO: Selector matched 1 pod for map[app:redis]
Dec 24 14:48:42.050: INFO: Found 0 / 1
Dec 24 14:48:43.054: INFO: Selector matched 1 pod for map[app:redis]
Dec 24 14:48:43.054: INFO: Found 0 / 1
Dec 24 14:48:44.097: INFO: Selector matched 1 pod for map[app:redis]
Dec 24 14:48:44.097: INFO: Found 1 / 1
Dec 24 14:48:44.097: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 24 14:48:44.104: INFO: Selector matched 1 pod for map[app:redis]
Dec 24 14:48:44.104: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
STEP: checking for matching strings
Dec 24 14:48:44.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5ch9l redis-master --namespace=kubectl-1584'
Dec 24 14:48:44.262: INFO: stderr: ""
Dec 24 14:48:44.262: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Dec 14:48:42.693 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Dec 14:48:42.693 # Server started, Redis version 3.2.12\n1:M 24 Dec 14:48:42.693 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Dec 14:48:42.693 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 24 14:48:44.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5ch9l redis-master --namespace=kubectl-1584 --tail=1'
Dec 24 14:48:44.460: INFO: stderr: ""
Dec 24 14:48:44.460: INFO: stdout: "1:M 24 Dec 14:48:42.693 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 24 14:48:44.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5ch9l redis-master --namespace=kubectl-1584 --limit-bytes=1'
Dec 24 14:48:44.650: INFO: stderr: ""
Dec 24 14:48:44.650: INFO: stdout: " "
STEP: exposing timestamps
Dec 24 14:48:44.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5ch9l redis-master --namespace=kubectl-1584 --tail=1 --timestamps'
Dec 24 14:48:44.838: INFO: stderr: ""
Dec 24 14:48:44.838: INFO: stdout: "2019-12-24T14:48:42.694610544Z 1:M 24 Dec 14:48:42.693 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 24 14:48:47.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5ch9l redis-master --namespace=kubectl-1584 --since=1s'
Dec 24 14:48:47.626: INFO: stderr: ""
Dec 24 14:48:47.626: INFO: stdout: ""
Dec 24 14:48:47.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5ch9l redis-master --namespace=kubectl-1584 --since=24h'
Dec 24 14:48:47.799: INFO: stderr: ""
Dec 24 14:48:47.799: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Dec 14:48:42.693 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Dec 14:48:42.693 # Server started, Redis version 3.2.12\n1:M 24 Dec 14:48:42.693 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Dec 14:48:42.693 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Dec 24 14:48:47.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1584'
Dec 24 14:48:47.927: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 14:48:47.927: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 24 14:48:47.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1584'
Dec 24 14:48:48.122: INFO: stderr: "No resources found.\n"
Dec 24 14:48:48.122: INFO: stdout: ""
Dec 24 14:48:48.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1584 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 24 14:48:48.248: INFO: stderr: ""
Dec 24 14:48:48.248: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:48:48.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1584" for this suite.
Dec 24 14:49:10.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:49:10.379: INFO: namespace kubectl-1584 deletion completed in 22.127155435s

• [SLOW TEST:36.882 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
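
The four filters exercised above map one-to-one onto PodLogOptions when calling the API directly instead of going through kubectl: --tail to TailLines, --limit-bytes to LimitBytes, --timestamps to Timestamps, and --since to SinceSeconds. A sketch assuming the pod and namespace from this run and the v1.15-era context-less Stream():

    package main

    import (
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        tail, since := int64(1), int64(24*3600) // --tail=1, --since=24h
        opts := &corev1.PodLogOptions{
            Container:    "redis-master",
            TailLines:    &tail,
            Timestamps:   true,   // --timestamps
            SinceSeconds: &since, // LimitBytes *int64 would cover --limit-bytes
        }
        body, err := cs.CoreV1().Pods("kubectl-1584").GetLogs("redis-master-5ch9l", opts).Stream()
        if err != nil {
            panic(err)
        }
        defer body.Close()
        io.Copy(os.Stdout, body) // the same "ready to accept connections" line the --tail=1 run printed
    }
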
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:49:10.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 14:49:10.477: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 24 14:49:10.512: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 24 14:49:15.530: INFO: Pod name sample-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Dec 24 14:49:17.554: INFO: Creating deployment "test-rolling-update-deployment"
Dec 24 14:49:17.561: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 24 14:49:17.576: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 24 14:49:19.592: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Dec 24 14:49:19.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 14:49:21.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 14:49:23.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 14:49:25.604: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712795757, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 24 14:49:27.603: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 24 14:49:27.616: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-9753,SelfLink:/apis/apps/v1/namespaces/deployment-9753/deployments/test-rolling-update-deployment,UID:85bd8aa1-0cc5-47c9-98e9-a4328b2c83f2,ResourceVersion:17905224,Generation:1,CreationTimestamp:2019-12-24 14:49:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-24 14:49:17 +0000 UTC 2019-12-24 14:49:17 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-24 14:49:26 +0000 UTC 2019-12-24 14:49:17 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 24 14:49:27.621: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-9753,SelfLink:/apis/apps/v1/namespaces/deployment-9753/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:6ef1094f-0398-416b-b73e-166656b0d3ac,ResourceVersion:17905214,Generation:1,CreationTimestamp:2019-12-24 14:49:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 85bd8aa1-0cc5-47c9-98e9-a4328b2c83f2 0xc001976637 0xc001976638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 24 14:49:27.621: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 24 14:49:27.622: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-9753,SelfLink:/apis/apps/v1/namespaces/deployment-9753/replicasets/test-rolling-update-controller,UID:836d3ed4-698b-48e6-aaaf-ffbde2e0ecad,ResourceVersion:17905223,Generation:2,CreationTimestamp:2019-12-24 14:49:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 85bd8aa1-0cc5-47c9-98e9-a4328b2c83f2 0xc00197654f 0xc001976560}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 24 14:49:27.629: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-42v56" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-42v56,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-9753,SelfLink:/api/v1/namespaces/deployment-9753/pods/test-rolling-update-deployment-79f6b9d75c-42v56,UID:b7b2c579-629d-41d8-8d56-8ecd23c40d38,ResourceVersion:17905213,Generation:0,CreationTimestamp:2019-12-24 14:49:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 6ef1094f-0398-416b-b73e-166656b0d3ac 0xc001c4bed7 0xc001c4bed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jmgjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jmgjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-jmgjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4bf50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4bf70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:49:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:49:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:49:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:49:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-24 14:49:17 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-24 14:49:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://1dbf6cf8f3fa9e6f7b67e7945b2241180a988c3da98bab46bcdc282a93e69fd1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:49:27.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9753" for this suite.
Dec 24 14:49:33.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:49:33.760: INFO: namespace deployment-9753 deletion completed in 6.125658852s

• [SLOW TEST:23.380 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
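
The strategy printed in the Deployment dump above is the default: RollingUpdate with maxUnavailable and maxSurge both at 25%, which is why the old pod is only torn down once the new one is ready. Constructed explicitly, it looks like this sketch (values taken from the dump):

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        maxUnavailable := intstr.FromString("25%")
        maxSurge := intstr.FromString("25%")
        strategy := appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                MaxUnavailable: &maxUnavailable, // at most 25% of desired pods down at once
                MaxSurge:       &maxSurge,       // at most 25% extra pods during the rollout
            },
        }
        fmt.Printf("%+v\n", strategy)
    }
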
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:49:33.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 24 14:49:33.968: INFO: Waiting up to 5m0s for pod "pod-7875c91e-9639-4b3a-b62a-4cb26cfd4493" in namespace "emptydir-9894" to be "success or failure"
Dec 24 14:49:33.976: INFO: Pod "pod-7875c91e-9639-4b3a-b62a-4cb26cfd4493": Phase="Pending", Reason="", readiness=false. Elapsed: 7.939071ms
Dec 24 14:49:35.988: INFO: Pod "pod-7875c91e-9639-4b3a-b62a-4cb26cfd4493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020218835s
Dec 24 14:49:38.001: INFO: Pod "pod-7875c91e-9639-4b3a-b62a-4cb26cfd4493": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0331167s
Dec 24 14:49:40.262: INFO: Pod "pod-7875c91e-9639-4b3a-b62a-4cb26cfd4493": Phase="Pending", Reason="", readiness=false. Elapsed: 6.294220693s
Dec 24 14:49:42.273: INFO: Pod "pod-7875c91e-9639-4b3a-b62a-4cb26cfd4493": Phase="Pending", Reason="", readiness=false. Elapsed: 8.304718034s
Dec 24 14:49:44.284: INFO: Pod "pod-7875c91e-9639-4b3a-b62a-4cb26cfd4493": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.315396896s
STEP: Saw pod success
Dec 24 14:49:44.284: INFO: Pod "pod-7875c91e-9639-4b3a-b62a-4cb26cfd4493" satisfied condition "success or failure"
Dec 24 14:49:44.288: INFO: Trying to get logs from node iruya-node pod pod-7875c91e-9639-4b3a-b62a-4cb26cfd4493 container test-container: 
STEP: delete the pod
Dec 24 14:49:44.436: INFO: Waiting for pod pod-7875c91e-9639-4b3a-b62a-4cb26cfd4493 to disappear
Dec 24 14:49:44.445: INFO: Pod pod-7875c91e-9639-4b3a-b62a-4cb26cfd4493 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:49:44.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9894" for this suite.
Dec 24 14:49:50.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:49:50.626: INFO: namespace emptydir-9894 deletion completed in 6.166466073s

• [SLOW TEST:16.865 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
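
"(root,0777,tmpfs)" denotes an emptyDir backed by memory: setting the medium to Memory mounts a tmpfs instead of node-local disk, and the test container then verifies the 0777 mode on the mount point. A sketch of the volume definition; the volume and mount names are hypothetical:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "test-volume", // hypothetical name
            VolumeSource: corev1.VolumeSource{
                // Medium "Memory" backs the emptyDir with tmpfs rather than node disk.
                EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
            },
        }
        mount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
        fmt.Printf("%+v\n%+v\n", vol, mount)
    }
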
------------------------------
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:49:50.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-k555
STEP: Creating a pod to test atomic-volume-subpath
Dec 24 14:49:50.742: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-k555" in namespace "subpath-1174" to be "success or failure"
Dec 24 14:49:50.746: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350185ms
Dec 24 14:49:52.764: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021877443s
Dec 24 14:49:54.772: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030794893s
Dec 24 14:49:56.791: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049822613s
Dec 24 14:49:58.805: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Running", Reason="", readiness=true. Elapsed: 8.063530157s
Dec 24 14:50:00.815: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Running", Reason="", readiness=true. Elapsed: 10.073497515s
Dec 24 14:50:02.829: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Running", Reason="", readiness=true. Elapsed: 12.087368967s
Dec 24 14:50:04.842: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Running", Reason="", readiness=true. Elapsed: 14.100599832s
Dec 24 14:50:06.851: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Running", Reason="", readiness=true. Elapsed: 16.109406134s
Dec 24 14:50:08.880: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Running", Reason="", readiness=true. Elapsed: 18.13802806s
Dec 24 14:50:10.886: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Running", Reason="", readiness=true. Elapsed: 20.144833171s
Dec 24 14:50:12.893: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Running", Reason="", readiness=true. Elapsed: 22.151161116s
Dec 24 14:50:14.920: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Running", Reason="", readiness=true. Elapsed: 24.178095479s
Dec 24 14:50:16.932: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Running", Reason="", readiness=true. Elapsed: 26.190469734s
Dec 24 14:50:18.953: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Running", Reason="", readiness=true. Elapsed: 28.211311644s
Dec 24 14:50:20.960: INFO: Pod "pod-subpath-test-configmap-k555": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.218734185s
STEP: Saw pod success
Dec 24 14:50:20.960: INFO: Pod "pod-subpath-test-configmap-k555" satisfied condition "success or failure"
Dec 24 14:50:20.963: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-k555 container test-container-subpath-configmap-k555: 
STEP: delete the pod
Dec 24 14:50:21.010: INFO: Waiting for pod pod-subpath-test-configmap-k555 to disappear
Dec 24 14:50:21.042: INFO: Pod pod-subpath-test-configmap-k555 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-k555
Dec 24 14:50:21.042: INFO: Deleting pod "pod-subpath-test-configmap-k555" in namespace "subpath-1174"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:50:21.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1174" for this suite.
Dec 24 14:50:27.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:50:27.144: INFO: namespace subpath-1174 deletion completed in 6.093539867s

• [SLOW TEST:36.518 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
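
"mountPath of existing file" relies on subPath: the pod mounts a single key of the configMap volume over an existing file path instead of shadowing the whole directory, and the atomic-writer machinery keeps that file consistent across updates. A sketch; the configMap name, key, and target path are hypothetical:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        volumes := []corev1.Volume{{
            Name: "config",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // hypothetical
                },
            },
        }}
        mounts := []corev1.VolumeMount{{
            Name:      "config",
            MountPath: "/etc/existing-file.conf", // mount one key over an existing file
            SubPath:   "existing-file.conf",      // single key within the configMap volume
        }}
        fmt.Printf("%+v\n%+v\n", volumes, mounts)
    }
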
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:50:27.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 24 14:50:43.438: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 24 14:50:43.454: INFO: Pod pod-with-poststart-http-hook still exists
Dec 24 14:50:45.457: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 24 14:50:45.472: INFO: Pod pod-with-poststart-http-hook still exists
Dec 24 14:50:47.454: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 24 14:50:47.462: INFO: Pod pod-with-poststart-http-hook still exists
Dec 24 14:50:49.454: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 24 14:50:49.467: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:50:49.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4598" for this suite.
Dec 24 14:51:11.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:51:11.575: INFO: namespace container-lifecycle-hook-4598 deletion completed in 22.102961438s

• [SLOW TEST:44.431 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
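
The hook under test is a postStart httpGet: the kubelet issues the GET right after the container starts, against the handler pod created in the BeforeEach above. A sketch of the lifecycle block in the v1.15 API (corev1.Handler was later renamed LifecycleHandler); the host, port, and path are hypothetical:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // Attach this to a container via Container.Lifecycle; the container is not
        // treated as started until the hook handler has returned.
        lifecycle := &corev1.Lifecycle{
            PostStart: &corev1.Handler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "10.44.0.1", // hypothetical address of the hook-handler pod
                    Path: "/echo?msg=poststart",
                    Port: intstr.FromInt(8080), // hypothetical port
                },
            },
        }
        fmt.Printf("%+v\n", lifecycle)
    }
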
------------------------------
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:51:11.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Dec 24 14:51:11.688: INFO: Waiting up to 5m0s for pod "var-expansion-56f3bcf6-efbe-4f54-877e-d49b615c94f6" in namespace "var-expansion-5556" to be "success or failure"
Dec 24 14:51:11.703: INFO: Pod "var-expansion-56f3bcf6-efbe-4f54-877e-d49b615c94f6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.803505ms
Dec 24 14:51:13.713: INFO: Pod "var-expansion-56f3bcf6-efbe-4f54-877e-d49b615c94f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024951557s
Dec 24 14:51:15.725: INFO: Pod "var-expansion-56f3bcf6-efbe-4f54-877e-d49b615c94f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037685239s
Dec 24 14:51:17.737: INFO: Pod "var-expansion-56f3bcf6-efbe-4f54-877e-d49b615c94f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049195214s
Dec 24 14:51:19.752: INFO: Pod "var-expansion-56f3bcf6-efbe-4f54-877e-d49b615c94f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064259572s
Dec 24 14:51:21.761: INFO: Pod "var-expansion-56f3bcf6-efbe-4f54-877e-d49b615c94f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073156088s
STEP: Saw pod success
Dec 24 14:51:21.761: INFO: Pod "var-expansion-56f3bcf6-efbe-4f54-877e-d49b615c94f6" satisfied condition "success or failure"
Dec 24 14:51:21.766: INFO: Trying to get logs from node iruya-node pod var-expansion-56f3bcf6-efbe-4f54-877e-d49b615c94f6 container dapi-container: 
STEP: delete the pod
Dec 24 14:51:21.835: INFO: Waiting for pod var-expansion-56f3bcf6-efbe-4f54-877e-d49b615c94f6 to disappear
Dec 24 14:51:21.843: INFO: Pod var-expansion-56f3bcf6-efbe-4f54-877e-d49b615c94f6 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:51:21.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5556" for this suite.
Dec 24 14:51:27.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:51:28.024: INFO: namespace var-expansion-5556 deletion completed in 6.171474755s

• [SLOW TEST:16.448 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
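
Composing env vars works through $(VAR) expansion: a value may reference any variable defined earlier in the same env list, and the kubelet substitutes it at container start (undefined references are left verbatim). A sketch with hypothetical names and values:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        env := []corev1.EnvVar{
            {Name: "FOO", Value: "foo-value"},
            // $(FOO) is expanded because FOO appears earlier in the list;
            // a reference to an undefined variable would be kept as-is.
            {Name: "BAR", Value: "$(FOO);;$(FOO)"},
        }
        fmt.Printf("%+v\n", env)
    }
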
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:51:28.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 14:51:28.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f982f584-f4e8-4402-a6b3-eb20a9a1d691" in namespace "downward-api-2844" to be "success or failure"
Dec 24 14:51:28.196: INFO: Pod "downwardapi-volume-f982f584-f4e8-4402-a6b3-eb20a9a1d691": Phase="Pending", Reason="", readiness=false. Elapsed: 24.866572ms
Dec 24 14:51:30.205: INFO: Pod "downwardapi-volume-f982f584-f4e8-4402-a6b3-eb20a9a1d691": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033707674s
Dec 24 14:51:32.212: INFO: Pod "downwardapi-volume-f982f584-f4e8-4402-a6b3-eb20a9a1d691": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041150326s
Dec 24 14:51:34.219: INFO: Pod "downwardapi-volume-f982f584-f4e8-4402-a6b3-eb20a9a1d691": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04769033s
Dec 24 14:51:36.231: INFO: Pod "downwardapi-volume-f982f584-f4e8-4402-a6b3-eb20a9a1d691": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059731804s
Dec 24 14:51:38.240: INFO: Pod "downwardapi-volume-f982f584-f4e8-4402-a6b3-eb20a9a1d691": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069447641s
STEP: Saw pod success
Dec 24 14:51:38.240: INFO: Pod "downwardapi-volume-f982f584-f4e8-4402-a6b3-eb20a9a1d691" satisfied condition "success or failure"
Dec 24 14:51:38.249: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f982f584-f4e8-4402-a6b3-eb20a9a1d691 container client-container: 
STEP: delete the pod
Dec 24 14:51:38.404: INFO: Waiting for pod downwardapi-volume-f982f584-f4e8-4402-a6b3-eb20a9a1d691 to disappear
Dec 24 14:51:38.417: INFO: Pod downwardapi-volume-f982f584-f4e8-4402-a6b3-eb20a9a1d691 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:51:38.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2844" for this suite.
Dec 24 14:51:44.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:51:44.582: INFO: namespace downward-api-2844 deletion completed in 6.15563856s

• [SLOW TEST:16.558 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
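
The fallback being verified: a downwardAPI volume file with a resourceFieldRef on limits.memory reports the node's allocatable memory when the container sets no memory limit of its own. A sketch of the volume; the names are hypothetical:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo", // hypothetical name
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path: "memory_limit",
                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                            ContainerName: "client-container", // hypothetical
                            // With no limit set on the container, this resolves
                            // to the node's allocatable memory.
                            Resource: "limits.memory",
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
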
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:51:44.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-1164/configmap-test-cc6aef8c-054a-42d3-8d7a-7676a0b866f7
STEP: Creating a pod to test consume configMaps
Dec 24 14:51:44.741: INFO: Waiting up to 5m0s for pod "pod-configmaps-4f682860-7a86-4fba-9adb-019ecd611b97" in namespace "configmap-1164" to be "success or failure"
Dec 24 14:51:44.813: INFO: Pod "pod-configmaps-4f682860-7a86-4fba-9adb-019ecd611b97": Phase="Pending", Reason="", readiness=false. Elapsed: 71.979955ms
Dec 24 14:51:46.825: INFO: Pod "pod-configmaps-4f682860-7a86-4fba-9adb-019ecd611b97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083862762s
Dec 24 14:51:48.831: INFO: Pod "pod-configmaps-4f682860-7a86-4fba-9adb-019ecd611b97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090046858s
Dec 24 14:51:50.893: INFO: Pod "pod-configmaps-4f682860-7a86-4fba-9adb-019ecd611b97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151608864s
Dec 24 14:51:52.900: INFO: Pod "pod-configmaps-4f682860-7a86-4fba-9adb-019ecd611b97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.159460064s
STEP: Saw pod success
Dec 24 14:51:52.901: INFO: Pod "pod-configmaps-4f682860-7a86-4fba-9adb-019ecd611b97" satisfied condition "success or failure"
Dec 24 14:51:52.905: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4f682860-7a86-4fba-9adb-019ecd611b97 container env-test: 
STEP: delete the pod
Dec 24 14:51:52.973: INFO: Waiting for pod pod-configmaps-4f682860-7a86-4fba-9adb-019ecd611b97 to disappear
Dec 24 14:51:53.044: INFO: Pod pod-configmaps-4f682860-7a86-4fba-9adb-019ecd611b97 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:51:53.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1164" for this suite.
Dec 24 14:51:59.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:51:59.202: INFO: namespace configmap-1164 deletion completed in 6.149888381s

• [SLOW TEST:14.619 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
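
Consuming a ConfigMap "via environment variable" means an EnvVar whose ValueFrom points at a configMapKeyRef. A sketch reusing the ConfigMap name created in this run; the env var name and key are hypothetical:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        env := []corev1.EnvVar{{
            Name: "CONFIG_DATA_1", // hypothetical env var name
            ValueFrom: &corev1.EnvVarSource{
                ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "configmap-test-cc6aef8c-054a-42d3-8d7a-7676a0b866f7", // from this run
                    },
                    Key: "data-1", // hypothetical key
                },
            },
        }}
        fmt.Printf("%+v\n", env)
    }
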
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:51:59.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 24 14:51:59.292: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6078,SelfLink:/api/v1/namespaces/watch-6078/configmaps/e2e-watch-test-configmap-a,UID:9ae6cf17-6bfe-44de-88c1-5795663ed49d,ResourceVersion:17905622,Generation:0,CreationTimestamp:2019-12-24 14:51:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 24 14:51:59.293: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6078,SelfLink:/api/v1/namespaces/watch-6078/configmaps/e2e-watch-test-configmap-a,UID:9ae6cf17-6bfe-44de-88c1-5795663ed49d,ResourceVersion:17905622,Generation:0,CreationTimestamp:2019-12-24 14:51:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 24 14:52:09.306: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6078,SelfLink:/api/v1/namespaces/watch-6078/configmaps/e2e-watch-test-configmap-a,UID:9ae6cf17-6bfe-44de-88c1-5795663ed49d,ResourceVersion:17905636,Generation:0,CreationTimestamp:2019-12-24 14:51:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 24 14:52:09.307: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6078,SelfLink:/api/v1/namespaces/watch-6078/configmaps/e2e-watch-test-configmap-a,UID:9ae6cf17-6bfe-44de-88c1-5795663ed49d,ResourceVersion:17905636,Generation:0,CreationTimestamp:2019-12-24 14:51:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 24 14:52:19.320: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6078,SelfLink:/api/v1/namespaces/watch-6078/configmaps/e2e-watch-test-configmap-a,UID:9ae6cf17-6bfe-44de-88c1-5795663ed49d,ResourceVersion:17905651,Generation:0,CreationTimestamp:2019-12-24 14:51:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 24 14:52:19.320: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6078,SelfLink:/api/v1/namespaces/watch-6078/configmaps/e2e-watch-test-configmap-a,UID:9ae6cf17-6bfe-44de-88c1-5795663ed49d,ResourceVersion:17905651,Generation:0,CreationTimestamp:2019-12-24 14:51:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 24 14:52:29.333: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6078,SelfLink:/api/v1/namespaces/watch-6078/configmaps/e2e-watch-test-configmap-a,UID:9ae6cf17-6bfe-44de-88c1-5795663ed49d,ResourceVersion:17905665,Generation:0,CreationTimestamp:2019-12-24 14:51:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 24 14:52:29.333: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6078,SelfLink:/api/v1/namespaces/watch-6078/configmaps/e2e-watch-test-configmap-a,UID:9ae6cf17-6bfe-44de-88c1-5795663ed49d,ResourceVersion:17905665,Generation:0,CreationTimestamp:2019-12-24 14:51:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 24 14:52:39.347: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6078,SelfLink:/api/v1/namespaces/watch-6078/configmaps/e2e-watch-test-configmap-b,UID:c0e6e2b4-b70d-473e-9275-a95a70f8f924,ResourceVersion:17905680,Generation:0,CreationTimestamp:2019-12-24 14:52:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 24 14:52:39.347: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6078,SelfLink:/api/v1/namespaces/watch-6078/configmaps/e2e-watch-test-configmap-b,UID:c0e6e2b4-b70d-473e-9275-a95a70f8f924,ResourceVersion:17905680,Generation:0,CreationTimestamp:2019-12-24 14:52:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 24 14:52:49.368: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6078,SelfLink:/api/v1/namespaces/watch-6078/configmaps/e2e-watch-test-configmap-b,UID:c0e6e2b4-b70d-473e-9275-a95a70f8f924,ResourceVersion:17905694,Generation:0,CreationTimestamp:2019-12-24 14:52:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 24 14:52:49.368: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6078,SelfLink:/api/v1/namespaces/watch-6078/configmaps/e2e-watch-test-configmap-b,UID:c0e6e2b4-b70d-473e-9275-a95a70f8f924,ResourceVersion:17905694,Generation:0,CreationTimestamp:2019-12-24 14:52:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:52:59.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6078" for this suite.
Dec 24 14:53:05.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:53:05.548: INFO: namespace watch-6078 deletion completed in 6.167673793s

• [SLOW TEST:66.345 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
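Each event above appears twice because this spec runs two watchers against the same label (hence "multiple-watchers-A"), and both observe the identical ADDED/MODIFIED/DELETED stream. Reconstructed from the object dumps in the log, the watched ConfigMap is simply:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "2"

One way to observe the same notification stream by hand (the kubectl invocation is illustrative, not part of the suite):

kubectl get configmaps --watch -l watch-this-configmap=multiple-watchers-A --namespace=watch-6078
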
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:53:05.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 24 14:53:05.660: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:53:22.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-531" for this suite.
Dec 24 14:53:44.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:53:44.328: INFO: namespace init-container-531 deletion completed in 22.139066738s

• [SLOW TEST:38.779 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
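The spec above logs only "PodSpec: initContainers in spec.initContainers", so for readers following along, a hedged sketch of the kind of pod it creates (container names and commands are assumptions; busybox:1.29 is taken from the node's image list logged later in this run). Init containers run sequentially, each to completion, before the regular containers start; with restartPolicy Always the kubelet then keeps the main container running:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox:1.29
    command: ['sh', '-c', 'echo init-1 done']
  - name: init-2
    image: busybox:1.29
    command: ['sh', '-c', 'echo init-2 done']
  containers:
  - name: run-always
    image: busybox:1.29
    command: ['sh', '-c', 'sleep 3600']
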
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:53:44.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 14:53:44.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b556d502-403a-4671-b257-ae9a8c178cca" in namespace "projected-7023" to be "success or failure"
Dec 24 14:53:44.435: INFO: Pod "downwardapi-volume-b556d502-403a-4671-b257-ae9a8c178cca": Phase="Pending", Reason="", readiness=false. Elapsed: 32.128418ms
Dec 24 14:53:46.444: INFO: Pod "downwardapi-volume-b556d502-403a-4671-b257-ae9a8c178cca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04162351s
Dec 24 14:53:48.466: INFO: Pod "downwardapi-volume-b556d502-403a-4671-b257-ae9a8c178cca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063748095s
Dec 24 14:53:50.487: INFO: Pod "downwardapi-volume-b556d502-403a-4671-b257-ae9a8c178cca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084662969s
Dec 24 14:53:52.528: INFO: Pod "downwardapi-volume-b556d502-403a-4671-b257-ae9a8c178cca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125902937s
Dec 24 14:53:54.542: INFO: Pod "downwardapi-volume-b556d502-403a-4671-b257-ae9a8c178cca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.139539986s
STEP: Saw pod success
Dec 24 14:53:54.542: INFO: Pod "downwardapi-volume-b556d502-403a-4671-b257-ae9a8c178cca" satisfied condition "success or failure"
Dec 24 14:53:54.549: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b556d502-403a-4671-b257-ae9a8c178cca container client-container: 
STEP: delete the pod
Dec 24 14:53:54.666: INFO: Waiting for pod downwardapi-volume-b556d502-403a-4671-b257-ae9a8c178cca to disappear
Dec 24 14:53:54.734: INFO: Pod downwardapi-volume-b556d502-403a-4671-b257-ae9a8c178cca no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:53:54.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7023" for this suite.
Dec 24 14:54:00.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:54:00.918: INFO: namespace projected-7023 deletion completed in 6.176759189s

• [SLOW TEST:16.590 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
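What "set mode on item file" means in practice: a projected downwardAPI volume item can carry an explicit file mode, and the test's client-container reads the permissions back. A sketch along those lines (the path, pod name, and 0400 mode are illustrative; the suite uses its own mounttest image, for which busybox:1.29 stands in here):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ['sh', '-c', 'ls -l /etc/podinfo && cat /etc/podinfo/podname']
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400

With that spec, ls -l inside the container should report the file as read-only for the owner (0400), which is the assertion this class of spec makes.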
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:54:00.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-9efb3843-9a1b-4c24-b1fa-94310b43fa69 in namespace container-probe-5147
Dec 24 14:54:09.012: INFO: Started pod liveness-9efb3843-9a1b-4c24-b1fa-94310b43fa69 in namespace container-probe-5147
STEP: checking the pod's current state and verifying that restartCount is present
Dec 24 14:54:09.018: INFO: Initial restart count of pod liveness-9efb3843-9a1b-4c24-b1fa-94310b43fa69 is 0
Dec 24 14:54:31.131: INFO: Restart count of pod container-probe-5147/liveness-9efb3843-9a1b-4c24-b1fa-94310b43fa69 is now 1 (22.113028916s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:54:31.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5147" for this suite.
Dec 24 14:54:37.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:54:37.396: INFO: namespace container-probe-5147 deletion completed in 6.159782699s

• [SLOW TEST:36.477 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
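The restart observed above (restartCount 0 to 1 after about 22s) is the kubelet reacting to a failing httpGet probe. A hedged reconstruction of the shape of such a pod, using the liveness test image that appears in this node's image list (the /server argument, port 8080, and the failure schedule are assumptions carried over from the classic liveness example, where /healthz answers 200 briefly and then starts returning errors):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  restartPolicy: Always
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.1
    args: ['/server']
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1

Once the probe fails, the kubelet kills the container and, under restartPolicy Always, starts it again; that restartCount increment is exactly what the spec polls for.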
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:54:37.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 14:54:37.537: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e371909a-1832-4729-a67c-73656195f68f" in namespace "downward-api-9009" to be "success or failure"
Dec 24 14:54:37.550: INFO: Pod "downwardapi-volume-e371909a-1832-4729-a67c-73656195f68f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.583388ms
Dec 24 14:54:39.558: INFO: Pod "downwardapi-volume-e371909a-1832-4729-a67c-73656195f68f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020401675s
Dec 24 14:54:41.573: INFO: Pod "downwardapi-volume-e371909a-1832-4729-a67c-73656195f68f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036177742s
Dec 24 14:54:43.591: INFO: Pod "downwardapi-volume-e371909a-1832-4729-a67c-73656195f68f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053838051s
Dec 24 14:54:45.606: INFO: Pod "downwardapi-volume-e371909a-1832-4729-a67c-73656195f68f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068533438s
STEP: Saw pod success
Dec 24 14:54:45.606: INFO: Pod "downwardapi-volume-e371909a-1832-4729-a67c-73656195f68f" satisfied condition "success or failure"
Dec 24 14:54:45.611: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e371909a-1832-4729-a67c-73656195f68f container client-container: 
STEP: delete the pod
Dec 24 14:54:45.732: INFO: Waiting for pod downwardapi-volume-e371909a-1832-4729-a67c-73656195f68f to disappear
Dec 24 14:54:45.818: INFO: Pod downwardapi-volume-e371909a-1832-4729-a67c-73656195f68f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 14:54:45.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9009" for this suite.
Dec 24 14:54:51.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 14:54:52.206: INFO: namespace downward-api-9009 deletion completed in 6.304524956s

• [SLOW TEST:14.809 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
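The downward API volume plugin tested here writes resource values into files rather than env vars. A sketch of a pod exposing its own limits.cpu this way (pod name, file path, and the 500m limit are illustrative; divisor 1m makes the file contain the limit in millicores):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ['sh', '-c', 'cat /etc/podinfo/cpu_limit']
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m

With these values the container prints 500.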
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 14:54:52.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2570
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2570
STEP: Creating statefulset with conflicting port in namespace statefulset-2570
STEP: Waiting until pod test-pod will start running in namespace statefulset-2570
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2570
Dec 24 14:55:04.593: INFO: Observed stateful pod in namespace: statefulset-2570, name: ss-0, uid: 7e7fd347-5003-4d51-a56b-dbf46be46883, status phase: Pending. Waiting for statefulset controller to delete.
Dec 24 15:00:04.593: INFO: Pod ss-0 expected to be re-created at least once
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 24 15:00:04.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-2570'
Dec 24 15:00:06.745: INFO: stderr: ""
Dec 24 15:00:06.746: INFO: stdout: "Name:           ss-0\nNamespace:      statefulset-2570\nPriority:       0\nNode:           iruya-node/\nLabels:         baz=blah\n                controller-revision-hash=ss-6f98bdb9c4\n                foo=bar\n                statefulset.kubernetes.io/pod-name=ss-0\nAnnotations:    \nStatus:         Pending\nIP:             \nControlled By:  StatefulSet/ss\nContainers:\n  nginx:\n    Image:        docker.io/library/nginx:1.14-alpine\n    Port:         21017/TCP\n    Host Port:    21017/TCP\n    Environment:  \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rc88v (ro)\nVolumes:\n  default-token-rc88v:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-rc88v\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason            Age   From                 Message\n  ----     ------            ----  ----                 -------\n  Warning  PodFitsHostPorts  5m8s  kubelet, iruya-node  Predicate PodFitsHostPorts failed\n"
Dec 24 15:00:06.746: INFO: 
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-2570
Priority:       0
Node:           iruya-node/
Labels:         baz=blah
                controller-revision-hash=ss-6f98bdb9c4
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    
Status:         Pending
IP:             
Controlled By:  StatefulSet/ss
Containers:
  nginx:
    Image:        docker.io/library/nginx:1.14-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rc88v (ro)
Volumes:
  default-token-rc88v:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rc88v
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                 Message
  ----     ------            ----  ----                 -------
  Warning  PodFitsHostPorts  5m8s  kubelet, iruya-node  Predicate PodFitsHostPorts failed

Dec 24 15:00:06.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-2570 --tail=100'
Dec 24 15:00:06.905: INFO: rc: 1
Dec 24 15:00:06.905: INFO: 
Last 100 log lines of ss-0:

Dec 24 15:00:06.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-2570'
Dec 24 15:00:07.166: INFO: stderr: ""
Dec 24 15:00:07.166: INFO: stdout: "Name:         test-pod\nNamespace:    statefulset-2570\nPriority:     0\nNode:         iruya-node/10.96.3.65\nStart Time:   Tue, 24 Dec 2019 14:54:52 +0000\nLabels:       \nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nContainers:\n  nginx:\n    Container ID:   docker://9b4d75759d924d4c4a46accecb4c9841f2eeba3d524492b874ef2525f65841ca\n    Image:          docker.io/library/nginx:1.14-alpine\n    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n    Port:           21017/TCP\n    Host Port:      21017/TCP\n    State:          Running\n      Started:      Tue, 24 Dec 2019 14:55:01 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rc88v (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-rc88v:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-rc88v\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason   Age   From                 Message\n  ----    ------   ----  ----                 -------\n  Normal  Pulled   5m9s  kubelet, iruya-node  Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n  Normal  Created  5m7s  kubelet, iruya-node  Created container nginx\n  Normal  Started  5m6s  kubelet, iruya-node  Started container nginx\n"
Dec 24 15:00:07.166: INFO: 
Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-2570
Priority:     0
Node:         iruya-node/10.96.3.65
Start Time:   Tue, 24 Dec 2019 14:54:52 +0000
Labels:       
Annotations:  
Status:       Running
IP:           10.44.0.1
Containers:
  nginx:
    Container ID:   docker://9b4d75759d924d4c4a46accecb4c9841f2eeba3d524492b874ef2525f65841ca
    Image:          docker.io/library/nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Tue, 24 Dec 2019 14:55:01 +0000
    Ready:          True
    Restart Count:  0
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rc88v (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-rc88v:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rc88v
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                 Message
  ----    ------   ----  ----                 -------
  Normal  Pulled   5m9s  kubelet, iruya-node  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
  Normal  Created  5m7s  kubelet, iruya-node  Created container nginx
  Normal  Started  5m6s  kubelet, iruya-node  Started container nginx

Dec 24 15:00:07.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-2570 --tail=100'
Dec 24 15:00:07.297: INFO: stderr: ""
Dec 24 15:00:07.297: INFO: stdout: ""
Dec 24 15:00:07.297: INFO: 
Last 100 log lines of test-pod:

Dec 24 15:00:07.297: INFO: Deleting all statefulset in ns statefulset-2570
Dec 24 15:00:07.303: INFO: Scaling statefulset ss to 0
Dec 24 15:00:17.381: INFO: Waiting for statefulset status.replicas updated to 0
Dec 24 15:00:17.385: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Collecting events from namespace "statefulset-2570".
STEP: Found 12 events.
Dec 24 15:00:17.424: INFO: At 2019-12-24 14:54:52 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Dec 24 15:00:17.424: INFO: At 2019-12-24 14:54:52 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-2570/ss is recreating failed Pod ss-0
Dec 24 15:00:17.424: INFO: At 2019-12-24 14:54:52 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Dec 24 15:00:17.424: INFO: At 2019-12-24 14:54:52 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 24 15:00:17.424: INFO: At 2019-12-24 14:54:53 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 24 15:00:17.424: INFO: At 2019-12-24 14:54:54 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 24 15:00:17.424: INFO: At 2019-12-24 14:54:56 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 24 15:00:17.424: INFO: At 2019-12-24 14:54:57 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 24 15:00:17.424: INFO: At 2019-12-24 14:54:58 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 24 15:00:17.424: INFO: At 2019-12-24 14:54:58 +0000 UTC - event for test-pod: {kubelet iruya-node} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
Dec 24 15:00:17.424: INFO: At 2019-12-24 14:55:00 +0000 UTC - event for test-pod: {kubelet iruya-node} Created: Created container nginx
Dec 24 15:00:17.424: INFO: At 2019-12-24 14:55:01 +0000 UTC - event for test-pod: {kubelet iruya-node} Started: Started container nginx
Dec 24 15:00:17.431: INFO: POD       NODE        PHASE    GRACE  CONDITIONS
Dec 24 15:00:17.431: INFO: test-pod  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:54:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:55:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:55:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-24 14:54:52 +0000 UTC  }]
Dec 24 15:00:17.431: INFO: 
Dec 24 15:00:17.449: INFO: 
Logging node info for node iruya-node
Dec 24 15:00:17.460: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-node,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-node,UID:b2aa273d-23ea-4c86-9e2f-72569e3392bd,ResourceVersion:17906390,Generation:0,CreationTimestamp:2019-08-04 09:01:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-node,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-10-12 11:56:49 +0000 UTC 2019-10-12 11:56:49 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2019-12-24 14:59:17 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-12-24 14:59:17 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-12-24 14:59:17 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-12-24 14:59:17 +0000 UTC 2019-08-04 09:02:19 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.3.65} {Hostname iruya-node}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f573dcf04d6f4a87856a35d266a2fa7a,SystemUUID:F573DCF0-4D6F-4A87-856A-35D266A2FA7A,BootID:8baf4beb-8391-43e6-b17b-b1e184b5370a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 11443478} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} 
{[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest] 5496756} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e busybox:latest] 1219782} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Dec 24 15:00:17.461: INFO: 
Logging kubelet events for node iruya-node
Dec 24 15:00:17.471: INFO: 
Logging pods the kubelet thinks are on node iruya-node

Dec 24 15:00:17.511: INFO: test-pod started at 2019-12-24 14:54:52 +0000 UTC (0+1 container statuses recorded)
Dec 24 15:00:17.511: INFO: 	Container nginx ready: true, restart count 0
Dec 24 15:00:17.511: INFO: kube-proxy-976zl started at 2019-08-04 09:01:39 +0000 UTC (0+1 container statuses recorded)
Dec 24 15:00:17.511: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 24 15:00:17.511: INFO: weave-net-rlp57 started at 2019-10-12 11:56:39 +0000 UTC (0+2 container statuses recorded)
Dec 24 15:00:17.511: INFO: 	Container weave ready: true, restart count 0
Dec 24 15:00:17.511: INFO: 	Container weave-npc ready: true, restart count 0
W1224 15:00:17.523080       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 15:00:17.614: INFO: 
Latency metrics for node iruya-node
Dec 24 15:00:17.614: INFO: 
Logging node info for node iruya-server-sfge57q7djm7
Dec 24 15:00:17.647: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-server-sfge57q7djm7,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-server-sfge57q7djm7,UID:67f2a658-4743-4118-95e7-463a23bcd212,ResourceVersion:17906411,Generation:0,CreationTimestamp:2019-08-04 08:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-server-sfge57q7djm7,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:53:00 +0000 UTC 2019-08-04 08:53:00 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2019-12-24 14:59:33 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-12-24 14:59:33 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-12-24 14:59:33 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-12-24 14:59:33 +0000 UTC 2019-08-04 08:53:09 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.2.216} {Hostname iruya-server-sfge57q7djm7}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78bacef342604a51913cae58dd95802b,SystemUUID:78BACEF3-4260-4A51-913C-AE58DD95802B,BootID:db143d3a-01b3-4483-b23e-e72adff2b28d,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:304a1c38707834062ee87df62ef329d52a8b9a3e70459565d0a396479073f54c k8s.gcr.io/kube-apiserver:v1.15.1] 206827454} {[k8s.gcr.io/kube-controller-manager@sha256:9abae95e428e228fe8f6d1630d55e79e018037460f3731312805c0f37471e4bf k8s.gcr.io/kube-controller-manager:v1.15.1] 158722622} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[k8s.gcr.io/kube-scheduler@sha256:d0ee18a9593013fbc44b1920e4930f29b664b59a3958749763cb33b57e0e8956 k8s.gcr.io/kube-scheduler:v1.15.1] 81107582} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} 
{[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Dec 24 15:00:17.648: INFO: 
Logging kubelet events for node iruya-server-sfge57q7djm7
Dec 24 15:00:17.653: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7
Dec 24 15:00:17.682: INFO: kube-apiserver-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:39 +0000 UTC (0+1 container statuses recorded)
Dec 24 15:00:17.682: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 24 15:00:17.682: INFO: kube-scheduler-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:43 +0000 UTC (0+1 container statuses recorded)
Dec 24 15:00:17.682: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 24 15:00:17.682: INFO: coredns-5c98db65d4-xx8w8 started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Dec 24 15:00:17.682: INFO: 	Container coredns ready: true, restart count 0
Dec 24 15:00:17.682: INFO: etcd-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:38 +0000 UTC (0+1 container statuses recorded)
Dec 24 15:00:17.682: INFO: 	Container etcd ready: true, restart count 0
Dec 24 15:00:17.682: INFO: weave-net-bzl4d started at 2019-08-04 08:52:37 +0000 UTC (0+2 container statuses recorded)
Dec 24 15:00:17.682: INFO: 	Container weave ready: true, restart count 0
Dec 24 15:00:17.682: INFO: 	Container weave-npc ready: true, restart count 0
Dec 24 15:00:17.682: INFO: coredns-5c98db65d4-bm4gs started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Dec 24 15:00:17.682: INFO: 	Container coredns ready: true, restart count 0
Dec 24 15:00:17.682: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:42 +0000 UTC (0+1 container statuses recorded)
Dec 24 15:00:17.682: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 24 15:00:17.682: INFO: kube-proxy-58v95 started at 2019-08-04 08:52:37 +0000 UTC (0+1 container statuses recorded)
Dec 24 15:00:17.682: INFO: 	Container kube-proxy ready: true, restart count 0
W1224 15:00:17.691625       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 24 15:00:17.735: INFO: 
Latency metrics for node iruya-server-sfge57q7djm7
Dec 24 15:00:17.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2570" for this suite.
Dec 24 15:00:39.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:00:39.969: INFO: namespace statefulset-2570 deletion completed in 22.227645576s

• Failure [347.762 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

    Dec 24 15:00:04.593: Pod ss-0 expected to be re-created at least once

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769
------------------------------
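This failure is the one real defect this run surfaces, and it is worth unpacking. The spec deliberately creates test-pod holding host port 21017 on iruya-node, then a StatefulSet whose pod template requests the same host port; the kubelet therefore rejects ss-0, which is what the repeated PodFitsHostPorts events record. The suite then waits up to 5m to watch the controller delete and re-create the failed pod, but only ever observed ss-0 in Pending, hence "Pod ss-0 expected to be re-created at least once". Reconstructed from the kubectl describe output above, the conflicting shape is (test-pod's manifest; the StatefulSet template binds the identical hostPort):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 21017
      hostPort: 21017

Because hostPort 21017 is already held by test-pod, any ss-0 placed on iruya-node fails the PodFitsHostPorts predicate, exactly as the events show.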
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:00:39.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Dec 24 15:00:40.044: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 24 15:00:40.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5986'
Dec 24 15:00:40.534: INFO: stderr: ""
Dec 24 15:00:40.534: INFO: stdout: "service/redis-slave created\n"
Dec 24 15:00:40.535: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 24 15:00:40.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5986'
Dec 24 15:00:41.003: INFO: stderr: ""
Dec 24 15:00:41.003: INFO: stdout: "service/redis-master created\n"
Dec 24 15:00:41.004: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 24 15:00:41.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5986'
Dec 24 15:00:41.424: INFO: stderr: ""
Dec 24 15:00:41.425: INFO: stdout: "service/frontend created\n"
Dec 24 15:00:41.426: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 24 15:00:41.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5986'
Dec 24 15:00:41.705: INFO: stderr: ""
Dec 24 15:00:41.705: INFO: stdout: "deployment.apps/frontend created\n"
Dec 24 15:00:41.705: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 24 15:00:41.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5986'
Dec 24 15:00:42.487: INFO: stderr: ""
Dec 24 15:00:42.488: INFO: stdout: "deployment.apps/redis-master created\n"
Dec 24 15:00:42.488: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 24 15:00:42.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5986'
Dec 24 15:00:43.721: INFO: stderr: ""
Dec 24 15:00:43.721: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Dec 24 15:00:43.721: INFO: Waiting for all frontend pods to be Running.
Dec 24 15:01:08.774: INFO: Waiting for frontend to serve content.
Dec 24 15:01:08.944: INFO: Trying to add a new entry to the guestbook.
Dec 24 15:01:08.968: INFO: Verifying that added entry can be retrieved.
Dec 24 15:01:10.547: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Dec 24 15:01:15.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5986'
Dec 24 15:01:16.027: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 15:01:16.027: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 24 15:01:16.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5986'
Dec 24 15:01:16.397: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 15:01:16.397: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 24 15:01:16.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5986'
Dec 24 15:01:16.587: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 15:01:16.587: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 24 15:01:16.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5986'
Dec 24 15:01:16.718: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 15:01:16.718: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 24 15:01:16.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5986'
Dec 24 15:01:16.911: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 15:01:16.911: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 24 15:01:16.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5986'
Dec 24 15:01:17.291: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 24 15:01:17.291: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:01:17.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5986" for this suite.
Dec 24 15:02:01.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:02:01.629: INFO: namespace kubectl-5986 deletion completed in 44.161613497s

• [SLOW TEST:81.659 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
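The validation phase above ("Waiting for frontend to serve content", "Trying to add a new entry") polls the frontend service; the {"data": ""} response on the first read is the guestbook's empty-state payload, and the spec retries until the written entry comes back. One hedged way to poke the same endpoint by hand from inside the cluster (the checker pod and the guestbook.php query string are assumptions based on the gb-frontend sample app, not taken from the suite):

kubectl run guestbook-check --image=busybox:1.29 --restart=Never --namespace=kubectl-5986 -- \
  wget -qO- 'http://frontend/guestbook.php?cmd=get&key=messages'
kubectl logs guestbook-check --namespace=kubectl-5986

A healthy guestbook returns the same {"data": ...} JSON the suite logs.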
SSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:02:01.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 24 15:02:21.948: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2810 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 15:02:21.948: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 15:02:22.405: INFO: Exec stderr: ""
Dec 24 15:02:22.406: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2810 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 15:02:22.406: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 15:02:22.831: INFO: Exec stderr: ""
Dec 24 15:02:22.831: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2810 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 15:02:22.831: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 15:02:23.157: INFO: Exec stderr: ""
Dec 24 15:02:23.157: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2810 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 15:02:23.157: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 15:02:23.553: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 24 15:02:23.553: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2810 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 15:02:23.554: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 15:02:24.021: INFO: Exec stderr: ""
Dec 24 15:02:24.022: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2810 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 15:02:24.022: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 15:02:24.535: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 24 15:02:24.536: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2810 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 15:02:24.536: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 15:02:24.988: INFO: Exec stderr: ""
Dec 24 15:02:24.988: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2810 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 15:02:24.988: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 15:02:25.229: INFO: Exec stderr: ""
Dec 24 15:02:25.229: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2810 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 15:02:25.229: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 15:02:25.518: INFO: Exec stderr: ""
Dec 24 15:02:25.518: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2810 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 24 15:02:25.518: INFO: >>> kubeConfig: /root/.kube/config
Dec 24 15:02:25.916: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:02:25.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2810" for this suite.
Dec 24 15:03:17.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:03:18.070: INFO: namespace e2e-kubelet-etc-hosts-2810 deletion completed in 52.141755476s

• [SLOW TEST:76.441 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
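
The spec above verifies that the kubelet rewrites /etc/hosts only when it owns the file: it does for ordinary pods (busybox-1/busybox-2), but not when the pod uses hostNetwork=true or when the container mounts its own file over /etc/hosts (busybox-3). As an illustration of that last case, a minimal Go sketch of such a pod follows; the busybox image and the mount path come from the log, while the pod/volume names and command are assumed — this is not the suite's own fixture:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // A container that mounts its own file at /etc/hosts; the kubelet will
        // not overwrite a file the pod supplies itself (the busybox-3 case).
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-demo"},
            Spec: corev1.PodSpec{
                HostNetwork: false, // with hostNetwork=true the kubelet is also hands-off
                Volumes: []corev1.Volume{{
                    Name: "hosts-file",
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "busybox-3",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sleep", "3600"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "hosts-file",
                        MountPath: "/etc/hosts",
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
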
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:03:18.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-995df333-6b70-427b-b76b-6ac5da05b5e3
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:03:18.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2970" for this suite.
Dec 24 15:03:24.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:03:24.328: INFO: namespace secrets-2970 deletion completed in 6.131883323s

• [SLOW TEST:6.257 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
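
No pod is needed for this spec: it creates a Secret whose data map uses the empty string as a key and expects the API server to reject it at validation time (data keys must be non-empty names made of alphanumerics, '-', '_', and '.'). A hedged sketch of such an object, with an assumed payload:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The empty key "" is invalid; a create call with this object fails
        // server-side validation, which is exactly what the spec asserts.
        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
            Data:       map[string][]byte{"": []byte("value-1")},
        }
        for k := range secret.Data {
            fmt.Printf("key=%q: empty keys fail server-side validation\n", k)
        }
    }
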
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:03:24.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Dec 24 15:03:24.442: INFO: Waiting up to 5m0s for pod "client-containers-d0fb40b3-a597-4b07-8a10-d49850f1b5f1" in namespace "containers-9259" to be "success or failure"
Dec 24 15:03:24.518: INFO: Pod "client-containers-d0fb40b3-a597-4b07-8a10-d49850f1b5f1": Phase="Pending", Reason="", readiness=false. Elapsed: 76.003009ms
Dec 24 15:03:26.541: INFO: Pod "client-containers-d0fb40b3-a597-4b07-8a10-d49850f1b5f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098414043s
Dec 24 15:03:28.585: INFO: Pod "client-containers-d0fb40b3-a597-4b07-8a10-d49850f1b5f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142615523s
Dec 24 15:03:30.592: INFO: Pod "client-containers-d0fb40b3-a597-4b07-8a10-d49850f1b5f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149524093s
Dec 24 15:03:32.616: INFO: Pod "client-containers-d0fb40b3-a597-4b07-8a10-d49850f1b5f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.173773297s
STEP: Saw pod success
Dec 24 15:03:32.616: INFO: Pod "client-containers-d0fb40b3-a597-4b07-8a10-d49850f1b5f1" satisfied condition "success or failure"
Dec 24 15:03:32.626: INFO: Trying to get logs from node iruya-node pod client-containers-d0fb40b3-a597-4b07-8a10-d49850f1b5f1 container test-container: 
STEP: delete the pod
Dec 24 15:03:32.756: INFO: Waiting for pod client-containers-d0fb40b3-a597-4b07-8a10-d49850f1b5f1 to disappear
Dec 24 15:03:32.761: INFO: Pod client-containers-d0fb40b3-a597-4b07-8a10-d49850f1b5f1 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:03:32.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9259" for this suite.
Dec 24 15:03:38.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:03:38.951: INFO: namespace containers-9259 deletion completed in 6.184661409s

• [SLOW TEST:14.623 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
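
"Blank" command and args means the container spec simply omits both fields, so the runtime falls back to the image's own ENTRYPOINT and CMD. A minimal sketch; the container name comes from the log, the image is a placeholder:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // No Command and no Args: the runtime uses the image's ENTRYPOINT/CMD,
        // which is what "use the image defaults" means.
        c := corev1.Container{
            Name:  "test-container",
            Image: "example.invalid/some-image:tag", // hypothetical image
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }
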
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:03:38.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Dec 24 15:03:47.655: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-666 pod-service-account-84350780-c1ee-4aad-a993-a8109be0192c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Dec 24 15:03:48.259: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-666 pod-service-account-84350780-c1ee-4aad-a993-a8109be0192c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Dec 24 15:03:48.897: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-666 pod-service-account-84350780-c1ee-4aad-a993-a8109be0192c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:03:49.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-666" for this suite.
Dec 24 15:03:55.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:03:55.652: INFO: namespace svcaccounts-666 deletion completed in 6.226283448s

• [SLOW TEST:16.701 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
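
The three kubectl exec calls above read the standard files of the auto-mounted service-account volume. From inside any pod with the token mounted, the same check is a few lines of Go against the well-known paths shown in the log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        base := "/var/run/secrets/kubernetes.io/serviceaccount"
        for _, f := range []string{"token", "ca.crt", "namespace"} {
            b, err := os.ReadFile(filepath.Join(base, f))
            if err != nil {
                fmt.Printf("%s: %v\n", f, err)
                continue
            }
            fmt.Printf("%s: %d bytes\n", f, len(b))
        }
    }
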
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:03:55.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b8f848fa-9df0-421b-8298-c227d404ac6d
STEP: Creating a pod to test consume secrets
Dec 24 15:03:55.808: INFO: Waiting up to 5m0s for pod "pod-secrets-d9522525-ec34-4b93-b957-5c7341b426d4" in namespace "secrets-6841" to be "success or failure"
Dec 24 15:03:55.943: INFO: Pod "pod-secrets-d9522525-ec34-4b93-b957-5c7341b426d4": Phase="Pending", Reason="", readiness=false. Elapsed: 135.591993ms
Dec 24 15:03:57.952: INFO: Pod "pod-secrets-d9522525-ec34-4b93-b957-5c7341b426d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143995044s
Dec 24 15:03:59.968: INFO: Pod "pod-secrets-d9522525-ec34-4b93-b957-5c7341b426d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160044891s
Dec 24 15:04:01.975: INFO: Pod "pod-secrets-d9522525-ec34-4b93-b957-5c7341b426d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166695383s
Dec 24 15:04:04.006: INFO: Pod "pod-secrets-d9522525-ec34-4b93-b957-5c7341b426d4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197975147s
Dec 24 15:04:06.019: INFO: Pod "pod-secrets-d9522525-ec34-4b93-b957-5c7341b426d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.210749002s
STEP: Saw pod success
Dec 24 15:04:06.019: INFO: Pod "pod-secrets-d9522525-ec34-4b93-b957-5c7341b426d4" satisfied condition "success or failure"
Dec 24 15:04:06.025: INFO: Trying to get logs from node iruya-node pod pod-secrets-d9522525-ec34-4b93-b957-5c7341b426d4 container secret-volume-test: 
STEP: delete the pod
Dec 24 15:04:06.073: INFO: Waiting for pod pod-secrets-d9522525-ec34-4b93-b957-5c7341b426d4 to disappear
Dec 24 15:04:06.081: INFO: Pod pod-secrets-d9522525-ec34-4b93-b957-5c7341b426d4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:04:06.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6841" for this suite.
Dec 24 15:04:12.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:04:12.267: INFO: namespace secrets-6841 deletion completed in 6.113058072s
STEP: Destroying namespace "secret-namespace-3519" for this suite.
Dec 24 15:04:18.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:04:18.468: INFO: namespace secret-namespace-3519 deletion completed in 6.199996839s

• [SLOW TEST:22.815 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
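
Secret volume references are resolved in the pod's own namespace only, so the same-named secret created in "secret-namespace-3519" can never be mounted by a pod in "secrets-6841". A rough sketch of the mounting pod (the container name is from the log; the secret name, mount path, and image are assumed):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "pod-secrets-demo",
                Namespace: "secrets-6841", // the secret name below resolves here, never in another namespace
            },
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "secret-volume-test",
                    Image: "docker.io/library/busybox:1.29",
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                        ReadOnly:  true,
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
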
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:04:18.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:04:26.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4966" for this suite.
Dec 24 15:05:18.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:05:18.810: INFO: namespace kubelet-test-4966 deletion completed in 52.123042604s

• [SLOW TEST:60.340 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
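
The read-only case boils down to one field: SecurityContext.ReadOnlyRootFilesystem on the container, after which any write to the root filesystem fails. A minimal sketch, with assumed names and command:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        readOnly := true
        // With ReadOnlyRootFilesystem set, the write below fails at runtime,
        // which is the behavior the spec asserts.
        c := corev1.Container{
            Name:            "busybox-readonly",
            Image:           "docker.io/library/busybox:1.29",
            Command:         []string{"sh", "-c", "echo hi > /file; sleep 3600"},
            SecurityContext: &corev1.SecurityContext{ReadOnlyRootFilesystem: &readOnly},
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }
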
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:05:18.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 24 15:05:18.971: INFO: Number of nodes with available pods: 0
Dec 24 15:05:18.971: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:05:19.996: INFO: Number of nodes with available pods: 0
Dec 24 15:05:19.996: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:05:21.220: INFO: Number of nodes with available pods: 0
Dec 24 15:05:21.220: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:05:21.991: INFO: Number of nodes with available pods: 0
Dec 24 15:05:21.991: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:05:23.004: INFO: Number of nodes with available pods: 0
Dec 24 15:05:23.004: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:05:25.552: INFO: Number of nodes with available pods: 0
Dec 24 15:05:25.552: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:05:26.083: INFO: Number of nodes with available pods: 0
Dec 24 15:05:26.083: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:05:27.571: INFO: Number of nodes with available pods: 0
Dec 24 15:05:27.571: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:05:28.016: INFO: Number of nodes with available pods: 0
Dec 24 15:05:28.016: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:05:28.987: INFO: Number of nodes with available pods: 1
Dec 24 15:05:28.987: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:05:29.994: INFO: Number of nodes with available pods: 2
Dec 24 15:05:29.994: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 24 15:05:30.049: INFO: Number of nodes with available pods: 1
Dec 24 15:05:30.049: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:31.061: INFO: Number of nodes with available pods: 1
Dec 24 15:05:31.061: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:32.099: INFO: Number of nodes with available pods: 1
Dec 24 15:05:32.099: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:33.182: INFO: Number of nodes with available pods: 1
Dec 24 15:05:33.182: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:34.079: INFO: Number of nodes with available pods: 1
Dec 24 15:05:34.079: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:35.065: INFO: Number of nodes with available pods: 1
Dec 24 15:05:35.065: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:36.109: INFO: Number of nodes with available pods: 1
Dec 24 15:05:36.109: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:37.064: INFO: Number of nodes with available pods: 1
Dec 24 15:05:37.064: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:38.099: INFO: Number of nodes with available pods: 1
Dec 24 15:05:38.099: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:39.061: INFO: Number of nodes with available pods: 1
Dec 24 15:05:39.061: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:40.441: INFO: Number of nodes with available pods: 1
Dec 24 15:05:40.441: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:41.098: INFO: Number of nodes with available pods: 1
Dec 24 15:05:41.098: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:42.177: INFO: Number of nodes with available pods: 1
Dec 24 15:05:42.177: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:43.068: INFO: Number of nodes with available pods: 1
Dec 24 15:05:43.068: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 24 15:05:44.061: INFO: Number of nodes with available pods: 2
Dec 24 15:05:44.061: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5710, will wait for the garbage collector to delete the pods
Dec 24 15:05:44.125: INFO: Deleting DaemonSet.extensions daemon-set took: 7.96183ms
Dec 24 15:05:44.425: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.474105ms
Dec 24 15:05:57.934: INFO: Number of nodes with available pods: 0
Dec 24 15:05:57.934: INFO: Number of running nodes: 0, number of available pods: 0
Dec 24 15:05:57.943: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5710/daemonsets","resourceVersion":"17907381"},"items":null}

Dec 24 15:05:57.946: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5710/pods","resourceVersion":"17907381"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:05:57.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5710" for this suite.
Dec 24 15:06:03.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:06:04.128: INFO: namespace daemonsets-5710 deletion completed in 6.166736572s

• [SLOW TEST:45.318 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
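
A "simple daemon" here is a plain apps/v1 DaemonSet: the controller keeps one pod per schedulable node and revives pods that are deleted, which is the loop the availability polling above is waiting on. A minimal sketch (the daemon-set name is from the log; labels, container, and image are assumed):

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        labels := map[string]string{"daemonset-name": "daemon-set"}
        ds := &appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "app",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(ds, "", "  ")
        fmt.Println(string(out))
    }
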
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:06:04.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 24 15:06:04.275: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c431d63-44a5-4459-a27e-93ea982e75bb" in namespace "projected-1595" to be "success or failure"
Dec 24 15:06:04.377: INFO: Pod "downwardapi-volume-8c431d63-44a5-4459-a27e-93ea982e75bb": Phase="Pending", Reason="", readiness=false. Elapsed: 101.105474ms
Dec 24 15:06:06.389: INFO: Pod "downwardapi-volume-8c431d63-44a5-4459-a27e-93ea982e75bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113541865s
Dec 24 15:06:08.416: INFO: Pod "downwardapi-volume-8c431d63-44a5-4459-a27e-93ea982e75bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140162951s
Dec 24 15:06:10.427: INFO: Pod "downwardapi-volume-8c431d63-44a5-4459-a27e-93ea982e75bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151089557s
Dec 24 15:06:12.439: INFO: Pod "downwardapi-volume-8c431d63-44a5-4459-a27e-93ea982e75bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163633488s
Dec 24 15:06:14.452: INFO: Pod "downwardapi-volume-8c431d63-44a5-4459-a27e-93ea982e75bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.176430823s
STEP: Saw pod success
Dec 24 15:06:14.452: INFO: Pod "downwardapi-volume-8c431d63-44a5-4459-a27e-93ea982e75bb" satisfied condition "success or failure"
Dec 24 15:06:14.456: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8c431d63-44a5-4459-a27e-93ea982e75bb container client-container: 
STEP: delete the pod
Dec 24 15:06:14.569: INFO: Waiting for pod downwardapi-volume-8c431d63-44a5-4459-a27e-93ea982e75bb to disappear
Dec 24 15:06:14.576: INFO: Pod downwardapi-volume-8c431d63-44a5-4459-a27e-93ea982e75bb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:06:14.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1595" for this suite.
Dec 24 15:06:20.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:06:20.726: INFO: namespace projected-1595 deletion completed in 6.138814234s

• [SLOW TEST:16.598 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
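
The volume in this spec projects limits.cpu through a resourceFieldRef; because the container declares no CPU limit, the kubelet substitutes the node's allocatable CPU into the file, which is what the test reads back. A sketch of such a projected volume (the container name is from the log, the file path is assumed):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.cpu",
                                },
                            }},
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }
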
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:06:20.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-a40fd92a-2db3-4053-ac5a-16f1c770dc64
STEP: Creating a pod to test consume configMaps
Dec 24 15:06:20.889: INFO: Waiting up to 5m0s for pod "pod-configmaps-67bb13ba-8f09-4d35-a07f-efeaadfc93e4" in namespace "configmap-1043" to be "success or failure"
Dec 24 15:06:20.940: INFO: Pod "pod-configmaps-67bb13ba-8f09-4d35-a07f-efeaadfc93e4": Phase="Pending", Reason="", readiness=false. Elapsed: 50.429721ms
Dec 24 15:06:22.948: INFO: Pod "pod-configmaps-67bb13ba-8f09-4d35-a07f-efeaadfc93e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05848677s
Dec 24 15:06:24.957: INFO: Pod "pod-configmaps-67bb13ba-8f09-4d35-a07f-efeaadfc93e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067665405s
Dec 24 15:06:26.997: INFO: Pod "pod-configmaps-67bb13ba-8f09-4d35-a07f-efeaadfc93e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107305801s
Dec 24 15:06:29.012: INFO: Pod "pod-configmaps-67bb13ba-8f09-4d35-a07f-efeaadfc93e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122396459s
STEP: Saw pod success
Dec 24 15:06:29.012: INFO: Pod "pod-configmaps-67bb13ba-8f09-4d35-a07f-efeaadfc93e4" satisfied condition "success or failure"
Dec 24 15:06:29.017: INFO: Trying to get logs from node iruya-node pod pod-configmaps-67bb13ba-8f09-4d35-a07f-efeaadfc93e4 container configmap-volume-test: 
STEP: delete the pod
Dec 24 15:06:29.103: INFO: Waiting for pod pod-configmaps-67bb13ba-8f09-4d35-a07f-efeaadfc93e4 to disappear
Dec 24 15:06:29.128: INFO: Pod pod-configmaps-67bb13ba-8f09-4d35-a07f-efeaadfc93e4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:06:29.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1043" for this suite.
Dec 24 15:06:35.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:06:35.301: INFO: namespace configmap-1043 deletion completed in 6.166433702s

• [SLOW TEST:14.574 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
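
"With mappings" means the ConfigMap volume lists explicit key-to-path items instead of exposing every key under its own name. A sketch; the key and path here are assumed for illustration:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Only the listed keys are projected, each at its mapped path.
        vol := corev1.Volume{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "configmap-test-volume-map",
                    },
                    Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }
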
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:06:35.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Dec 24 15:06:35.935: INFO: created pod pod-service-account-defaultsa
Dec 24 15:06:35.935: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 24 15:06:36.001: INFO: created pod pod-service-account-mountsa
Dec 24 15:06:36.001: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 24 15:06:36.035: INFO: created pod pod-service-account-nomountsa
Dec 24 15:06:36.035: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 24 15:06:36.086: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 24 15:06:36.086: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 24 15:06:36.180: INFO: created pod pod-service-account-mountsa-mountspec
Dec 24 15:06:36.180: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 24 15:06:36.224: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 24 15:06:36.224: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 24 15:06:36.417: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 24 15:06:36.417: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 24 15:06:36.438: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 24 15:06:36.439: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 24 15:06:36.474: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 24 15:06:36.474: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:06:36.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5702" for this suite.
Dec 24 15:07:02.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:07:02.465: INFO: namespace svcaccounts-5702 deletion completed in 25.03161228s

• [SLOW TEST:27.164 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
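
The nine pods above enumerate the automount matrix: the pod-level automountServiceAccountToken field, when set, overrides the ServiceAccount's setting; otherwise the ServiceAccount's choice (or the default of mounting) applies. A sketch of one opted-out pod, with assumed names:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        off := false
        // With the pod-level field false, no token volume is mounted regardless
        // of what the referenced ServiceAccount requests.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-no-token-demo"},
            Spec: corev1.PodSpec{
                ServiceAccountName:           "default",
                AutomountServiceAccountToken: &off,
                Containers: []corev1.Container{{
                    Name:  "token-test",
                    Image: "docker.io/library/busybox:1.29",
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
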
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:07:02.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 24 15:07:11.201: INFO: Successfully updated pod "annotationupdateac45a6f5-dc71-41d6-9b73-e435ccca96ba"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:07:15.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9876" for this suite.
Dec 24 15:07:39.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:07:39.537: INFO: namespace projected-9876 deletion completed in 24.207422787s

• [SLOW TEST:37.072 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
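
The pod mounts a projected downward-API file backed by metadata.annotations; when the test updates the pod's annotations, the kubelet rewrites the file and the running container sees the change without a restart. A sketch of such a volume (file path assumed):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "annotations",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                            }},
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }
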
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:07:39.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Dec 24 15:07:39.651: INFO: Waiting up to 5m0s for pod "client-containers-4ab29269-a5b2-41b3-bf9b-29f9141608e7" in namespace "containers-258" to be "success or failure"
Dec 24 15:07:39.661: INFO: Pod "client-containers-4ab29269-a5b2-41b3-bf9b-29f9141608e7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.020069ms
Dec 24 15:07:41.676: INFO: Pod "client-containers-4ab29269-a5b2-41b3-bf9b-29f9141608e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024254695s
Dec 24 15:07:43.684: INFO: Pod "client-containers-4ab29269-a5b2-41b3-bf9b-29f9141608e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032415668s
Dec 24 15:07:45.694: INFO: Pod "client-containers-4ab29269-a5b2-41b3-bf9b-29f9141608e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042523725s
Dec 24 15:07:47.701: INFO: Pod "client-containers-4ab29269-a5b2-41b3-bf9b-29f9141608e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048936244s
STEP: Saw pod success
Dec 24 15:07:47.701: INFO: Pod "client-containers-4ab29269-a5b2-41b3-bf9b-29f9141608e7" satisfied condition "success or failure"
Dec 24 15:07:47.703: INFO: Trying to get logs from node iruya-node pod client-containers-4ab29269-a5b2-41b3-bf9b-29f9141608e7 container test-container: 
STEP: delete the pod
Dec 24 15:07:47.791: INFO: Waiting for pod client-containers-4ab29269-a5b2-41b3-bf9b-29f9141608e7 to disappear
Dec 24 15:07:47.834: INFO: Pod client-containers-4ab29269-a5b2-41b3-bf9b-29f9141608e7 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:07:47.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-258" for this suite.
Dec 24 15:07:53.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:07:54.107: INFO: namespace containers-258 deletion completed in 6.266186325s

• [SLOW TEST:14.569 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
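
Overriding the entrypoint is the mirror image of the "image defaults" spec earlier in this log: setting the container's command field replaces the image's ENTRYPOINT (args would replace CMD). A minimal sketch with a placeholder image and an assumed command:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Command replaces the image's ENTRYPOINT entirely; the image default
        // is ignored, which is what the spec verifies.
        c := corev1.Container{
            Name:    "test-container",
            Image:   "example.invalid/some-image:tag", // hypothetical image
            Command: []string{"/bin/echo", "override", "entrypoint"},
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }
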
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:07:54.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 24 15:07:54.222: INFO: Create a RollingUpdate DaemonSet
Dec 24 15:07:54.230: INFO: Check that daemon pods launch on every node of the cluster
Dec 24 15:07:54.247: INFO: Number of nodes with available pods: 0
Dec 24 15:07:54.247: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:07:55.373: INFO: Number of nodes with available pods: 0
Dec 24 15:07:55.373: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:07:56.742: INFO: Number of nodes with available pods: 0
Dec 24 15:07:56.742: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:07:57.261: INFO: Number of nodes with available pods: 0
Dec 24 15:07:57.261: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:07:58.275: INFO: Number of nodes with available pods: 0
Dec 24 15:07:58.275: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:08:00.988: INFO: Number of nodes with available pods: 0
Dec 24 15:08:00.988: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:08:01.481: INFO: Number of nodes with available pods: 0
Dec 24 15:08:01.481: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:08:02.264: INFO: Number of nodes with available pods: 0
Dec 24 15:08:02.264: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:08:03.342: INFO: Number of nodes with available pods: 0
Dec 24 15:08:03.342: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:08:04.274: INFO: Number of nodes with available pods: 0
Dec 24 15:08:04.274: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:08:05.271: INFO: Number of nodes with available pods: 1
Dec 24 15:08:05.271: INFO: Node iruya-node is running more than one daemon pod
Dec 24 15:08:06.259: INFO: Number of nodes with available pods: 2
Dec 24 15:08:06.259: INFO: Number of running nodes: 2, number of available pods: 2
Dec 24 15:08:06.259: INFO: Update the DaemonSet to trigger a rollout
Dec 24 15:08:06.273: INFO: Updating DaemonSet daemon-set
Dec 24 15:08:18.323: INFO: Roll back the DaemonSet before rollout is complete
Dec 24 15:08:18.341: INFO: Updating DaemonSet daemon-set
Dec 24 15:08:18.341: INFO: Make sure DaemonSet rollback is complete
Dec 24 15:08:18.351: INFO: Wrong image for pod: daemon-set-xzr9d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 24 15:08:18.351: INFO: Pod daemon-set-xzr9d is not available
Dec 24 15:08:20.144: INFO: Wrong image for pod: daemon-set-xzr9d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 24 15:08:20.145: INFO: Pod daemon-set-xzr9d is not available
Dec 24 15:08:21.035: INFO: Wrong image for pod: daemon-set-xzr9d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 24 15:08:21.035: INFO: Pod daemon-set-xzr9d is not available
Dec 24 15:08:22.038: INFO: Wrong image for pod: daemon-set-xzr9d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 24 15:08:22.038: INFO: Pod daemon-set-xzr9d is not available
Dec 24 15:08:23.031: INFO: Pod daemon-set-h7nsp is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4336, will wait for the garbage collector to delete the pods
Dec 24 15:08:23.138: INFO: Deleting DaemonSet.extensions daemon-set took: 39.514952ms
Dec 24 15:08:23.838: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.763288ms
Dec 24 15:08:30.460: INFO: Number of nodes with available pods: 0
Dec 24 15:08:30.460: INFO: Number of running nodes: 0, number of available pods: 0
Dec 24 15:08:30.465: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4336/daemonsets","resourceVersion":"17907868"},"items":null}

Dec 24 15:08:30.467: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4336/pods","resourceVersion":"17907868"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:08:30.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4336" for this suite.
Dec 24 15:08:36.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:08:36.686: INFO: namespace daemonsets-4336 deletion completed in 6.201611326s

• [SLOW TEST:42.579 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
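
The rollout and rollback are just two edits to the DaemonSet's pod template, made through the apps/v1 API before the first edit finishes rolling; nodes still running the old image keep their pods, hence "without unnecessary restarts". An in-memory sketch of the two mutations the log records (no API client; illustrative only):

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        ds := &appsv1.DaemonSet{}
        ds.Spec.Template.Spec.Containers = []corev1.Container{{
            Name: "app", Image: "docker.io/library/nginx:1.14-alpine",
        }}

        // Trigger a RollingUpdate rollout with an image that can never pull...
        ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
        fmt.Println("rolled forward to:", ds.Spec.Template.Spec.Containers[0].Image)

        // ...then roll back before it completes. Pods still running the old
        // image are left untouched rather than restarted.
        ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.14-alpine"
        fmt.Println("rolled back to:", ds.Spec.Template.Spec.Containers[0].Image)
    }
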
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:08:36.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8324.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8324.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 24 15:08:48.863: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-8324/dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3: the server could not find the requested resource (get pods dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3)
Dec 24 15:08:48.881: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-8324/dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3: the server could not find the requested resource (get pods dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3)
Dec 24 15:08:48.891: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8324/dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3: the server could not find the requested resource (get pods dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3)
Dec 24 15:08:48.899: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8324/dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3: the server could not find the requested resource (get pods dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3)
Dec 24 15:08:48.905: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-8324/dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3: the server could not find the requested resource (get pods dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3)
Dec 24 15:08:48.912: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-8324/dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3: the server could not find the requested resource (get pods dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3)
Dec 24 15:08:48.919: INFO: Unable to read jessie_udp@PodARecord from pod dns-8324/dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3: the server could not find the requested resource (get pods dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3)
Dec 24 15:08:48.923: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8324/dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3: the server could not find the requested resource (get pods dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3)
Dec 24 15:08:48.923: INFO: Lookups using dns-8324/dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 24 15:08:54.046: INFO: DNS probes using dns-8324/dns-test-604c599f-1bfc-46bd-b425-40efb9a286d3 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:08:54.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8324" for this suite.
Dec 24 15:09:00.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:09:00.347: INFO: namespace dns-8324 deletion completed in 6.193382534s

• [SLOW TEST:23.662 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
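
The wheezy/jessie probes dig the same name over UDP and TCP until both answer. Inside any cluster pod, the equivalent check in Go is a single resolver call through cluster DNS:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Inside a cluster pod this resolves the API server's ClusterIP via
        // cluster DNS; the probes above do the same with dig over UDP and TCP.
        addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("resolved:", addrs)
    }
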
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:09:00.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Dec 24 15:09:00.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4875 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 24 15:09:09.449: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 24 15:09:09.449: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:09:11.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4875" for this suite.
Dec 24 15:09:17.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:09:17.609: INFO: namespace kubectl-4875 deletion completed in 6.146028512s

• [SLOW TEST:17.262 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
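
The deprecated --generator=job/v1 path expanded that kubectl run invocation into a batch/v1 Job with an attached, stdin-enabled container; --rm then deleted the Job once the attach session ended. A rough sketch of such a Job object (field values inferred from the command line above; not kubectl's exact output):

    package main

    import (
        "encoding/json"
        "fmt"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        job := &batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
            Spec: batchv1.JobSpec{
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{
                        RestartPolicy: corev1.RestartPolicyOnFailure,
                        Containers: []corev1.Container{{
                            Name:    "e2e-test-rm-busybox-job",
                            Image:   "docker.io/library/busybox:1.29",
                            Command: []string{"sh", "-c", "cat && echo 'stdin closed'"},
                            Stdin:   true, // lets kubectl attach feed "abcd1234" as in the log
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(job, "", "  ")
        fmt.Println(string(out))
    }
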
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:09:17.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 24 15:09:25.867: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:09:25.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3026" for this suite.
Dec 24 15:09:32.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:09:32.807: INFO: namespace container-runtime-3026 deletion completed in 6.858771922s

• [SLOW TEST:15.198 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
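
Two container fields drive this spec: terminationMessagePath points the kubelet at a non-default file, and runAsUser makes a non-root user the writer of it; whatever the container writes there surfaces as the terminated state's message ("DONE" above). A sketch with assumed names and path:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        uid := int64(1000) // any non-root UID
        // The container writes DONE to a non-default termination-message path;
        // the kubelet copies that file into the terminated container status.
        c := corev1.Container{
            Name:                   "termination-message-container",
            Image:                  "docker.io/library/busybox:1.29",
            Command:                []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
            TerminationMessagePath: "/dev/termination-custom-log",
            SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }
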
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:09:32.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-9987
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9987 to expose endpoints map[]
Dec 24 15:09:32.992: INFO: Get endpoints failed (7.809558ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 24 15:09:34.023: INFO: successfully validated that service endpoint-test2 in namespace services-9987 exposes endpoints map[] (1.038912839s elapsed)
STEP: Creating pod pod1 in namespace services-9987
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9987 to expose endpoints map[pod1:[80]]
Dec 24 15:09:38.122: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.083746703s elapsed, will retry)
Dec 24 15:09:41.169: INFO: successfully validated that service endpoint-test2 in namespace services-9987 exposes endpoints map[pod1:[80]] (7.131573845s elapsed)
STEP: Creating pod pod2 in namespace services-9987
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9987 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 24 15:09:45.755: INFO: Unexpected endpoints: found map[9054f4ca-7df8-4d7b-865e-9b320bbf9a79:[80]], expected map[pod1:[80] pod2:[80]] (4.569310356s elapsed, will retry)
Dec 24 15:09:50.034: INFO: successfully validated that service endpoint-test2 in namespace services-9987 exposes endpoints map[pod1:[80] pod2:[80]] (8.848359755s elapsed)
STEP: Deleting pod pod1 in namespace services-9987
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9987 to expose endpoints map[pod2:[80]]
Dec 24 15:09:51.149: INFO: successfully validated that service endpoint-test2 in namespace services-9987 exposes endpoints map[pod2:[80]] (1.108681445s elapsed)
STEP: Deleting pod pod2 in namespace services-9987
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9987 to expose endpoints map[]
Dec 24 15:09:51.245: INFO: successfully validated that service endpoint-test2 in namespace services-9987 exposes endpoints map[] (74.482422ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:09:51.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9987" for this suite.
Dec 24 15:10:13.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:10:13.513: INFO: namespace services-9987 deletion completed in 22.195442209s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:40.705 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
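
The sequence above is the endpoints controller at work: a service's endpoints object starts empty, gains a ready pod's IP and port when a pod matching the selector appears, and shrinks again as pods are deleted. The same lifecycle can be observed by hand (namespace and names are illustrative); note that kubectl create service clusterip NAME sets the selector app=NAME:

  kubectl create namespace endpoint-demo
  kubectl create service clusterip endpoint-test2 --tcp=80:80 -n endpoint-demo
  kubectl get endpoints endpoint-test2 -n endpoint-demo        # <none> yet

  # A pod labeled to match the selector; once Ready it joins the endpoints.
  kubectl run pod1 --image=docker.io/library/nginx:1.14-alpine \
    --labels=app=endpoint-test2 --port=80 -n endpoint-demo
  kubectl get endpoints endpoint-test2 -n endpoint-demo -o wide

  # Deleting the pod drains it from the endpoints again.
  kubectl delete pod pod1 -n endpoint-demo
  kubectl get endpoints endpoint-test2 -n endpoint-demo
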
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:10:13.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 15:10:13.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-1759'
Dec 24 15:10:15.853: INFO: stderr: ""
Dec 24 15:10:15.853: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 24 15:10:25.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-1759 -o json'
Dec 24 15:10:26.100: INFO: stderr: ""
Dec 24 15:10:26.100: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-24T15:10:15Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-1759\",\n        \"resourceVersion\": \"17908205\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-1759/pods/e2e-test-nginx-pod\",\n        \"uid\": \"77c6a99a-aeb2-451e-a045-289cd0148704\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-6jqhg\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-6jqhg\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-6jqhg\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-24T15:10:15Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-24T15:10:23Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-24T15:10:23Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-24T15:10:15Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://28bc14f0e61b919622a1e594febd06f08b32e056163a13c961421848ba2fb47e\",\n                
\"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-24T15:10:22Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-24T15:10:15Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 24 15:10:26.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1759'
Dec 24 15:10:26.541: INFO: stderr: ""
Dec 24 15:10:26.541: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Dec 24 15:10:26.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1759'
Dec 24 15:10:33.611: INFO: stderr: ""
Dec 24 15:10:33.611: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:10:33.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1759" for this suite.
Dec 24 15:10:39.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:10:39.830: INFO: namespace kubectl-1759 deletion completed in 6.207443812s

• [SLOW TEST:26.316 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
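
The replace step above feeds a modified copy of the live pod manifest back through kubectl replace -f -; it works because a pod's container image is one of the few spec fields that is mutable in place. The same round trip as a pipeline (namespace reused from the run above; the sed edit is a quick illustration, not the test's mechanism):

  kubectl get pod e2e-test-nginx-pod -n kubectl-1759 -o json \
    | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
    | kubectl replace -n kubectl-1759 -f -

  # Verify the image was swapped:
  kubectl get pod e2e-test-nginx-pod -n kubectl-1759 \
    -o jsonpath='{.spec.containers[0].image}'
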
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:10:39.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-59148d8a-a0ee-486d-8884-ca9a6046d1fe
STEP: Creating a pod to test consume secrets
Dec 24 15:10:39.935: INFO: Waiting up to 5m0s for pod "pod-secrets-bb4bc604-80a7-4258-9e3c-d46ba641e3dc" in namespace "secrets-8605" to be "success or failure"
Dec 24 15:10:39.947: INFO: Pod "pod-secrets-bb4bc604-80a7-4258-9e3c-d46ba641e3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.871227ms
Dec 24 15:10:41.954: INFO: Pod "pod-secrets-bb4bc604-80a7-4258-9e3c-d46ba641e3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019295782s
Dec 24 15:10:43.972: INFO: Pod "pod-secrets-bb4bc604-80a7-4258-9e3c-d46ba641e3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037785887s
Dec 24 15:10:45.981: INFO: Pod "pod-secrets-bb4bc604-80a7-4258-9e3c-d46ba641e3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046474019s
Dec 24 15:10:48.003: INFO: Pod "pod-secrets-bb4bc604-80a7-4258-9e3c-d46ba641e3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068614094s
Dec 24 15:10:50.011: INFO: Pod "pod-secrets-bb4bc604-80a7-4258-9e3c-d46ba641e3dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076315162s
STEP: Saw pod success
Dec 24 15:10:50.011: INFO: Pod "pod-secrets-bb4bc604-80a7-4258-9e3c-d46ba641e3dc" satisfied condition "success or failure"
Dec 24 15:10:50.014: INFO: Trying to get logs from node iruya-node pod pod-secrets-bb4bc604-80a7-4258-9e3c-d46ba641e3dc container secret-volume-test: 
STEP: delete the pod
Dec 24 15:10:50.227: INFO: Waiting for pod pod-secrets-bb4bc604-80a7-4258-9e3c-d46ba641e3dc to disappear
Dec 24 15:10:50.246: INFO: Pod pod-secrets-bb4bc604-80a7-4258-9e3c-d46ba641e3dc no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:10:50.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8605" for this suite.
Dec 24 15:10:56.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:10:56.368: INFO: namespace secrets-8605 deletion completed in 6.11437032s

• [SLOW TEST:16.537 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
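
The spec above combines three knobs: the secret volume's defaultMode, the pod-level fsGroup (applied as group owner of the projected files), and a non-root runAsUser. A group-readable mode such as 0440 is what lets the non-root UID read the file through its fsGroup membership. A sketch (names, UID/GID, and mode are illustrative):

  kubectl create secret generic demo-secret --from-literal=data-1=value-1

  # secret-mode-demo.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mode-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000     # non-root
      fsGroup: 1000       # group ownership on the volume files
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      # Print mode/owner/group of the file, then prove it is readable.
      command: ["/bin/sh", "-c", "stat -c '%a %u %g' /etc/secret-volume/data-1 && cat /etc/secret-volume/data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        defaultMode: 0440   # r--r----- : owner root, group fsGroup

  kubectl apply -f secret-mode-demo.yaml
  kubectl logs secret-mode-demo    # expect something like: 440 0 1000
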
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:10:56.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 24 15:10:56.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7945'
Dec 24 15:10:56.693: INFO: stderr: ""
Dec 24 15:10:56.694: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 24 15:10:56.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7945'
Dec 24 15:11:03.116: INFO: stderr: ""
Dec 24 15:11:03.116: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:11:03.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7945" for this suite.
Dec 24 15:11:09.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:11:09.322: INFO: namespace kubectl-7945 deletion completed in 6.198505411s

• [SLOW TEST:12.954 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
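
The point of the spec above is the --restart=Never flag: it selects the run-pod/v1 generator, so kubectl run creates a bare Pod rather than a Deployment or Job (newer kubectl dropped the generator flag and only ever creates Pods from run). Sketch with an illustrative name:

  kubectl run demo-pod --restart=Never \
    --image=docker.io/library/nginx:1.14-alpine
  kubectl get pod demo-pod -o jsonpath='{.kind} {.metadata.name}'   # Pod demo-pod
  kubectl delete pod demo-pod
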
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:11:09.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 24 15:11:09.516: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5168,SelfLink:/api/v1/namespaces/watch-5168/configmaps/e2e-watch-test-label-changed,UID:7839e58f-2e69-4ffe-aa93-d4218d46e828,ResourceVersion:17908338,Generation:0,CreationTimestamp:2019-12-24 15:11:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 24 15:11:09.516: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5168,SelfLink:/api/v1/namespaces/watch-5168/configmaps/e2e-watch-test-label-changed,UID:7839e58f-2e69-4ffe-aa93-d4218d46e828,ResourceVersion:17908339,Generation:0,CreationTimestamp:2019-12-24 15:11:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 24 15:11:09.516: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5168,SelfLink:/api/v1/namespaces/watch-5168/configmaps/e2e-watch-test-label-changed,UID:7839e58f-2e69-4ffe-aa93-d4218d46e828,ResourceVersion:17908340,Generation:0,CreationTimestamp:2019-12-24 15:11:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 24 15:11:19.589: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5168,SelfLink:/api/v1/namespaces/watch-5168/configmaps/e2e-watch-test-label-changed,UID:7839e58f-2e69-4ffe-aa93-d4218d46e828,ResourceVersion:17908355,Generation:0,CreationTimestamp:2019-12-24 15:11:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 24 15:11:19.590: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5168,SelfLink:/api/v1/namespaces/watch-5168/configmaps/e2e-watch-test-label-changed,UID:7839e58f-2e69-4ffe-aa93-d4218d46e828,ResourceVersion:17908356,Generation:0,CreationTimestamp:2019-12-24 15:11:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 24 15:11:19.590: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5168,SelfLink:/api/v1/namespaces/watch-5168/configmaps/e2e-watch-test-label-changed,UID:7839e58f-2e69-4ffe-aa93-d4218d46e828,ResourceVersion:17908357,Generation:0,CreationTimestamp:2019-12-24 15:11:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:11:19.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5168" for this suite.
Dec 24 15:11:25.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:11:25.857: INFO: namespace watch-5168 deletion completed in 6.252398342s

• [SLOW TEST:16.534 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
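
What the watch spec verifies: a watch filtered by a label selector translates label changes into lifecycle events, so the object "leaves" the watch as DELETED when its label stops matching and "re-enters" as ADDED when the label is restored. Reproducible with two shells and a recent kubectl (names are illustrative; --output-watch-events prints the event type alongside each object):

  # shell 1: watch only configmaps whose label matches
  kubectl get configmaps -l watch-this-configmap=watched \
    --watch --output-watch-events

  # shell 2: drive the object through the same transitions as the spec
  kubectl create configmap watch-demo
  kubectl label configmap watch-demo watch-this-configmap=watched               # ADDED
  kubectl label configmap watch-demo watch-this-configmap=elsewhere --overwrite # DELETED
  kubectl label configmap watch-demo watch-this-configmap=watched --overwrite   # ADDED
  kubectl delete configmap watch-demo                                           # DELETED
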
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:11:25.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:11:32.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4547" for this suite.
Dec 24 15:11:38.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:11:38.460: INFO: namespace namespaces-4547 deletion completed in 6.138065576s
STEP: Destroying namespace "nsdeletetest-8468" for this suite.
Dec 24 15:11:38.463: INFO: Namespace nsdeletetest-8468 was already deleted
STEP: Destroying namespace "nsdeletetest-6488" for this suite.
Dec 24 15:11:44.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:11:44.665: INFO: namespace nsdeletetest-6488 deletion completed in 6.202045582s

• [SLOW TEST:18.808 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
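
The namespace spec relies on cascading deletion: removing a namespace removes every namespaced object inside it, so a recreated namespace of the same name comes back empty. By hand (names illustrative):

  kubectl create namespace nsdelete-demo
  kubectl create service clusterip test-service --tcp=80:80 -n nsdelete-demo
  kubectl delete namespace nsdelete-demo --wait=true

  # Same name, fresh namespace: the service did not survive.
  kubectl create namespace nsdelete-demo
  kubectl get services -n nsdelete-demo    # No resources found
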
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:11:44.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 24 15:11:44.730: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:11:58.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8634" for this suite.
Dec 24 15:12:04.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:12:04.500: INFO: namespace init-container-8634 deletion completed in 6.195979029s

• [SLOW TEST:19.835 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
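
The init-container spec encodes a contract: with restartPolicy Never, a failing init container is not retried, the app containers never start, and the pod phase goes to Failed. A sketch (names illustrative):

  # init-fail-demo.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-fail-demo
  spec:
    restartPolicy: Never
    initContainers:
    - name: init-fails
      image: docker.io/library/busybox:1.29
      command: ["/bin/false"]     # exits non-zero
    containers:
    - name: app
      image: docker.io/library/busybox:1.29
      command: ["/bin/true"]      # should never run

  kubectl apply -f init-fail-demo.yaml
  kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # Failed
  # The app container never left Waiting:
  kubectl get pod init-fail-demo -o jsonpath='{.status.containerStatuses[0].state}'
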
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 24 15:12:04.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 24 15:12:04.616: INFO: Waiting up to 5m0s for pod "pod-a0e4568b-b43e-4d7c-8bd9-efe1a601fe97" in namespace "emptydir-6518" to be "success or failure"
Dec 24 15:12:04.620: INFO: Pod "pod-a0e4568b-b43e-4d7c-8bd9-efe1a601fe97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099911ms
Dec 24 15:12:06.669: INFO: Pod "pod-a0e4568b-b43e-4d7c-8bd9-efe1a601fe97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053200615s
Dec 24 15:12:08.678: INFO: Pod "pod-a0e4568b-b43e-4d7c-8bd9-efe1a601fe97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062151556s
Dec 24 15:12:10.689: INFO: Pod "pod-a0e4568b-b43e-4d7c-8bd9-efe1a601fe97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073555772s
Dec 24 15:12:12.699: INFO: Pod "pod-a0e4568b-b43e-4d7c-8bd9-efe1a601fe97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083391482s
STEP: Saw pod success
Dec 24 15:12:12.699: INFO: Pod "pod-a0e4568b-b43e-4d7c-8bd9-efe1a601fe97" satisfied condition "success or failure"
Dec 24 15:12:12.704: INFO: Trying to get logs from node iruya-node pod pod-a0e4568b-b43e-4d7c-8bd9-efe1a601fe97 container test-container: 
STEP: delete the pod
Dec 24 15:12:12.757: INFO: Waiting for pod pod-a0e4568b-b43e-4d7c-8bd9-efe1a601fe97 to disappear
Dec 24 15:12:12.774: INFO: Pod pod-a0e4568b-b43e-4d7c-8bd9-efe1a601fe97 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 24 15:12:12.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6518" for this suite.
Dec 24 15:12:18.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 24 15:12:18.941: INFO: namespace emptydir-6518 deletion completed in 6.153001784s

• [SLOW TEST:14.441 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
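
The (root,0644,default) tuple in the spec name decodes as: run as root, expect file mode 0644, use the default disk-backed emptyDir medium (as opposed to medium: Memory, which is tmpfs). The e2e test uses a purpose-built mount-test image; an approximation with a stock shell (names illustrative):

  # emptydir-mode-demo.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ["/bin/sh", "-c",
                "echo hi > /mnt/volume/f && chmod 0644 /mnt/volume/f && ls -l /mnt/volume"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/volume
    volumes:
    - name: scratch
      emptyDir: {}    # default medium: backed by node disk

  kubectl apply -f emptydir-mode-demo.yaml
  kubectl logs emptydir-mode-demo    # expect -rw-r--r-- on /mnt/volume/f
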
Dec 24 15:12:18.941: INFO: Running AfterSuite actions on all nodes
Dec 24 15:12:18.941: INFO: Running AfterSuite actions on node 1
Dec 24 15:12:18.941: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 8169.438 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (8169.85s)
FAIL
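
Net result: 214 of 215 conformance specs passed; the single failure is the StatefulSet "recreate evicted statefulset" spec at statefulset.go:769. Since the suite is a compiled Ginkgo binary, that spec can be re-run in isolation with a focus regex while debugging (a sketch, assuming the same e2e.test binary and kubeconfig used for this run):

  ./e2e.test --kubeconfig=/root/.kube/config \
    --ginkgo.focus='Should recreate evicted statefulset'
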