I1220 12:56:10.438320 8 e2e.go:243] Starting e2e run "723963dc-003b-4ece-954c-dab8a15cf56a" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1576846568 - Will randomize all specs
Will run 215 of 4412 specs

Dec 20 12:56:10.797: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 12:56:10.804: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 20 12:56:10.835: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 20 12:56:10.863: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 20 12:56:10.863: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 20 12:56:10.863: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 20 12:56:10.872: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 20 12:56:10.872: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 20 12:56:10.872: INFO: e2e test version: v1.15.7
Dec 20 12:56:10.874: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 12:56:10.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
Dec 20 12:56:11.014: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 12:56:17.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7929" for this suite.
Dec 20 12:56:25.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:56:25.766: INFO: namespace namespaces-7929 deletion completed in 8.189448603s
STEP: Destroying namespace "nsdeletetest-7988" for this suite.
Dec 20 12:56:25.776: INFO: Namespace nsdeletetest-7988 was already deleted
STEP: Destroying namespace "nsdeletetest-6450" for this suite.
Dec 20 12:56:31.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:56:31.992: INFO: namespace nsdeletetest-6450 deletion completed in 6.216526555s
• [SLOW TEST:21.118 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
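The spec above asserts that Services are garbage-collected together with their namespace. As a minimal sketch of the objects involved (names here are hypothetical; the suite generates random ones such as nsdeletetest-6450):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-demo        # hypothetical name
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdeletetest-demo
spec:
  selector:
    app: demo
  ports:
  - port: 80                     # removed together with the namespace
```

Deleting the namespace (`kubectl delete namespace nsdeletetest-demo`) removes the Service with it; a namespace recreated under the same name starts out empty, which is what the "Verifying there is no service in the namespace" step checks.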
SSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 12:56:31.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-7f6ad9fd-21c3-4781-983d-a05a728586b4
STEP: Creating a pod to test consume secrets
Dec 20 12:56:32.191: INFO: Waiting up to 5m0s for pod "pod-secrets-42bc2cf1-ebb6-46d9-bf0e-9b9e7346258b" in namespace "secrets-8703" to be "success or failure"
Dec 20 12:56:32.298: INFO: Pod "pod-secrets-42bc2cf1-ebb6-46d9-bf0e-9b9e7346258b": Phase="Pending", Reason="", readiness=false. Elapsed: 107.115989ms
Dec 20 12:56:34.314: INFO: Pod "pod-secrets-42bc2cf1-ebb6-46d9-bf0e-9b9e7346258b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123421897s
Dec 20 12:56:36.333: INFO: Pod "pod-secrets-42bc2cf1-ebb6-46d9-bf0e-9b9e7346258b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142634817s
Dec 20 12:56:38.345: INFO: Pod "pod-secrets-42bc2cf1-ebb6-46d9-bf0e-9b9e7346258b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15400481s
Dec 20 12:56:40.434: INFO: Pod "pod-secrets-42bc2cf1-ebb6-46d9-bf0e-9b9e7346258b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.242785434s
Dec 20 12:56:42.443: INFO: Pod "pod-secrets-42bc2cf1-ebb6-46d9-bf0e-9b9e7346258b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.252364167s
Dec 20 12:56:44.454: INFO: Pod "pod-secrets-42bc2cf1-ebb6-46d9-bf0e-9b9e7346258b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.262805083s
Dec 20 12:56:46.469: INFO: Pod "pod-secrets-42bc2cf1-ebb6-46d9-bf0e-9b9e7346258b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.277756743s
STEP: Saw pod success
Dec 20 12:56:46.469: INFO: Pod "pod-secrets-42bc2cf1-ebb6-46d9-bf0e-9b9e7346258b" satisfied condition "success or failure"
Dec 20 12:56:46.475: INFO: Trying to get logs from node iruya-node pod pod-secrets-42bc2cf1-ebb6-46d9-bf0e-9b9e7346258b container secret-volume-test:
STEP: delete the pod
Dec 20 12:56:46.573: INFO: Waiting for pod pod-secrets-42bc2cf1-ebb6-46d9-bf0e-9b9e7346258b to disappear
Dec 20 12:56:46.587: INFO: Pod pod-secrets-42bc2cf1-ebb6-46d9-bf0e-9b9e7346258b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 12:56:46.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8703" for this suite.
Dec 20 12:56:52.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:56:52.937: INFO: namespace secrets-8703 deletion completed in 6.337203341s
• [SLOW TEST:20.944 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
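A hedged sketch of the kind of "consume secrets" pod this spec creates (all names hypothetical): a secret volume with defaultMode plus a pod-level securityContext that runs as a non-root UID and sets fsGroup, so the projected files get the expected permission bits and group ownership.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo         # hypothetical
spec:
  securityContext:
    runAsUser: 1000              # non-root
    fsGroup: 1001                # group ownership applied to the volume
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo
      defaultMode: 0440          # octal file mode for the projected keys
```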
SSSSSS
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 12:56:52.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 20 12:56:53.008: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7b15c1d-23c5-4f44-956e-0f15298e5a0f" in namespace "downward-api-1925" to be "success or failure"
Dec 20 12:56:53.133: INFO: Pod "downwardapi-volume-a7b15c1d-23c5-4f44-956e-0f15298e5a0f": Phase="Pending", Reason="", readiness=false. Elapsed: 125.354199ms
Dec 20 12:56:55.144: INFO: Pod "downwardapi-volume-a7b15c1d-23c5-4f44-956e-0f15298e5a0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136313638s
Dec 20 12:56:57.152: INFO: Pod "downwardapi-volume-a7b15c1d-23c5-4f44-956e-0f15298e5a0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144362624s
Dec 20 12:57:00.414: INFO: Pod "downwardapi-volume-a7b15c1d-23c5-4f44-956e-0f15298e5a0f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.405692949s
Dec 20 12:57:02.432: INFO: Pod "downwardapi-volume-a7b15c1d-23c5-4f44-956e-0f15298e5a0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.423568966s
STEP: Saw pod success
Dec 20 12:57:02.432: INFO: Pod "downwardapi-volume-a7b15c1d-23c5-4f44-956e-0f15298e5a0f" satisfied condition "success or failure"
Dec 20 12:57:02.440: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a7b15c1d-23c5-4f44-956e-0f15298e5a0f container client-container:
STEP: delete the pod
Dec 20 12:57:02.550: INFO: Waiting for pod downwardapi-volume-a7b15c1d-23c5-4f44-956e-0f15298e5a0f to disappear
Dec 20 12:57:02.638: INFO: Pod downwardapi-volume-a7b15c1d-23c5-4f44-956e-0f15298e5a0f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 12:57:02.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1925" for this suite.
Dec 20 12:57:08.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:57:08.855: INFO: namespace downward-api-1925 deletion completed in 6.202913289s
• [SLOW TEST:15.917 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
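A sketch of a downward API volume with a per-item mode, in the spirit of the spec above (names hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo  # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400               # the per-item mode the test asserts on
```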
SSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 12:57:08.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-0deec372-7d50-4cc7-8832-af5a3849b024
STEP: Creating a pod to test consume configMaps
Dec 20 12:57:09.035: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-59ad758e-3603-48dc-ba17-c269912170c1" in namespace "projected-7328" to be "success or failure"
Dec 20 12:57:09.043: INFO: Pod "pod-projected-configmaps-59ad758e-3603-48dc-ba17-c269912170c1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.443885ms
Dec 20 12:57:11.050: INFO: Pod "pod-projected-configmaps-59ad758e-3603-48dc-ba17-c269912170c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014982496s
Dec 20 12:57:13.084: INFO: Pod "pod-projected-configmaps-59ad758e-3603-48dc-ba17-c269912170c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048835907s
Dec 20 12:57:15.093: INFO: Pod "pod-projected-configmaps-59ad758e-3603-48dc-ba17-c269912170c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057241582s
Dec 20 12:57:17.098: INFO: Pod "pod-projected-configmaps-59ad758e-3603-48dc-ba17-c269912170c1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062223339s
Dec 20 12:57:19.105: INFO: Pod "pod-projected-configmaps-59ad758e-3603-48dc-ba17-c269912170c1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069731801s
Dec 20 12:57:21.218: INFO: Pod "pod-projected-configmaps-59ad758e-3603-48dc-ba17-c269912170c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.182066087s
STEP: Saw pod success
Dec 20 12:57:21.218: INFO: Pod "pod-projected-configmaps-59ad758e-3603-48dc-ba17-c269912170c1" satisfied condition "success or failure"
Dec 20 12:57:21.227: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-59ad758e-3603-48dc-ba17-c269912170c1 container projected-configmap-volume-test:
STEP: delete the pod
Dec 20 12:57:21.318: INFO: Waiting for pod pod-projected-configmaps-59ad758e-3603-48dc-ba17-c269912170c1 to disappear
Dec 20 12:57:21.410: INFO: Pod pod-projected-configmaps-59ad758e-3603-48dc-ba17-c269912170c1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 12:57:21.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7328" for this suite.
Dec 20 12:57:27.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:57:27.553: INFO: namespace projected-7328 deletion completed in 6.133543509s
• [SLOW TEST:18.697 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
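"With mappings" means the ConfigMap keys are projected to explicit paths via items rather than one file per key. A hedged sketch (hypothetical names):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-demo   # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
          items:
          - key: data-1            # key in the ConfigMap
            path: path/to/data-2   # file path it is mapped to
```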
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 12:57:27.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-9be88487-1fcc-48aa-a2a9-ddb91b03045e
STEP: Creating a pod to test consume secrets
Dec 20 12:57:27.685: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ea04e836-ee89-4144-b387-8fa6dee7e317" in namespace "projected-231" to be "success or failure"
Dec 20 12:57:27.702: INFO: Pod "pod-projected-secrets-ea04e836-ee89-4144-b387-8fa6dee7e317": Phase="Pending", Reason="", readiness=false. Elapsed: 16.323047ms
Dec 20 12:57:29.717: INFO: Pod "pod-projected-secrets-ea04e836-ee89-4144-b387-8fa6dee7e317": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031334822s
Dec 20 12:57:31.799: INFO: Pod "pod-projected-secrets-ea04e836-ee89-4144-b387-8fa6dee7e317": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113474155s
Dec 20 12:57:33.815: INFO: Pod "pod-projected-secrets-ea04e836-ee89-4144-b387-8fa6dee7e317": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129571564s
Dec 20 12:57:35.831: INFO: Pod "pod-projected-secrets-ea04e836-ee89-4144-b387-8fa6dee7e317": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145595302s
Dec 20 12:57:37.844: INFO: Pod "pod-projected-secrets-ea04e836-ee89-4144-b387-8fa6dee7e317": Phase="Pending", Reason="", readiness=false. Elapsed: 10.158840677s
Dec 20 12:57:39.855: INFO: Pod "pod-projected-secrets-ea04e836-ee89-4144-b387-8fa6dee7e317": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.170012229s
STEP: Saw pod success
Dec 20 12:57:39.856: INFO: Pod "pod-projected-secrets-ea04e836-ee89-4144-b387-8fa6dee7e317" satisfied condition "success or failure"
Dec 20 12:57:39.863: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ea04e836-ee89-4144-b387-8fa6dee7e317 container secret-volume-test:
STEP: delete the pod
Dec 20 12:57:40.078: INFO: Waiting for pod pod-projected-secrets-ea04e836-ee89-4144-b387-8fa6dee7e317 to disappear
Dec 20 12:57:40.089: INFO: Pod pod-projected-secrets-ea04e836-ee89-4144-b387-8fa6dee7e317 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 12:57:40.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-231" for this suite.
Dec 20 12:57:46.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:57:46.543: INFO: namespace projected-231 deletion completed in 6.447953736s
• [SLOW TEST:18.990 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
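The "multiple volumes" spec mounts the same projected secret through two separate volumes of one pod. A sketch (hypothetical names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-demo
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-demo  # same secret, second volume
```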
SSSS
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 12:57:46.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Dec 20 12:57:46.769: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9985" to be "success or failure"
Dec 20 12:57:46.797: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 27.740704ms
Dec 20 12:57:48.807: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038338233s
Dec 20 12:57:50.815: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046544721s
Dec 20 12:57:52.823: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054238572s
Dec 20 12:57:54.843: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07403318s
Dec 20 12:57:56.874: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.104848425s
Dec 20 12:57:58.889: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.119595076s
Dec 20 12:58:00.924: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.15548173s
Dec 20 12:58:03.049: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.280413392s
STEP: Saw pod success
Dec 20 12:58:03.049: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 20 12:58:03.061: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1:
STEP: delete the pod
Dec 20 12:58:03.254: INFO: Waiting for pod pod-host-path-test to disappear
Dec 20 12:58:03.264: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 12:58:03.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9985" for this suite.
Dec 20 12:58:09.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:58:09.422: INFO: namespace hostpath-9985 deletion completed in 6.152149639s
• [SLOW TEST:22.879 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
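The hostPath spec checks the mode bits of the mounted directory from inside the container. A sketch of such a pod (hypothetical path and names; the real test uses its own image and mount layout):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-demo         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]  # prints the volume's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
```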
SSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 12:58:09.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-4e733ff2-dac5-4f71-8194-827acaf79844
STEP: Creating a pod to test consume secrets
Dec 20 12:58:09.545: INFO: Waiting up to 5m0s for pod "pod-secrets-99bcd693-a8cd-4c36-8463-3db4a5d026fa" in namespace "secrets-6845" to be "success or failure"
Dec 20 12:58:09.579: INFO: Pod "pod-secrets-99bcd693-a8cd-4c36-8463-3db4a5d026fa": Phase="Pending", Reason="", readiness=false. Elapsed: 33.868336ms
Dec 20 12:58:11.586: INFO: Pod "pod-secrets-99bcd693-a8cd-4c36-8463-3db4a5d026fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0406938s
Dec 20 12:58:13.596: INFO: Pod "pod-secrets-99bcd693-a8cd-4c36-8463-3db4a5d026fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050730761s
Dec 20 12:58:15.602: INFO: Pod "pod-secrets-99bcd693-a8cd-4c36-8463-3db4a5d026fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05719112s
Dec 20 12:58:17.612: INFO: Pod "pod-secrets-99bcd693-a8cd-4c36-8463-3db4a5d026fa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066725717s
Dec 20 12:58:19.625: INFO: Pod "pod-secrets-99bcd693-a8cd-4c36-8463-3db4a5d026fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079503745s
STEP: Saw pod success
Dec 20 12:58:19.625: INFO: Pod "pod-secrets-99bcd693-a8cd-4c36-8463-3db4a5d026fa" satisfied condition "success or failure"
Dec 20 12:58:19.630: INFO: Trying to get logs from node iruya-node pod pod-secrets-99bcd693-a8cd-4c36-8463-3db4a5d026fa container secret-volume-test:
STEP: delete the pod
Dec 20 12:58:19.922: INFO: Waiting for pod pod-secrets-99bcd693-a8cd-4c36-8463-3db4a5d026fa to disappear
Dec 20 12:58:19.939: INFO: Pod pod-secrets-99bcd693-a8cd-4c36-8463-3db4a5d026fa no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 12:58:19.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6845" for this suite.
Dec 20 12:58:26.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:58:26.290: INFO: namespace secrets-6845 deletion completed in 6.340489807s
• [SLOW TEST:16.867 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
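This is the same pattern as the earlier secret-volume spec, but only defaultMode is exercised, with no non-root securityContext. A compact sketch (hypothetical names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-defaultmode-demo  # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo
      defaultMode: 0400          # octal; applied to every projected key
```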
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 12:58:26.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-29a4b480-8ce5-4b6c-a847-f21ff0c3dd2f
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 12:58:26.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9795" for this suite.
Dec 20 12:58:32.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:58:32.571: INFO: namespace secrets-9795 deletion completed in 6.191413748s
• [SLOW TEST:6.281 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
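This spec provokes a validation failure: the API server rejects a Secret whose data map contains an empty key. A sketch of the invalid object (hypothetical name):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo     # hypothetical
data:
  "": dmFsdWUtMQ==               # empty key: the apiserver rejects this object
```

Creating it fails validation (data keys must be non-empty and match [-._a-zA-Z0-9]+), so the test passes when the create call errors out.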
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 12:58:32.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 20 12:58:32.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bdbf1bb5-b071-4a5c-9324-b28c822b1e6f" in namespace "projected-8832" to be "success or failure"
Dec 20 12:58:32.745: INFO: Pod "downwardapi-volume-bdbf1bb5-b071-4a5c-9324-b28c822b1e6f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.67795ms
Dec 20 12:58:34.804: INFO: Pod "downwardapi-volume-bdbf1bb5-b071-4a5c-9324-b28c822b1e6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068162695s
Dec 20 12:58:36.812: INFO: Pod "downwardapi-volume-bdbf1bb5-b071-4a5c-9324-b28c822b1e6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076260814s
Dec 20 12:58:39.804: INFO: Pod "downwardapi-volume-bdbf1bb5-b071-4a5c-9324-b28c822b1e6f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.068121593s
Dec 20 12:58:41.815: INFO: Pod "downwardapi-volume-bdbf1bb5-b071-4a5c-9324-b28c822b1e6f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.0786773s
Dec 20 12:58:43.828: INFO: Pod "downwardapi-volume-bdbf1bb5-b071-4a5c-9324-b28c822b1e6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.091940583s
STEP: Saw pod success
Dec 20 12:58:43.828: INFO: Pod "downwardapi-volume-bdbf1bb5-b071-4a5c-9324-b28c822b1e6f" satisfied condition "success or failure"
Dec 20 12:58:43.833: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bdbf1bb5-b071-4a5c-9324-b28c822b1e6f container client-container:
STEP: delete the pod
Dec 20 12:58:44.207: INFO: Waiting for pod downwardapi-volume-bdbf1bb5-b071-4a5c-9324-b28c822b1e6f to disappear
Dec 20 12:58:44.212: INFO: Pod downwardapi-volume-bdbf1bb5-b071-4a5c-9324-b28c822b1e6f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 12:58:44.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8832" for this suite.
Dec 20 12:58:50.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:58:50.368: INFO: namespace projected-8832 deletion completed in 6.151057717s
• [SLOW TEST:17.796 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
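The projected variant of the downward API mode test wraps the same items under a projected volume source. A sketch (hypothetical names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400
```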
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 12:58:50.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-05a8182d-cb1a-4384-99d3-3f3f508dd843
STEP: Creating secret with name s-test-opt-upd-cadb2e9f-7cdf-424e-81ad-f88ebcad8d62
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-05a8182d-cb1a-4384-99d3-3f3f508dd843
STEP: Updating secret s-test-opt-upd-cadb2e9f-7cdf-424e-81ad-f88ebcad8d62
STEP: Creating secret with name s-test-opt-create-a0859f34-fc97-4f51-838a-d818b71222dc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:00:14.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4136" for this suite.
Dec 20 13:00:39.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:00:39.123: INFO: namespace secrets-4136 deletion completed in 24.148978572s
• [SLOW TEST:108.754 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
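"Optional" secret volumes tolerate a missing Secret, and the kubelet refreshes the volume when the Secret later appears, changes, or is deleted, which is what the STEP lines above walk through. A sketch of one such volume (hypothetical names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional-demo    # hypothetical
spec:
  containers:
  - name: opt-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/secret-volumes/create/data-1; sleep 5; done"]
    volumeMounts:
    - name: creates-volume
      mountPath: /etc/secret-volumes/create
  volumes:
  - name: creates-volume
    secret:
      secretName: s-test-opt-create-demo
      optional: true           # pod starts even if the secret does not exist yet
```

Updates propagate on the kubelet's sync period, which is why the spec waits to observe the change rather than asserting immediately, and why this test runs for well over a minute.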
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:00:39.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 20 13:00:39.222: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d83e5a5f-a2ce-4d56-83b8-02a23e3a067b" in namespace "downward-api-7932" to be "success or failure"
Dec 20 13:00:39.230: INFO: Pod "downwardapi-volume-d83e5a5f-a2ce-4d56-83b8-02a23e3a067b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.682578ms
Dec 20 13:00:41.239: INFO: Pod "downwardapi-volume-d83e5a5f-a2ce-4d56-83b8-02a23e3a067b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016977253s
Dec 20 13:00:43.247: INFO: Pod "downwardapi-volume-d83e5a5f-a2ce-4d56-83b8-02a23e3a067b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024991264s
Dec 20 13:00:45.597: INFO: Pod "downwardapi-volume-d83e5a5f-a2ce-4d56-83b8-02a23e3a067b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.374471663s
Dec 20 13:00:47.606: INFO: Pod "downwardapi-volume-d83e5a5f-a2ce-4d56-83b8-02a23e3a067b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.384153181s
Dec 20 13:00:49.615: INFO: Pod "downwardapi-volume-d83e5a5f-a2ce-4d56-83b8-02a23e3a067b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.392998354s
STEP: Saw pod success
Dec 20 13:00:49.615: INFO: Pod "downwardapi-volume-d83e5a5f-a2ce-4d56-83b8-02a23e3a067b" satisfied condition "success or failure"
Dec 20 13:00:49.619: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d83e5a5f-a2ce-4d56-83b8-02a23e3a067b container client-container:
STEP: delete the pod
Dec 20 13:00:49.922: INFO: Waiting for pod downwardapi-volume-d83e5a5f-a2ce-4d56-83b8-02a23e3a067b to disappear
Dec 20 13:00:49.930: INFO: Pod downwardapi-volume-d83e5a5f-a2ce-4d56-83b8-02a23e3a067b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:00:49.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7932" for this suite.
Dec 20 13:00:56.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:00:56.172: INFO: namespace downward-api-7932 deletion completed in 6.235201998s
• [SLOW TEST:17.048 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
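When a container declares no memory limit, a downward API resourceFieldRef for limits.memory reports the node's allocatable memory instead, which is the behavior asserted above. A sketch (hypothetical names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-limits-demo  # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container               # note: no resources.limits.memory set
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory        # falls back to node allocatable when unset
```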
SSSS
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:00:56.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-ba180687-ad70-4f83-a923-6d7c8be1eb07
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-ba180687-ad70-4f83-a923-6d7c8be1eb07
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:01:08.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7356" for this suite.
Dec 20 13:01:30.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:01:30.959: INFO: namespace configmap-7356 deletion completed in 22.162969913s
• [SLOW TEST:34.786 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
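The ConfigMap update spec mounts a configMap volume, edits the ConfigMap, and waits for the kubelet to refresh the files. A sketch (hypothetical names):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-demo  # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-demo
```

After e.g. `kubectl patch configmap configmap-test-upd-demo -p '{"data":{"data-1":"value-2"}}'`, the mounted file eventually reflects the new value; as with the optional-secrets spec, the wait spans a kubelet sync period, which is why this test takes tens of seconds.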
SSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:01:30.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 20 13:01:53.145: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2228 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 13:01:53.145: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 13:01:53.532: INFO: Exec stderr: ""
Dec 20 13:01:53.533: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2228 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 13:01:53.533: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 13:01:54.207: INFO: Exec stderr: ""
Dec 20 13:01:54.208: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2228 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 13:01:54.208: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 13:01:54.665: INFO: Exec stderr: ""
Dec 20 13:01:54.665: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2228 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 13:01:54.665: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 13:01:54.976: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 20 13:01:54.976: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2228 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 13:01:54.976: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 13:01:55.326: INFO: Exec stderr: ""
Dec 20 13:01:55.327: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2228 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 13:01:55.327: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 13:01:55.632: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 20 13:01:55.633: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2228 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 13:01:55.633: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 13:01:55.994: INFO: Exec stderr: ""
Dec 20 13:01:55.994: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2228 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 13:01:55.994: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 13:01:56.339: INFO: Exec stderr: ""
Dec 20 13:01:56.340: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2228 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 13:01:56.340: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 13:01:56.715: INFO: Exec stderr: ""
Dec 20 13:01:56.715: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2228 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 13:01:56.715: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 13:01:57.104: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:01:57.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2228" for this suite.
Dec 20 13:02:49.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:02:49.296: INFO: namespace e2e-kubelet-etc-hosts-2228 deletion completed in 52.183170321s
• [SLOW TEST:78.337 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
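The kubelet normally manages /etc/hosts for pods on the pod network, but not for hostNetwork pods, and not for a container that mounts its own file over /etc/hosts (the busybox-3 case above). A sketch of that opt-out (hypothetical names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-demo            # hypothetical
spec:
  containers:
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-etc-hosts
      mountPath: /etc/hosts      # explicit mount opts this container out of kubelet management
  volumes:
  - name: host-etc-hosts
    hostPath:
      path: /etc/hosts
      type: File
```

The verification then boils down to something like `kubectl exec test-pod-demo -c busybox-3 -- cat /etc/hosts` and comparing against the kubelet-managed content, which is what the ExecWithOptions calls above automate.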
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:02:49.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 20 13:02:49.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7726'
Dec 20 13:02:52.047: INFO: stderr: ""
Dec 20 13:02:52.048: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 20 13:02:52.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7726'
Dec 20 13:02:52.354: INFO: stderr: ""
Dec 20 13:02:52.355: INFO: stdout: "update-demo-nautilus-85jkg update-demo-nautilus-wr7k7 "
Dec 20 13:02:52.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85jkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7726'
Dec 20 13:02:52.632: INFO: stderr: ""
Dec 20 13:02:52.632: INFO: stdout: ""
Dec 20 13:02:52.632: INFO: update-demo-nautilus-85jkg is created but not running
Dec 20 13:02:57.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7726'
Dec 20 13:02:57.754: INFO: stderr: ""
Dec 20 13:02:57.754: INFO: stdout: "update-demo-nautilus-85jkg update-demo-nautilus-wr7k7 "
Dec 20 13:02:57.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85jkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7726'
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7726' Dec 20 13:02:57.882: INFO: stderr: "" Dec 20 13:02:57.883: INFO: stdout: "" Dec 20 13:02:57.883: INFO: update-demo-nautilus-85jkg is created but not running Dec 20 13:03:02.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7726' Dec 20 13:03:03.736: INFO: stderr: "" Dec 20 13:03:03.736: INFO: stdout: "update-demo-nautilus-85jkg update-demo-nautilus-wr7k7 " Dec 20 13:03:03.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85jkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7726' Dec 20 13:03:03.816: INFO: stderr: "" Dec 20 13:03:03.816: INFO: stdout: "" Dec 20 13:03:03.816: INFO: update-demo-nautilus-85jkg is created but not running Dec 20 13:03:08.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7726' Dec 20 13:03:08.947: INFO: stderr: "" Dec 20 13:03:08.947: INFO: stdout: "update-demo-nautilus-85jkg update-demo-nautilus-wr7k7 " Dec 20 13:03:08.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85jkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7726' Dec 20 13:03:09.035: INFO: stderr: "" Dec 20 13:03:09.035: INFO: stdout: "true" Dec 20 13:03:09.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85jkg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7726' Dec 20 13:03:09.143: INFO: stderr: "" Dec 20 13:03:09.143: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 20 13:03:09.143: INFO: validating pod update-demo-nautilus-85jkg Dec 20 13:03:09.156: INFO: got data: { "image": "nautilus.jpg" } Dec 20 13:03:09.157: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 20 13:03:09.157: INFO: update-demo-nautilus-85jkg is verified up and running Dec 20 13:03:09.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wr7k7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7726' Dec 20 13:03:09.223: INFO: stderr: "" Dec 20 13:03:09.223: INFO: stdout: "true" Dec 20 13:03:09.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wr7k7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7726' Dec 20 13:03:09.303: INFO: stderr: "" Dec 20 13:03:09.303: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 20 13:03:09.303: INFO: validating pod update-demo-nautilus-wr7k7 Dec 20 13:03:09.311: INFO: got data: { "image": "nautilus.jpg" } Dec 20 13:03:09.311: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 20 13:03:09.311: INFO: update-demo-nautilus-wr7k7 is verified up and running STEP: scaling down the replication controller Dec 20 13:03:09.313: INFO: scanned /root for discovery docs: Dec 20 13:03:09.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7726' Dec 20 13:03:10.421: INFO: stderr: "" Dec 20 13:03:10.421: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 20 13:03:10.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7726' Dec 20 13:03:10.678: INFO: stderr: "" Dec 20 13:03:10.678: INFO: stdout: "update-demo-nautilus-85jkg update-demo-nautilus-wr7k7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 20 13:03:15.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7726' Dec 20 13:03:15.877: INFO: stderr: "" Dec 20 13:03:15.878: INFO: stdout: "update-demo-nautilus-85jkg " Dec 20 13:03:15.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85jkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7726' Dec 20 13:03:15.984: INFO: stderr: "" Dec 20 13:03:15.984: INFO: stdout: "true" Dec 20 13:03:15.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85jkg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7726' Dec 20 13:03:16.118: INFO: stderr: "" Dec 20 13:03:16.118: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 20 13:03:16.118: INFO: validating pod update-demo-nautilus-85jkg Dec 20 13:03:16.123: INFO: got data: { "image": "nautilus.jpg" } Dec 20 13:03:16.123: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 20 13:03:16.123: INFO: update-demo-nautilus-85jkg is verified up and running STEP: scaling up the replication controller Dec 20 13:03:16.127: INFO: scanned /root for discovery docs: Dec 20 13:03:16.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7726' Dec 20 13:03:17.305: INFO: stderr: "" Dec 20 13:03:17.305: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Dec 20 13:03:17.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7726'
Dec 20 13:03:17.423: INFO: stderr: ""
Dec 20 13:03:17.423: INFO: stdout: "update-demo-nautilus-85jkg update-demo-nautilus-b6xbf "
Dec 20 13:03:17.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85jkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7726'
Dec 20 13:03:17.600: INFO: stderr: ""
Dec 20 13:03:17.600: INFO: stdout: "true"
Dec 20 13:03:17.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85jkg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7726'
Dec 20 13:03:17.697: INFO: stderr: ""
Dec 20 13:03:17.697: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 13:03:17.698: INFO: validating pod update-demo-nautilus-85jkg
Dec 20 13:03:17.703: INFO: got data: { "image": "nautilus.jpg" }
Dec 20 13:03:17.703: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 13:03:17.703: INFO: update-demo-nautilus-85jkg is verified up and running
Dec 20 13:03:17.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b6xbf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7726'
Dec 20 13:03:17.869: INFO: stderr: ""
Dec 20 13:03:17.869: INFO: stdout: ""
Dec 20 13:03:17.870: INFO: update-demo-nautilus-b6xbf is created but not running
Dec 20 13:03:22.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7726'
Dec 20 13:03:23.076: INFO: stderr: ""
Dec 20 13:03:23.076: INFO: stdout: "update-demo-nautilus-85jkg update-demo-nautilus-b6xbf "
Dec 20 13:03:23.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85jkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7726'
Dec 20 13:03:23.315: INFO: stderr: ""
Dec 20 13:03:23.316: INFO: stdout: "true"
Dec 20 13:03:23.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85jkg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7726'
Dec 20 13:03:23.400: INFO: stderr: ""
Dec 20 13:03:23.401: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 13:03:23.401: INFO: validating pod update-demo-nautilus-85jkg
Dec 20 13:03:23.432: INFO: got data: { "image": "nautilus.jpg" }
Dec 20 13:03:23.432: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 13:03:23.433: INFO: update-demo-nautilus-85jkg is verified up and running
Dec 20 13:03:23.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b6xbf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7726'
Dec 20 13:03:23.532: INFO: stderr: ""
Dec 20 13:03:23.532: INFO: stdout: ""
Dec 20 13:03:23.532: INFO: update-demo-nautilus-b6xbf is created but not running
Dec 20 13:03:28.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7726'
Dec 20 13:03:28.686: INFO: stderr: ""
Dec 20 13:03:28.686: INFO: stdout: "update-demo-nautilus-85jkg update-demo-nautilus-b6xbf "
Dec 20 13:03:28.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85jkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7726'
Dec 20 13:03:28.790: INFO: stderr: ""
Dec 20 13:03:28.790: INFO: stdout: "true"
Dec 20 13:03:28.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85jkg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7726'
Dec 20 13:03:28.902: INFO: stderr: ""
Dec 20 13:03:28.902: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 13:03:28.902: INFO: validating pod update-demo-nautilus-85jkg
Dec 20 13:03:28.912: INFO: got data: { "image": "nautilus.jpg" }
Dec 20 13:03:28.912: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 13:03:28.912: INFO: update-demo-nautilus-85jkg is verified up and running
Dec 20 13:03:28.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b6xbf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7726'
Dec 20 13:03:29.048: INFO: stderr: ""
Dec 20 13:03:29.048: INFO: stdout: "true"
Dec 20 13:03:29.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b6xbf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7726'
Dec 20 13:03:29.157: INFO: stderr: ""
Dec 20 13:03:29.157: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 13:03:29.158: INFO: validating pod update-demo-nautilus-b6xbf
Dec 20 13:03:29.176: INFO: got data: { "image": "nautilus.jpg" }
Dec 20 13:03:29.176: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 13:03:29.176: INFO: update-demo-nautilus-b6xbf is verified up and running STEP: using delete to clean up resources Dec 20 13:03:29.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7726' Dec 20 13:03:29.285: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 20 13:03:29.285: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 20 13:03:29.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7726' Dec 20 13:03:29.358: INFO: stderr: "No resources found.\n" Dec 20 13:03:29.359: INFO: stdout: "" Dec 20 13:03:29.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7726 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 20 13:03:29.465: INFO: stderr: "" Dec 20 13:03:29.465: INFO: stdout: "update-demo-nautilus-85jkg\nupdate-demo-nautilus-b6xbf\n" Dec 20 13:03:29.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7726' Dec 20 13:03:31.076: INFO: stderr: "No resources found.\n" Dec 20 13:03:31.076: INFO: stdout: "" Dec 20 13:03:31.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7726 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 20 13:03:31.373: INFO: stderr: "" Dec 20 13:03:31.373: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:03:31.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7726" for this suite. 
Dec 20 13:03:55.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:03:55.641: INFO: namespace kubectl-7726 deletion completed in 24.245642569s • [SLOW TEST:66.345 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:03:55.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1220 13:04:27.129700 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 20 13:04:27.129: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:04:27.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4765" for this suite. 
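A note on the polling pattern in the Update Demo test above: every readiness check runs through a kubectl go-template rather than jsonpath, and an empty stdout (as for update-demo-nautilus-b6xbf early in the run) means "not running yet, retry in 5s". The two probes can be reproduced standalone as below; the pod and namespace names are the ones from this particular run and are only illustrative, and `exists` is a helper kubectl registers for its template printer, not part of plain Go text/template:

    # List the pods selected by the replication controller's label
    kubectl get pods -l name=update-demo --namespace=kubectl-7726 \
        -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

    # Print "true" only when the named container has reached state.running
    kubectl get pods update-demo-nautilus-85jkg --namespace=kubectl-7726 \
        -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'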
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:03:55.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1220 13:04:27.129700 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 13:04:27.129: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:04:27.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4765" for this suite.
Dec 20 13:04:34.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:04:34.965: INFO: namespace gc-4765 deletion completed in 7.830563902s

• [SLOW TEST:39.324 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
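The garbage collector test above deletes the Deployment with deleteOptions.PropagationPolicy=Orphan, then waits 30 seconds to confirm the ReplicaSet is not collected. On the v1.15 kubectl used in this run the same behavior is reachable with --cascade=false (later kubectl releases spell it --cascade=orphan); the deployment name below is hypothetical:

    kubectl delete deployment demo-deployment --cascade=false
    # The ReplicaSet should still be listed, with its ownerReference to the Deployment removed
    kubectl get rs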
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:04:34.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3455
I1220 13:04:35.193755 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3455, replica count: 1
I1220 13:04:36.245395 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1220 13:04:37.246018 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1220 13:04:38.246660 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1220 13:04:39.247120 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1220 13:04:40.247642 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1220 13:04:41.248773 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1220 13:04:42.249616 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1220 13:04:43.250112 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1220 13:04:44.250677 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1220 13:04:45.251274 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1220 13:04:46.252072 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1220 13:04:47.252969 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 20 13:04:47.438: INFO: Created: latency-svc-mdbgx
Dec 20 13:04:47.461: INFO: Got endpoints: latency-svc-mdbgx [108.044555ms]
Dec 20 13:04:47.661: INFO: Created: latency-svc-gpc7m
Dec 20 13:04:47.670: INFO: Got endpoints: latency-svc-gpc7m [207.928348ms]
Dec 20 13:04:47.835: INFO: Created: latency-svc-4h5wt
Dec 20 13:04:47.887: INFO: Got endpoints: latency-svc-4h5wt [425.441067ms]
Dec 20 13:04:47.890: INFO: Created: latency-svc-ks7hz
Dec 20 13:04:47.908: INFO: Got endpoints: latency-svc-ks7hz [445.291108ms]
Dec 20 13:04:48.010: INFO: Created: latency-svc-8vc7z
Dec 20 13:04:48.019: INFO: Got endpoints: latency-svc-8vc7z [555.815845ms]
Dec 20 13:04:48.093: INFO: Created: latency-svc-8fscw
Dec 20 13:04:48.213: INFO: Got endpoints: latency-svc-8fscw [750.020232ms]
Dec 20 13:04:48.218: INFO: Created: latency-svc-jlxbc
Dec 20 13:04:48.225: INFO: Got endpoints: latency-svc-jlxbc [762.341232ms]
Dec 20 13:04:48.286: INFO: Created: latency-svc-rpf54
Dec 20 13:04:48.302: INFO: Got endpoints: latency-svc-rpf54 [840.734074ms]
Dec 20 13:04:48.484: INFO: Created: latency-svc-9cwnm
Dec 20 13:04:48.526: INFO: Got endpoints: latency-svc-9cwnm [1.062939466s]
Dec 20 13:04:48.539: INFO: Created: latency-svc-rmzbx
Dec 20 13:04:48.629: INFO: Got endpoints: latency-svc-rmzbx [1.166365058s]
Dec 20 13:04:48.646: INFO: Created: latency-svc-qgk2f
Dec 20 13:04:51.299: INFO: Got endpoints: latency-svc-qgk2f [3.835330054s]
Dec 20 13:04:51.526: INFO: Created: latency-svc-ll7xg
Dec 20 13:04:51.536: INFO: Got endpoints: latency-svc-ll7xg [4.072955511s]
Dec 20 13:04:51.579: INFO: Created: latency-svc-z4bt7
Dec 20 13:04:51.584: INFO: Got endpoints: latency-svc-z4bt7 [4.121114591s]
Dec 20 13:04:51.701: INFO: Created: latency-svc-4kr55
Dec 20 13:04:51.722: INFO: Got endpoints: latency-svc-4kr55 [4.258347602s]
Dec 20 13:04:51.780: INFO: Created: latency-svc-ckpm9
Dec 20 13:04:51.899: INFO: Got endpoints: latency-svc-ckpm9 [4.435604788s]
Dec 20 13:04:51.903: INFO: Created: latency-svc-wf96m
Dec 20 13:04:51.925: INFO: Got endpoints: latency-svc-wf96m [4.462984244s]
Dec 20 13:04:51.987: INFO: Created: latency-svc-6x585
Dec 20 13:04:51.988: INFO: Got endpoints: latency-svc-6x585 [4.317545579s]
Dec 20 13:04:52.127: INFO: Created: latency-svc-92tsj
Dec 20 13:04:52.143: INFO: Got endpoints: latency-svc-92tsj [4.255230611s]
Dec 20 13:04:52.261: INFO: Created: latency-svc-dk9g6
Dec 20 13:04:52.280: INFO: Got endpoints: latency-svc-dk9g6 [4.371970658s]
Dec 20 13:04:52.345: INFO: Created: latency-svc-j72h5
Dec 20 13:04:52.458: INFO: Got endpoints: latency-svc-j72h5 [4.438475568s]
Dec 20 13:04:52.470: INFO: Created: latency-svc-vb7kx
Dec 20 13:04:52.480: INFO: Got endpoints: latency-svc-vb7kx [4.266606506s]
Dec 20 13:04:52.532: INFO: Created: latency-svc-njs8r
Dec 20 13:04:52.687: INFO: Got endpoints: latency-svc-njs8r [4.461226162s]
Dec 20 13:04:52.735: INFO: Created: latency-svc-8g7d9
Dec 20 13:04:52.775: INFO: Got endpoints: latency-svc-8g7d9 [4.472406843s]
Dec 20 13:04:52.852: INFO: Created: latency-svc-kf97c
Dec 20 13:04:52.869: INFO: Got endpoints: latency-svc-kf97c [4.342338991s]
Dec 20 13:04:52.913: INFO: Created: latency-svc-2d8x6
Dec 20 13:04:53.002: INFO: Got endpoints: latency-svc-2d8x6 [4.372462949s]
Dec 20 13:04:53.009: INFO: Created: latency-svc-9c7gh
Dec 20 13:04:53.010: INFO: Got endpoints: latency-svc-9c7gh [1.711555042s]
Dec 20 13:04:53.054: INFO: Created: latency-svc-dtgmb
Dec 20 13:04:53.056: INFO: Got endpoints: latency-svc-dtgmb [1.519613207s]
Dec 20 13:04:53.097: INFO: Created: latency-svc-5czlc
Dec 20 13:04:53.159: INFO: Got endpoints: latency-svc-5czlc [1.575397794s]
Dec 20 13:04:53.191: INFO: Created: latency-svc-qgt2s
Dec 20 13:04:53.217: INFO: Got endpoints: latency-svc-qgt2s [1.495669657s]
Dec 20 13:04:53.265: INFO: Created: latency-svc-qpdmj
Dec 20 13:04:53.438: INFO: Got endpoints: latency-svc-qpdmj [278.932504ms]
Dec 20 13:04:53.465: INFO: Created: latency-svc-75ld6
Dec 20 13:04:53.481: INFO: Got endpoints: latency-svc-75ld6 [1.581630905s]
Dec 20 13:04:53.513: INFO: Created: latency-svc-9w5gh
Dec 20 13:04:53.528: INFO: Got endpoints: latency-svc-9w5gh [1.602424979s]
Dec 20 13:04:53.674: INFO: Created: latency-svc-nfj86
Dec 20 13:04:53.688: INFO: Got endpoints: latency-svc-nfj86 [1.699502279s]
Dec 20 13:04:53.728: INFO: Created: latency-svc-dxns9
Dec 20 13:04:53.747: INFO: Got endpoints: latency-svc-dxns9 [1.603678901s]
Dec 20 13:04:53.882: INFO: Created: latency-svc-krfv6
Dec 20 13:04:53.923: INFO: Got endpoints: latency-svc-krfv6 [1.64331946s]
Dec 20 13:04:53.967: INFO: Created: latency-svc-7x45d
Dec 20 13:04:54.071: INFO: Got endpoints: latency-svc-7x45d [1.613234195s]
Dec 20 13:04:54.080: INFO: Created: latency-svc-cp6dh
Dec 20 13:04:54.090: INFO: Got endpoints: latency-svc-cp6dh [1.610065623s]
Dec 20 13:04:54.132: INFO: Created: latency-svc-hk958
Dec 20 13:04:54.146: INFO: Got endpoints: latency-svc-hk958 [1.458929781s]
Dec 20 13:04:54.222: INFO: Created: latency-svc-rx7zm
Dec 20 13:04:54.229: INFO: Got endpoints: latency-svc-rx7zm [1.453630216s]
Dec 20 13:04:54.300: INFO: Created: latency-svc-sm5nz
Dec 20 13:04:54.312: INFO: Got endpoints: latency-svc-sm5nz [1.442542552s]
Dec 20 13:04:54.412: INFO: Created: latency-svc-nhr92
Dec 20 13:04:54.420: INFO: Got endpoints: latency-svc-nhr92 [1.417181269s]
Dec 20 13:04:54.554: INFO: Created: latency-svc-bmz9w
Dec 20 13:04:54.575: INFO: Got endpoints: latency-svc-bmz9w [1.563973305s]
Dec 20 13:04:54.637: INFO: Created: latency-svc-s4s4z
Dec 20 13:04:54.703: INFO: Got endpoints: latency-svc-s4s4z [1.646275965s]
Dec 20 13:04:54.747: INFO: Created: latency-svc-ldxwm
Dec 20 13:04:54.757: INFO: Got endpoints: latency-svc-ldxwm [1.53883051s]
Dec 20 13:04:54.871: INFO: Created: latency-svc-p74gc
Dec 20 13:04:54.872: INFO: Got endpoints: latency-svc-p74gc [1.432362956s]
Dec 20 13:04:54.921: INFO: Created: latency-svc-p9rlc
Dec 20 13:04:54.943: INFO: Got endpoints: latency-svc-p9rlc [1.46199995s]
Dec 20 13:04:55.048: INFO: Created: latency-svc-xfvvs
Dec 20 13:04:55.048: INFO: Got endpoints: latency-svc-xfvvs [1.519869895s]
Dec 20 13:04:55.081: INFO: Created: latency-svc-d6dp5
Dec 20 13:04:55.084: INFO: Got endpoints: latency-svc-d6dp5 [1.396394254s]
Dec 20 13:04:55.130: INFO: Created: latency-svc-5q9dx
Dec 20 13:04:55.191: INFO: Got endpoints: latency-svc-5q9dx [1.443246288s]
Dec 20 13:04:55.249: INFO: Created: latency-svc-5twxb
Dec 20 13:04:55.269: INFO: Got endpoints: latency-svc-5twxb [1.344916823s]
Dec 20 13:04:55.362: INFO: Created: latency-svc-27b45
Dec 20 13:04:55.372: INFO: Got endpoints: latency-svc-27b45 [1.299785829s]
Dec 20 13:04:55.420: INFO: Created: latency-svc-p8npb
Dec 20 13:04:55.426: INFO: Got endpoints: latency-svc-p8npb [1.335835788s]
Dec 20 13:04:55.606: INFO: Created: latency-svc-bk2d4
Dec 20 13:04:55.619: INFO: Got endpoints: latency-svc-bk2d4 [1.471900332s]
Dec 20 13:04:55.703: INFO: Created: latency-svc-84xql
Dec 20 13:04:55.794: INFO: Got endpoints: latency-svc-84xql [1.564883168s]
Dec 20 13:04:55.840: INFO: Created: latency-svc-mrf6q
Dec 20 13:04:55.862: INFO: Got endpoints: latency-svc-mrf6q [1.549782286s]
Dec 20 13:04:56.016: INFO: Created: latency-svc-tjkpn
Dec 20 13:04:56.031: INFO: Got endpoints: latency-svc-tjkpn [1.61130613s]
Dec 20 13:04:56.083: INFO: Created: latency-svc-cnq5r
Dec 20 13:04:56.097: INFO: Got endpoints: latency-svc-cnq5r [1.521521659s]
Dec 20 13:04:56.195: INFO: Created: latency-svc-rcg4n
Dec 20 13:04:56.205: INFO: Got endpoints: latency-svc-rcg4n [1.501712851s]
Dec 20 13:04:56.258: INFO: Created: latency-svc-hcx66
Dec 20 13:04:56.272: INFO: Got endpoints: latency-svc-hcx66 [1.515817835s]
Dec 20 13:04:56.422: INFO: Created: latency-svc-p6ggr
Dec 20 13:04:56.451: INFO: Got endpoints: latency-svc-p6ggr [1.579393817s]
Dec 20 13:04:56.538: INFO: Created: latency-svc-h2w5c
Dec 20 13:04:56.610: INFO: Got endpoints: latency-svc-h2w5c [1.666399744s]
Dec 20 13:04:56.662: INFO: Created: latency-svc-9xqnh
Dec 20 13:04:56.726: INFO: Got endpoints: latency-svc-9xqnh [1.677652396s]
Dec 20 13:04:56.758: INFO: Created: latency-svc-fpwcd
Dec 20 13:04:56.786: INFO: Got endpoints: latency-svc-fpwcd [1.701750784s]
Dec 20 13:04:56.822: INFO: Created: latency-svc-znc9l
Dec 20 13:04:56.906: INFO: Got endpoints: latency-svc-znc9l [1.714500852s]
Dec 20 13:04:56.920: INFO: Created: latency-svc-88nkn
Dec 20 13:04:56.956: INFO: Got endpoints: latency-svc-88nkn [1.686611168s]
Dec 20 13:04:56.983: INFO: Created: latency-svc-bdzt8
Dec 20 13:04:57.075: INFO: Got endpoints: latency-svc-bdzt8 [1.703377296s]
Dec 20 13:04:57.092: INFO: Created: latency-svc-x7c9w
Dec 20 13:04:57.180: INFO: Created: latency-svc-ctp4d
Dec 20 13:04:57.180: INFO: Got endpoints: latency-svc-x7c9w [1.753522958s]
Dec 20 13:04:57.185: INFO: Got endpoints: latency-svc-ctp4d [1.565429836s]
Dec 20 13:04:57.325: INFO: Created: latency-svc-z6xv5
Dec 20 13:04:57.398: INFO: Got endpoints: latency-svc-z6xv5 [1.603739212s]
Dec 20 13:04:57.405: INFO: Created: latency-svc-g4cws
Dec 20 13:04:57.564: INFO: Got endpoints: latency-svc-g4cws [1.701875846s]
Dec 20 13:04:57.572: INFO: Created: latency-svc-ctc4d
Dec 20 13:04:57.575: INFO: Got endpoints: latency-svc-ctc4d [1.543466113s]
Dec 20 13:04:57.641: INFO: Created: latency-svc-wfl9l
Dec 20 13:04:57.753: INFO: Got endpoints: latency-svc-wfl9l [1.656532272s]
Dec 20 13:04:57.804: INFO: Created: latency-svc-k6bjx
Dec 20 13:04:57.848: INFO: Got endpoints: latency-svc-k6bjx [1.643183041s]
Dec 20 13:04:58.012: INFO: Created: latency-svc-bjsdv
Dec 20 13:04:58.047: INFO: Got endpoints: latency-svc-bjsdv [1.774628763s]
Dec 20 13:04:58.113: INFO: Created: latency-svc-wk6hn
Dec 20 13:04:58.297: INFO: Got endpoints: latency-svc-wk6hn [1.846064221s]
Dec 20 13:04:58.320: INFO: Created: latency-svc-r9q5m
Dec 20 13:04:58.331: INFO: Got endpoints: latency-svc-r9q5m [1.720897941s]
Dec 20 13:04:58.526: INFO: Created: latency-svc-vplwt
Dec 20 13:04:58.542: INFO: Got endpoints: latency-svc-vplwt [1.81577257s]
Dec 20 13:04:58.596: INFO: Created: latency-svc-wjgdp
Dec 20 13:04:58.607: INFO: Got endpoints: latency-svc-wjgdp [1.819943248s]
Dec 20 13:04:58.698: INFO: Created: latency-svc-p72kf
Dec 20 13:04:58.705: INFO: Got endpoints: latency-svc-p72kf [1.799615446s]
Dec 20 13:04:58.755: INFO: Created: latency-svc-898xk
Dec 20 13:04:58.865: INFO: Got endpoints: latency-svc-898xk [1.908192861s]
Dec 20 13:04:58.868: INFO: Created: latency-svc-c6jtj
Dec 20 13:04:58.873: INFO: Got endpoints: latency-svc-c6jtj [1.798046498s]
Dec 20 13:04:58.932: INFO: Created: latency-svc-hpdn9
Dec 20 13:04:58.941: INFO: Got endpoints: latency-svc-hpdn9 [1.760924228s]
Dec 20 13:04:59.211: INFO: Created: latency-svc-fd2x4
Dec 20 13:04:59.223: INFO: Got endpoints: latency-svc-fd2x4 [2.037894053s]
Dec 20 13:04:59.280: INFO: Created: latency-svc-tghpk
Dec 20 13:04:59.429: INFO: Got endpoints: latency-svc-tghpk [2.03064372s]
Dec 20 13:04:59.485: INFO: Created: latency-svc-kjwdw
Dec 20 13:04:59.489: INFO: Got endpoints: latency-svc-kjwdw [1.925147368s]
Dec 20 13:04:59.673: INFO: Created: latency-svc-nvbxj
Dec 20 13:04:59.682: INFO: Got endpoints: latency-svc-nvbxj [2.106985613s]
Dec 20 13:04:59.734: INFO: Created: latency-svc-fq2np
Dec 20 13:04:59.744: INFO: Got endpoints: latency-svc-fq2np [1.990674735s]
Dec 20 13:04:59.937: INFO: Created: latency-svc-dsh6m
Dec 20 13:04:59.941: INFO: Got endpoints: latency-svc-dsh6m [2.092533515s]
Dec 20 13:05:00.009: INFO: Created: latency-svc-fcwrv
Dec 20 13:05:00.009: INFO: Got endpoints: latency-svc-fcwrv [1.961296316s]
Dec 20 13:05:00.211: INFO: Created: latency-svc-9xx4p
Dec 20 13:05:00.271: INFO: Got endpoints: latency-svc-9xx4p [1.973104789s]
Dec 20 13:05:00.509: INFO: Created: latency-svc-xwfbl
Dec 20 13:05:00.512: INFO: Got endpoints: latency-svc-xwfbl [2.181082202s]
Dec 20 13:05:00.590: INFO: Created: latency-svc-jxtdk
Dec 20 13:05:00.795: INFO: Got endpoints: latency-svc-jxtdk [2.252740696s]
Dec 20 13:05:00.816: INFO: Created: latency-svc-pft46
Dec 20 13:05:00.817: INFO: Got endpoints: latency-svc-pft46 [2.210152073s]
Dec 20 13:05:01.216: INFO: Created: latency-svc-8vvk9
Dec 20 13:05:01.233: INFO: Got endpoints: latency-svc-8vvk9 [2.527883552s]
Dec 20 13:05:01.307: INFO: Created: latency-svc-sc8g8
Dec 20 13:05:01.465: INFO: Got endpoints: latency-svc-sc8g8 [2.599828467s]
Dec 20 13:05:01.480: INFO: Created: latency-svc-vz6bd
Dec 20 13:05:01.486: INFO: Got endpoints: latency-svc-vz6bd [2.612778421s]
Dec 20 13:05:01.547: INFO: Created: latency-svc-49v89
Dec 20 13:05:01.568: INFO: Got endpoints: latency-svc-49v89 [2.625902488s]
Dec 20 13:05:01.732: INFO: Created: latency-svc-hcbn2
Dec 20 13:05:01.744: INFO: Got endpoints: latency-svc-hcbn2 [2.521262961s]
Dec 20 13:05:01.795: INFO: Created: latency-svc-dcpb2
Dec 20 13:05:01.804: INFO: Got endpoints: latency-svc-dcpb2 [2.37474004s]
Dec 20 13:05:01.990: INFO: Created: latency-svc-mwhh5
Dec 20 13:05:02.010: INFO: Got endpoints: latency-svc-mwhh5 [2.520543709s]
Dec 20 13:05:02.112: INFO: Created: latency-svc-c5256
Dec 20 13:05:02.182: INFO: Got endpoints: latency-svc-c5256 [2.498934898s]
Dec 20 13:05:02.220: INFO: Created: latency-svc-dd9bm
Dec 20 13:05:02.239: INFO: Got endpoints: latency-svc-dd9bm [2.494600389s]
Dec 20 13:05:02.403: INFO: Created: latency-svc-6sqcp
Dec 20 13:05:02.404: INFO: Got endpoints: latency-svc-6sqcp [2.462552339s]
Dec 20 13:05:02.485: INFO: Created: latency-svc-8v668
Dec 20 13:05:02.583: INFO: Got endpoints: latency-svc-8v668 [2.573988789s]
Dec 20 13:05:02.655: INFO: Created: latency-svc-vvqs4
Dec 20 13:05:02.655: INFO: Got endpoints: latency-svc-vvqs4 [2.384000697s]
Dec 20 13:05:02.751: INFO: Created: latency-svc-s8k52
Dec 20 13:05:02.791: INFO: Got endpoints: latency-svc-s8k52 [2.278349317s]
Dec 20 13:05:02.965: INFO: Created: latency-svc-jpfzv
Dec 20 13:05:03.003: INFO: Got endpoints: latency-svc-jpfzv [2.207927283s]
Dec 20 13:05:03.037: INFO: Created: latency-svc-l6v86
Dec 20 13:05:03.045: INFO: Got endpoints: latency-svc-l6v86 [2.227372813s]
Dec 20 13:05:03.193: INFO: Created: latency-svc-5ws26
Dec 20 13:05:03.249: INFO: Got endpoints: latency-svc-5ws26 [2.014671012s]
Dec 20 13:05:03.252: INFO: Created: latency-svc-k7nbf
Dec 20 13:05:03.424: INFO: Got endpoints: latency-svc-k7nbf [1.95899197s]
Dec 20 13:05:03.432: INFO: Created: latency-svc-qgn52
Dec 20 13:05:03.453: INFO: Got endpoints: latency-svc-qgn52 [1.966500664s]
Dec 20 13:05:03.496: INFO: Created: latency-svc-vfmcf
Dec 20 13:05:03.509: INFO: Got endpoints: latency-svc-vfmcf [1.940903506s]
Dec 20 13:05:03.704: INFO: Created: latency-svc-4xg8n
Dec 20 13:05:03.704: INFO: Got endpoints: latency-svc-4xg8n [1.959387215s]
Dec 20 13:05:03.750: INFO: Created: latency-svc-v5hs2
Dec 20 13:05:03.755: INFO: Got endpoints: latency-svc-v5hs2 [1.950887063s]
Dec 20 13:05:03.961: INFO: Created: latency-svc-l4gkj
Dec 20 13:05:04.016: INFO: Got endpoints: latency-svc-l4gkj [2.005358964s]
Dec 20 13:05:04.039: INFO: Created: latency-svc-mdvv5
Dec 20 13:05:04.039: INFO: Got endpoints: latency-svc-mdvv5 [1.856536153s]
Dec 20 13:05:04.156: INFO: Created: latency-svc-qctcx
Dec 20 13:05:04.174: INFO: Got endpoints: latency-svc-qctcx [1.934767333s]
Dec 20 13:05:04.241: INFO: Created: latency-svc-hvtjw
Dec 20 13:05:04.312: INFO: Got endpoints: latency-svc-hvtjw [1.90833741s]
Dec 20 13:05:04.346: INFO: Created: latency-svc-27xbj
Dec 20 13:05:04.377: INFO: Got endpoints: latency-svc-27xbj [1.793699827s]
Dec 20 13:05:04.544: INFO: Created: latency-svc-tp49j
Dec 20 13:05:04.546: INFO: Got endpoints: latency-svc-tp49j [1.890987458s]
Dec 20 13:05:04.597: INFO: Created: latency-svc-wmsz6
Dec 20 13:05:04.682: INFO: Got endpoints: latency-svc-wmsz6 [1.891284523s]
Dec 20 13:05:04.730: INFO: Created: latency-svc-g58wd
Dec 20 13:05:04.738: INFO: Got endpoints: latency-svc-g58wd [1.734569915s]
Dec 20 13:05:04.777: INFO: Created: latency-svc-jcx8d
Dec 20 13:05:04.874: INFO: Got endpoints: latency-svc-jcx8d [1.829254337s]
Dec 20 13:05:04.891: INFO: Created: latency-svc-lbpht
Dec 20 13:05:04.904: INFO: Got endpoints: latency-svc-lbpht [1.654873823s]
Dec 20 13:05:04.957: INFO: Created: latency-svc-947ww
Dec 20 13:05:05.120: INFO: Got endpoints: latency-svc-947ww [1.695797387s]
Dec 20 13:05:05.142: INFO: Created: latency-svc-kzg89
Dec 20 13:05:05.142: INFO: Got endpoints: latency-svc-kzg89 [1.688279867s]
Dec 20 13:05:05.201: INFO: Created: latency-svc-l5dd8
Dec 20 13:05:05.312: INFO: Got endpoints: latency-svc-l5dd8 [1.802611549s]
Dec 20 13:05:05.320: INFO: Created: latency-svc-2q55c
Dec 20 13:05:05.334: INFO: Got endpoints: latency-svc-2q55c [1.629804855s]
Dec 20 13:05:05.382: INFO: Created: latency-svc-mqb4q
Dec 20 13:05:05.385: INFO: Got endpoints: latency-svc-mqb4q [1.629541554s]
Dec 20 13:05:05.535: INFO: Created: latency-svc-mlqp4
Dec 20 13:05:05.539: INFO: Got endpoints: latency-svc-mlqp4 [1.52145105s]
Dec 20 13:05:05.602: INFO: Created: latency-svc-s98h2
Dec 20 13:05:05.602: INFO: Got endpoints: latency-svc-s98h2 [1.562619289s]
Dec 20 13:05:05.715: INFO: Created: latency-svc-wd6x4
Dec 20 13:05:05.715: INFO: Got endpoints: latency-svc-wd6x4 [1.539898511s]
Dec 20 13:05:05.775: INFO: Created: latency-svc-gxv49
Dec 20 13:05:05.788: INFO: Got endpoints: latency-svc-gxv49 [1.475711838s]
Dec 20 13:05:05.983: INFO: Created: latency-svc-j4755
Dec 20 13:05:06.022: INFO: Got endpoints: latency-svc-j4755 [1.644386167s]
Dec 20 13:05:06.031: INFO: Created: latency-svc-9k449
Dec 20 13:05:06.046: INFO: Got endpoints: latency-svc-9k449 [1.49945284s]
Dec 20 13:05:06.206: INFO: Created: latency-svc-5pk5z
Dec 20 13:05:06.215: INFO: Got endpoints: latency-svc-5pk5z [1.532617673s]
Dec 20 13:05:06.240: INFO: Created: latency-svc-kwmt7
Dec 20 13:05:06.247: INFO: Got endpoints: latency-svc-kwmt7 [1.509027947s]
Dec 20 13:05:06.399: INFO: Created: latency-svc-h6gw7
Dec 20 13:05:06.404: INFO: Got endpoints: latency-svc-h6gw7 [1.529211593s]
Dec 20 13:05:06.478: INFO: Created: latency-svc-grq5q
Dec 20 13:05:06.478: INFO: Got endpoints: latency-svc-grq5q [1.574024822s]
Dec 20 13:05:06.580: INFO: Created: latency-svc-577tx
Dec 20 13:05:06.617: INFO: Got endpoints: latency-svc-577tx [1.495759596s]
Dec 20 13:05:06.628: INFO: Created: latency-svc-c5mlv
Dec 20 13:05:06.644: INFO: Got endpoints: latency-svc-c5mlv [1.501674122s]
Dec 20 13:05:06.712: INFO: Created: latency-svc-kkht7
Dec 20 13:05:06.720: INFO: Got endpoints: latency-svc-kkht7 [1.408154286s]
Dec 20 13:05:06.775: INFO: Created: latency-svc-wf877
Dec 20 13:05:06.795: INFO: Got endpoints: latency-svc-wf877 [1.460875476s]
Dec 20 13:05:06.885: INFO: Created: latency-svc-fcjgk
Dec 20 13:05:06.906: INFO: Got endpoints: latency-svc-fcjgk [1.520832238s]
Dec 20 13:05:06.948: INFO: Created: latency-svc-kmqwc
Dec 20 13:05:06.965: INFO: Got endpoints: latency-svc-kmqwc [1.426189256s]
Dec 20 13:05:07.083: INFO: Created: latency-svc-2l89b
Dec 20 13:05:07.114: INFO: Created: latency-svc-t6khs
Dec 20 13:05:07.115: INFO: Got endpoints: latency-svc-2l89b [1.513072401s]
Dec 20 13:05:07.132: INFO: Got endpoints: latency-svc-t6khs [1.417375087s]
Dec 20 13:05:07.218: INFO: Created: latency-svc-p4ftc
Dec 20 13:05:08.179: INFO: Got endpoints: latency-svc-p4ftc [2.391005739s]
Dec 20 13:05:08.181: INFO: Created: latency-svc-s2kw2
Dec 20 13:05:08.209: INFO: Got endpoints: latency-svc-s2kw2 [2.186846359s]
Dec 20 13:05:08.348: INFO: Created: latency-svc-pksnz
Dec 20 13:05:08.355: INFO: Got endpoints: latency-svc-pksnz [2.309464616s]
Dec 20 13:05:08.426: INFO: Created: latency-svc-h6cpl
Dec 20 13:05:08.446: INFO: Got endpoints: latency-svc-h6cpl [2.230131638s]
Dec 20 13:05:08.547: INFO: Created: latency-svc-kp4vv
Dec 20 13:05:08.566: INFO: Got endpoints: latency-svc-kp4vv [2.318207432s]
Dec 20 13:05:08.602: INFO: Created: latency-svc-h259w
Dec 20 13:05:08.705: INFO: Got endpoints: latency-svc-h259w [2.300830578s]
Dec 20 13:05:08.771: INFO: Created: latency-svc-lb8zr
Dec 20 13:05:08.772: INFO: Got endpoints: latency-svc-lb8zr [2.294512841s]
Dec 20 13:05:08.904: INFO: Created: latency-svc-hx4hq
Dec 20 13:05:08.964: INFO: Got endpoints: latency-svc-hx4hq [2.347646346s]
Dec 20 13:05:08.967: INFO: Created: latency-svc-zdckn
Dec 20 13:05:09.082: INFO: Got endpoints: latency-svc-zdckn [2.438818901s]
Dec 20 13:05:09.100: INFO: Created: latency-svc-s9nbl
Dec 20 13:05:09.112: INFO: Got endpoints: latency-svc-s9nbl [2.391463935s]
Dec 20 13:05:09.150: INFO: Created: latency-svc-8tvxq
Dec 20 13:05:09.163: INFO: Got endpoints: latency-svc-8tvxq [2.367523301s]
Dec 20 13:05:09.315: INFO: Created: latency-svc-css5s
Dec 20 13:05:09.323: INFO: Got endpoints: latency-svc-css5s [2.416353184s]
Dec 20 13:05:09.564: INFO: Created: latency-svc-b48rn
Dec 20 13:05:09.575: INFO: Got endpoints: latency-svc-b48rn [2.609851769s]
Dec 20 13:05:09.637: INFO: Created: latency-svc-qnv49
Dec 20 13:05:09.781: INFO: Got endpoints: latency-svc-qnv49 [2.665463583s]
Dec 20 13:05:09.836: INFO: Created: latency-svc-45lfb
Dec 20 13:05:09.844: INFO: Got endpoints: latency-svc-45lfb [2.712057857s]
Dec 20 13:05:10.067: INFO: Created: latency-svc-mxtlc
Dec 20 13:05:10.074: INFO: Got endpoints: latency-svc-mxtlc [1.893650559s]
Dec 20 13:05:10.118: INFO: Created: latency-svc-qqnmh
Dec 20 13:05:10.129: INFO: Got endpoints: latency-svc-qqnmh [1.919872102s]
Dec 20 13:05:10.272: INFO: Created: latency-svc-89c69
Dec 20 13:05:10.311: INFO: Got endpoints: latency-svc-89c69 [1.9558995s]
Dec 20 13:05:10.353: INFO: Created: latency-svc-8whkc
Dec 20 13:05:10.363: INFO: Got endpoints: latency-svc-8whkc [1.916811034s]
Dec 20 13:05:10.507: INFO: Created: latency-svc-t4p2v
Dec 20 13:05:10.511: INFO: Got endpoints: latency-svc-t4p2v [1.944109988s]
Dec 20 13:05:10.658: INFO: Created: latency-svc-d9f44
Dec 20 13:05:10.675: INFO: Got endpoints: latency-svc-d9f44 [1.969755749s]
Dec 20 13:05:10.755: INFO: Created: latency-svc-snbdw
Dec 20 13:05:10.834: INFO: Got endpoints: latency-svc-snbdw [2.061035624s]
Dec 20 13:05:10.867: INFO: Created: latency-svc-pc6l2
Dec 20 13:05:10.882: INFO: Got endpoints: latency-svc-pc6l2 [1.91758954s]
Dec 20 13:05:10.927: INFO: Created: latency-svc-6qpjn
Dec 20 13:05:11.068: INFO: Got endpoints: latency-svc-6qpjn [1.985491856s]
Dec 20 13:05:11.105: INFO: Created: latency-svc-9gjmh
Dec 20 13:05:11.116: INFO: Got endpoints: latency-svc-9gjmh [2.003945244s]
Dec 20 13:05:11.158: INFO: Created: latency-svc-cn9mg
Dec 20 13:05:11.285: INFO: Got endpoints: latency-svc-cn9mg [2.121769526s]
Dec 20 13:05:11.311: INFO: Created: latency-svc-xzgjj
Dec 20 13:05:11.319: INFO: Got endpoints: latency-svc-xzgjj [1.996076373s]
Dec 20 13:05:11.372: INFO: Created: latency-svc-cdjqh
Dec 20 13:05:11.387: INFO: Got endpoints: latency-svc-cdjqh [1.81140976s]
Dec 20 13:05:11.559: INFO: Created: latency-svc-brkl8
Dec 20 13:05:11.570: INFO: Got endpoints: latency-svc-brkl8 [1.789254458s]
Dec 20 13:05:11.632: INFO: Created: latency-svc-sqmkh
Dec 20 13:05:11.763: INFO: Got endpoints: latency-svc-sqmkh [1.918254184s]
Dec 20 13:05:11.782: INFO: Created: latency-svc-rrk59
Dec 20 13:05:11.798: INFO: Got endpoints: latency-svc-rrk59 [1.723723026s]
Dec 20 13:05:11.832: INFO: Created: latency-svc-25nnd
Dec 20 13:05:11.845: INFO: Got endpoints: latency-svc-25nnd [1.715503998s]
Dec 20 13:05:12.005: INFO: Created: latency-svc-76l84
Dec 20 13:05:12.022: INFO: Got endpoints: latency-svc-76l84 [1.710781645s]
Dec 20 13:05:12.142: INFO: Created: latency-svc-9ksn7
Dec 20 13:05:12.152: INFO: Got endpoints: latency-svc-9ksn7 [1.788995652s]
Dec 20 13:05:12.218: INFO: Created: latency-svc-sxrq5
Dec 20 13:05:12.222: INFO: Got endpoints: latency-svc-sxrq5 [1.710772472s]
Dec 20 13:05:12.315: INFO: Created: latency-svc-2dxzn
Dec 20 13:05:12.340: INFO: Got endpoints: latency-svc-2dxzn [1.664937421s]
Dec 20 13:05:12.388: INFO: Created: latency-svc-d2q8s
Dec 20 13:05:12.478: INFO: Got endpoints: latency-svc-d2q8s [1.644106792s]
Dec 20 13:05:12.482: INFO: Created: latency-svc-9rll4
Dec 20 13:05:12.510: INFO: Got endpoints: latency-svc-9rll4 [1.62753585s]
Dec 20 13:05:12.565: INFO: Created: latency-svc-2vwmq
Dec 20 13:05:12.654: INFO: Got endpoints: latency-svc-2vwmq [1.585392071s]
Dec 20 13:05:12.712: INFO: Created: latency-svc-6gxrr
Dec 20 13:05:12.731: INFO: Got endpoints: latency-svc-6gxrr [1.615116921s]
Dec 20 13:05:12.806: INFO: Created: latency-svc-8kzt8
Dec 20 13:05:12.818: INFO: Got endpoints: latency-svc-8kzt8 [1.532575278s]
Dec 20 13:05:12.860: INFO: Created: latency-svc-ggp6g
Dec 20 13:05:12.862: INFO: Got endpoints: latency-svc-ggp6g [1.542352884s]
Dec 20 13:05:12.992: INFO: Created: latency-svc-7pdt2
Dec 20 13:05:13.004: INFO: Got endpoints: latency-svc-7pdt2 [1.616881334s]
Dec 20 13:05:13.047: INFO: Created: latency-svc-hptpt
Dec 20 13:05:13.058: INFO: Got endpoints: latency-svc-hptpt [1.487456888s]
Dec 20 13:05:13.096: INFO: Created: latency-svc-z5bn7
Dec 20 13:05:13.166: INFO: Got endpoints: latency-svc-z5bn7 [1.402761113s]
Dec 20 13:05:13.183: INFO: Created: latency-svc-g9hqb
Dec 20 13:05:13.183: INFO: Got endpoints: latency-svc-g9hqb [1.385150862s]
Dec 20 13:05:13.231: INFO: Created: latency-svc-8dzk2
Dec 20 13:05:13.231: INFO: Got endpoints: latency-svc-8dzk2 [1.385622979s]
Dec 20 13:05:13.330: INFO: Created: latency-svc-4tlh4
Dec 20 13:05:13.335: INFO: Got endpoints: latency-svc-4tlh4 [1.312729663s]
Dec 20 13:05:13.399: INFO: Created: latency-svc-d6v8d
Dec 20 13:05:13.407: INFO: Got endpoints: latency-svc-d6v8d [1.255057162s]
Dec 20 13:05:13.559: INFO: Created: latency-svc-kh9lc
Dec 20 13:05:13.567: INFO: Got endpoints: latency-svc-kh9lc [1.344804122s]
Dec 20 13:05:13.642: INFO: Created: latency-svc-m26sk
Dec 20 13:05:13.650: INFO: Got endpoints: latency-svc-m26sk [1.309896898s]
Dec 20 13:05:13.797: INFO: Created: latency-svc-tb4hc
Dec 20 13:05:13.801: INFO: Got endpoints: latency-svc-tb4hc [1.322019245s]
Dec 20 13:05:13.864: INFO: Created: latency-svc-lf5rh
Dec 20 13:05:13.966: INFO: Got endpoints: latency-svc-lf5rh [1.45601505s]
Dec 20 13:05:13.999: INFO: Created: latency-svc-zxghc
Dec 20 13:05:14.006: INFO: Got endpoints: latency-svc-zxghc [1.351164287s]
Dec 20 13:05:14.006: INFO: Latencies: [207.928348ms 278.932504ms 425.441067ms 445.291108ms 555.815845ms 750.020232ms 762.341232ms 840.734074ms 1.062939466s 1.166365058s 1.255057162s 1.299785829s 1.309896898s 1.312729663s 1.322019245s 1.335835788s 1.344804122s 1.344916823s 1.351164287s 1.385150862s 1.385622979s 1.396394254s 1.402761113s 1.408154286s 1.417181269s 1.417375087s 1.426189256s 1.432362956s 1.442542552s 1.443246288s 1.453630216s 1.45601505s 1.458929781s 1.460875476s 1.46199995s 1.471900332s 1.475711838s 1.487456888s 1.495669657s 1.495759596s 1.49945284s 1.501674122s 1.501712851s 1.509027947s 1.513072401s 1.515817835s 1.519613207s 1.519869895s 1.520832238s 1.52145105s 1.521521659s 1.529211593s 1.532575278s 1.532617673s 1.53883051s 1.539898511s 1.542352884s 1.543466113s 1.549782286s 1.562619289s 1.563973305s 1.564883168s 1.565429836s 1.574024822s 1.575397794s 1.579393817s 1.581630905s 1.585392071s 1.602424979s 1.603678901s 1.603739212s 1.610065623s 1.61130613s 1.613234195s 1.615116921s 1.616881334s 1.62753585s 1.629541554s 1.629804855s 1.643183041s 1.64331946s 1.644106792s 1.644386167s 1.646275965s 1.654873823s 1.656532272s 1.664937421s 1.666399744s 1.677652396s 1.686611168s 1.688279867s 1.695797387s 1.699502279s 1.701750784s 1.701875846s 1.703377296s 1.710772472s 1.710781645s 1.711555042s 1.714500852s 1.715503998s 1.720897941s 1.723723026s 1.734569915s 1.753522958s 1.760924228s 1.774628763s 1.788995652s 1.789254458s 1.793699827s 1.798046498s 1.799615446s 1.802611549s 1.81140976s 1.81577257s 1.819943248s 1.829254337s 1.846064221s 1.856536153s 1.890987458s 1.891284523s 1.893650559s 1.908192861s 1.90833741s 1.916811034s 1.91758954s 1.918254184s 1.919872102s 1.925147368s 1.934767333s 1.940903506s 1.944109988s 1.950887063s 1.9558995s 1.95899197s 1.959387215s 1.961296316s 1.966500664s 1.969755749s 1.973104789s 1.985491856s 1.990674735s 1.996076373s 2.003945244s 2.005358964s 2.014671012s 2.03064372s 2.037894053s 2.061035624s 2.092533515s 2.106985613s 2.121769526s 2.181082202s 2.186846359s 2.207927283s 2.210152073s 2.227372813s 2.230131638s 2.252740696s 2.278349317s 2.294512841s 2.300830578s 2.309464616s 2.318207432s 2.347646346s 2.367523301s 2.37474004s 2.384000697s 2.391005739s 2.391463935s 2.416353184s 2.438818901s 2.462552339s 2.494600389s 2.498934898s 2.520543709s 2.521262961s 2.527883552s 2.573988789s 2.599828467s 2.609851769s 2.612778421s 2.625902488s 2.665463583s 2.712057857s 3.835330054s 4.072955511s 4.121114591s 4.255230611s 4.258347602s 4.266606506s 4.317545579s 4.342338991s 4.371970658s 4.372462949s 4.435604788s 4.438475568s 4.461226162s 4.462984244s 4.472406843s]
Dec 20 13:05:14.007: INFO: 50 %ile: 1.715503998s
Dec 20 13:05:14.007: INFO: 90 %ile: 2.609851769s
Dec 20 13:05:14.007: INFO: 99 %ile: 4.462984244s
Dec 20 13:05:14.007: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:05:14.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3455" for this suite.
Dec 20 13:05:58.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:05:58.201: INFO: namespace svc-latency-3455 deletion completed in 44.187402768s

• [SLOW TEST:83.235 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
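For context on the numbers above: the test creates 200 Services that all select the single svc-latency-rc pod and measures the time from Service creation until the endpoints controller publishes a matching Endpoints object; the percentiles are computed over those 200 samples. A rough manual probe of the same control-plane path, assuming an RC like the test's exists (the service name below is invented):

    kubectl expose rc svc-latency-rc --name=probe-svc --port=80 --namespace=svc-latency-3455
    # Poll until an address appears; the elapsed wall time approximates one latency sample
    kubectl get endpoints probe-svc --namespace=svc-latency-3455 \
        -o template --template='{{range .subsets}}{{range .addresses}}{{.ip}} {{end}}{{end}}'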
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:05:58.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 13:05:58.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5294'
Dec 20 13:05:58.585: INFO: stderr: ""
Dec 20 13:05:58.585: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 20 13:05:58.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5294'
Dec 20 13:06:06.222: INFO: stderr: ""
Dec 20 13:06:06.222: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:06:06.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5294" for this suite.
Dec 20 13:06:14.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:06:14.439: INFO: namespace kubectl-5294 deletion completed in 8.208256315s

• [SLOW TEST:16.237 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
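The test above depends on --restart=Never together with the run-pod/v1 generator producing a bare Pod rather than a Deployment or Job. With the v1.15 kubectl used in this run (the --generator flag was removed in later kubectl releases), the equivalent is:

    kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
        --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5294
    # Verify a Pod, not a Deployment, was created
    kubectl get pod e2e-test-nginx-pod --namespace=kubectl-5294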
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:06:14.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-8af4afee-aabc-4490-b8f3-712584eefd9f
STEP: Creating a pod to test consume secrets
Dec 20 13:06:14.567: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-15638f96-4eec-4b6a-8e87-0b0a778083fb" in namespace "projected-6790" to be "success or failure"
Dec 20 13:06:14.581: INFO: Pod "pod-projected-secrets-15638f96-4eec-4b6a-8e87-0b0a778083fb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.589233ms
Dec 20 13:06:17.769: INFO: Pod "pod-projected-secrets-15638f96-4eec-4b6a-8e87-0b0a778083fb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.202266682s
Dec 20 13:06:19.778: INFO: Pod "pod-projected-secrets-15638f96-4eec-4b6a-8e87-0b0a778083fb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.211234181s
Dec 20 13:06:21.787: INFO: Pod "pod-projected-secrets-15638f96-4eec-4b6a-8e87-0b0a778083fb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.219379197s
Dec 20 13:06:23.804: INFO: Pod "pod-projected-secrets-15638f96-4eec-4b6a-8e87-0b0a778083fb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.236831468s
Dec 20 13:06:25.811: INFO: Pod "pod-projected-secrets-15638f96-4eec-4b6a-8e87-0b0a778083fb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.244194906s
Dec 20 13:06:27.825: INFO: Pod "pod-projected-secrets-15638f96-4eec-4b6a-8e87-0b0a778083fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.258243529s
STEP: Saw pod success
Dec 20 13:06:27.825: INFO: Pod "pod-projected-secrets-15638f96-4eec-4b6a-8e87-0b0a778083fb" satisfied condition "success or failure"
Dec 20 13:06:27.839: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-15638f96-4eec-4b6a-8e87-0b0a778083fb container projected-secret-volume-test:
STEP: delete the pod
Dec 20 13:06:28.109: INFO: Waiting for pod pod-projected-secrets-15638f96-4eec-4b6a-8e87-0b0a778083fb to disappear
Dec 20 13:06:28.117: INFO: Pod pod-projected-secrets-15638f96-4eec-4b6a-8e87-0b0a778083fb no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:06:28.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6790" for this suite.
Dec 20 13:06:34.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:06:34.313: INFO: namespace projected-6790 deletion completed in 6.189415632s

• [SLOW TEST:19.873 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
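The projected-secret test follows the suite's usual "success or failure" pattern: a pod mounts the secret through a projected volume, the test container reads the file back, and the pod phase is polled until Succeeded. A minimal sketch of the same mount, with all names invented for illustration:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-reader
    spec:
      restartPolicy: Never
      volumes:
      - name: s
        projected:
          sources:
          - secret:
              name: demo-secret
      containers:
      - name: reader
        image: busybox
        command: ["cat", "/etc/s/data-1"]
        volumeMounts:
        - name: s
          mountPath: /etc/s
    EOF
    kubectl logs secret-reader    # prints value-1 once the pod has run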
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition
  creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:06:34.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 20 13:06:34.392: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:06:35.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1649" for this suite.
Dec 20 13:06:41.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:06:41.805: INFO: namespace custom-resource-definition-1649 deletion completed in 6.275635905s

• [SLOW TEST:7.492 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
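The CRD test registers a definition against the API server and deletes it again. A hand-rolled equivalent using the apiextensions.k8s.io/v1beta1 API that this v1.15 server generation accepts (group and names below are made up):

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com
    spec:
      group: example.com
      version: v1
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
    EOF
    kubectl delete crd foos.example.com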
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:06:41.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 20 13:06:41.955: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 20 13:06:42.039: INFO: Waiting for terminating namespaces to be deleted...
Dec 20 13:06:42.044: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Dec 20 13:06:42.062: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 20 13:06:42.062: INFO: Container weave ready: true, restart count 0
Dec 20 13:06:42.062: INFO: Container weave-npc ready: true, restart count 0
Dec 20 13:06:42.062: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 20 13:06:42.062: INFO: Container kube-proxy ready: true, restart count 0
Dec 20 13:06:42.062: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Dec 20 13:06:42.092: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 20 13:06:42.093: INFO: Container kube-scheduler ready: true, restart count 7
Dec 20 13:06:42.093: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 20 13:06:42.093: INFO: Container coredns ready: true, restart count 0
Dec 20 13:06:42.093: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 20 13:06:42.093: INFO: Container coredns ready: true, restart count 0
Dec 20 13:06:42.093: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 20 13:06:42.093: INFO: Container etcd ready: true, restart count 0
Dec 20 13:06:42.093: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 20 13:06:42.093: INFO: Container weave ready: true, restart count 0
Dec 20 13:06:42.093: INFO: Container weave-npc ready: true, restart count 0
Dec 20 13:06:42.093: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 20 13:06:42.093: INFO: Container kube-controller-manager ready: true, restart count 10
Dec 20 13:06:42.093: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 20 13:06:42.093: INFO: Container kube-proxy ready: true, restart count 0
Dec 20 13:06:42.093: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 20 13:06:42.093: INFO: Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-07d70915-3834-47be-a3c3-eefc47cc251e 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-07d70915-3834-47be-a3c3-eefc47cc251e off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-07d70915-3834-47be-a3c3-eefc47cc251e
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:07:04.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2657" for this suite.
Dec 20 13:07:20.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:07:20.763: INFO: namespace sched-pred-2657 deletion completed in 16.160693347s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:38.956 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
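The NodeSelector test works by stamping a node with a unique label (the random kubernetes.io/e2e-... key with value 42 above) and relaunching the pod with a matching nodeSelector. The same steps by hand, with an invented label key and pod name:

    kubectl label node iruya-node kubernetes.io/e2e-example=42
    kubectl run ns-demo --restart=Never --generator=run-pod/v1 --image=nginx \
        --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"kubernetes.io/e2e-example":"42"}}}'
    kubectl get pod ns-demo -o wide    # should land on iruya-node
    kubectl label node iruya-node kubernetes.io/e2e-example-    # trailing dash removes the label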
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:07:20.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 20 13:07:20.922: INFO: Waiting up to 5m0s for pod "pod-382ac57a-e7e4-44a5-94be-75a0124cdb4c" in namespace "emptydir-5249" to be "success or failure"
Dec 20 13:07:20.945: INFO: Pod "pod-382ac57a-e7e4-44a5-94be-75a0124cdb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.272433ms
Dec 20 13:07:22.956: INFO: Pod "pod-382ac57a-e7e4-44a5-94be-75a0124cdb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033429866s
Dec 20 13:07:24.973: INFO: Pod "pod-382ac57a-e7e4-44a5-94be-75a0124cdb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050967782s
Dec 20 13:07:26.986: INFO: Pod "pod-382ac57a-e7e4-44a5-94be-75a0124cdb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063813956s
Dec 20 13:07:28.996: INFO: Pod "pod-382ac57a-e7e4-44a5-94be-75a0124cdb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073594102s
Dec 20 13:07:31.004: INFO: Pod "pod-382ac57a-e7e4-44a5-94be-75a0124cdb4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082138472s
STEP: Saw pod success
Dec 20 13:07:31.004: INFO: Pod "pod-382ac57a-e7e4-44a5-94be-75a0124cdb4c" satisfied condition "success or failure"
Dec 20 13:07:31.028: INFO: Trying to get logs from node iruya-node pod pod-382ac57a-e7e4-44a5-94be-75a0124cdb4c container test-container:
STEP: delete the pod
Dec 20 13:07:31.163: INFO: Waiting for pod pod-382ac57a-e7e4-44a5-94be-75a0124cdb4c to disappear
Dec 20 13:07:31.172: INFO: Pod pod-382ac57a-e7e4-44a5-94be-75a0124cdb4c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:07:31.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5249" for this suite.
Dec 20 13:07:37.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:07:37.352: INFO: namespace emptydir-5249 deletion completed in 6.173039797s

• [SLOW TEST:16.589 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
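"(non-root,0666,default)" in the test name means: run as a non-root UID, create a file with mode 0666, in an emptyDir backed by the default medium (node disk rather than tmpfs). A sketch of the same check with an arbitrary pod name and UID; the kubelet creates the emptyDir world-writable, which is what lets the non-root user write into it:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001
      volumes:
      - name: d
        emptyDir: {}
      containers:
      - name: t
        image: busybox
        command: ["sh", "-c", "echo hi > /mnt/d/f && chmod 0666 /mnt/d/f && stat -c '%a %u' /mnt/d/f"]
        volumeMounts:
        - name: d
          mountPath: /mnt/d
    EOF
    kubectl logs emptydir-mode-demo    # expect: 666 1001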
Dec 20 13:08:18.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:08:18.198: INFO: namespace configmap-7277 deletion completed in 22.228757613s • [SLOW TEST:40.844 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:08:18.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 20 13:08:18.350: INFO: Waiting up to 5m0s for pod "downward-api-98550bfd-2aab-48ac-a617-fe4b7c251951" in namespace "downward-api-6601" to be "success or failure" Dec 20 13:08:18.363: INFO: Pod "downward-api-98550bfd-2aab-48ac-a617-fe4b7c251951": Phase="Pending", Reason="", readiness=false. Elapsed: 12.887459ms Dec 20 13:08:20.373: INFO: Pod "downward-api-98550bfd-2aab-48ac-a617-fe4b7c251951": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022543674s Dec 20 13:08:22.383: INFO: Pod "downward-api-98550bfd-2aab-48ac-a617-fe4b7c251951": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033080346s Dec 20 13:08:24.395: INFO: Pod "downward-api-98550bfd-2aab-48ac-a617-fe4b7c251951": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04481766s Dec 20 13:08:26.412: INFO: Pod "downward-api-98550bfd-2aab-48ac-a617-fe4b7c251951": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062151235s Dec 20 13:08:28.431: INFO: Pod "downward-api-98550bfd-2aab-48ac-a617-fe4b7c251951": Phase="Pending", Reason="", readiness=false. Elapsed: 10.080425298s Dec 20 13:08:30.440: INFO: Pod "downward-api-98550bfd-2aab-48ac-a617-fe4b7c251951": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.089410937s STEP: Saw pod success Dec 20 13:08:30.440: INFO: Pod "downward-api-98550bfd-2aab-48ac-a617-fe4b7c251951" satisfied condition "success or failure" Dec 20 13:08:30.444: INFO: Trying to get logs from node iruya-node pod downward-api-98550bfd-2aab-48ac-a617-fe4b7c251951 container dapi-container: STEP: delete the pod Dec 20 13:08:30.666: INFO: Waiting for pod downward-api-98550bfd-2aab-48ac-a617-fe4b7c251951 to disappear Dec 20 13:08:30.675: INFO: Pod downward-api-98550bfd-2aab-48ac-a617-fe4b7c251951 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:08:30.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6601" for this suite. 
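The downward API wiring this spec verifies is a fieldRef on status.hostIP, which the kubelet resolves at container start. A minimal sketch in the Python client, with illustrative names:

from kubernetes import client

env = [client.V1EnvVar(
    name="HOST_IP",
    value_from=client.V1EnvVarSource(
        field_ref=client.V1ObjectFieldSelector(field_path="status.hostIP")),
)]
container = client.V1Container(
    name="dapi-container",          # mirrors the container name in the log
    image="busybox",                # illustrative image
    command=["sh", "-c", "echo HOST_IP=$HOST_IP"],
    env=env,
)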
Dec 20 13:08:38.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:08:38.902: INFO: namespace downward-api-6601 deletion completed in 8.217048827s
• [SLOW TEST:20.704 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:08:38.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 20 13:08:40.247: INFO: Pod name wrapped-volume-race-a0220919-2b2d-4444-b6d6-84f88d098b43: Found 0 pods out of 5
Dec 20 13:08:45.265: INFO: Pod name wrapped-volume-race-a0220919-2b2d-4444-b6d6-84f88d098b43: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a0220919-2b2d-4444-b6d6-84f88d098b43 in namespace emptydir-wrapper-9357, will wait for the garbage collector to delete the pods
Dec 20 13:09:17.374: INFO: Deleting ReplicationController wrapped-volume-race-a0220919-2b2d-4444-b6d6-84f88d098b43 took: 19.908468ms
Dec 20 13:09:17.775: INFO: Terminating ReplicationController wrapped-volume-race-a0220919-2b2d-4444-b6d6-84f88d098b43 pods took: 400.60894ms
STEP: Creating RC which spawns configmap-volume pods
Dec 20 13:10:07.415: INFO: Pod name wrapped-volume-race-b613acbd-3a32-4a25-8c82-b6cfc2025f64: Found 0 pods out of 5
Dec 20 13:10:12.438: INFO: Pod name wrapped-volume-race-b613acbd-3a32-4a25-8c82-b6cfc2025f64: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b613acbd-3a32-4a25-8c82-b6cfc2025f64 in namespace emptydir-wrapper-9357, will wait for the garbage collector to delete the pods
Dec 20 13:10:46.603: INFO: Deleting ReplicationController wrapped-volume-race-b613acbd-3a32-4a25-8c82-b6cfc2025f64 took: 34.81025ms
Dec 20 13:10:47.005: INFO: Terminating ReplicationController wrapped-volume-race-b613acbd-3a32-4a25-8c82-b6cfc2025f64 pods took: 401.608912ms
STEP: Creating RC which spawns configmap-volume pods
Dec 20 13:11:31.822: INFO: Pod name wrapped-volume-race-5ecdc5cb-dba9-47c7-82ce-9910d99371ff: Found 0 pods out of 5
Dec 20 13:11:36.895: INFO: Pod name wrapped-volume-race-5ecdc5cb-dba9-47c7-82ce-9910d99371ff: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5ecdc5cb-dba9-47c7-82ce-9910d99371ff in namespace emptydir-wrapper-9357, will wait for the garbage collector to delete the pods
Dec 20 13:12:19.110: INFO: Deleting ReplicationController wrapped-volume-race-5ecdc5cb-dba9-47c7-82ce-9910d99371ff took: 10.07851ms
Dec 20 13:12:19.511: INFO: Terminating ReplicationController wrapped-volume-race-5ecdc5cb-dba9-47c7-82ce-9910d99371ff pods took: 401.305194ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:13:17.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9357" for this suite.
Dec 20 13:13:27.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:13:27.856: INFO: namespace emptydir-wrapper-9357 deletion completed in 10.195559209s
• [SLOW TEST:288.953 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:13:27.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-64618bc1-1df6-4027-b01b-ad0da768b410
STEP: Creating a pod to test consume configMaps
Dec 20 13:13:28.016: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3" in namespace "projected-3808" to be "success or failure"
Dec 20 13:13:28.034: INFO: Pod "pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.271123ms
Dec 20 13:13:30.043: INFO: Pod "pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026592315s
Dec 20 13:13:32.049: INFO: Pod "pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032275364s
Dec 20 13:13:34.068: INFO: Pod "pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051668232s
Dec 20 13:13:37.064: INFO: Pod "pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.047417244s
Dec 20 13:13:39.128: INFO: Pod "pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.111703917s
Dec 20 13:13:41.136: INFO: Pod "pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.119893932s
Dec 20 13:13:43.150: INFO: Pod "pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 15.133634771s
Dec 20 13:13:45.161: INFO: Pod "pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.14489577s
STEP: Saw pod success
Dec 20 13:13:45.162: INFO: Pod "pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3" satisfied condition "success or failure"
Dec 20 13:13:45.166: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3 container projected-configmap-volume-test:
STEP: delete the pod
Dec 20 13:13:45.292: INFO: Waiting for pod pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3 to disappear
Dec 20 13:13:45.306: INFO: Pod pod-projected-configmaps-e2d4ef44-844a-4a67-a8ad-61dd97719ea3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:13:45.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3808" for this suite.
Dec 20 13:13:51.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:13:51.544: INFO: namespace projected-3808 deletion completed in 6.230535681s
• [SLOW TEST:23.687 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:13:51.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:13:59.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4971" for this suite.
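The ordering guarantee this Watchers spec checks can be probed from any client: watches started from the same resourceVersion must deliver the same events in the same order. A rough analogue in the Python client, replaying the same window twice sequentially rather than with truly concurrent watches (a simplification), and assuming something in the namespace is producing ConfigMap events while it runs:

from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()
ns = "default"  # illustrative namespace

rv = v1.list_namespaced_config_map(ns).metadata.resource_version

def versions(limit):
    # Collect the resourceVersions of the first `limit` events after rv.
    w = watch.Watch()
    seen = []
    for ev in w.stream(v1.list_namespaced_config_map, ns,
                       resource_version=rv, timeout_seconds=5):
        seen.append(ev["object"].metadata.resource_version)
        if len(seen) == limit:
            w.stop()
    return seen

assert versions(3) == versions(3)  # same starting point, same order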
Dec 20 13:14:05.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:14:05.378: INFO: namespace watch-4971 deletion completed in 6.164965056s
• [SLOW TEST:13.832 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:14:05.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Dec 20 13:14:05.474: INFO: Waiting up to 5m0s for pod "client-containers-e4f0d692-f5a3-47f4-971f-e42a661d1420" in namespace "containers-3901" to be "success or failure"
Dec 20 13:14:05.496: INFO: Pod "client-containers-e4f0d692-f5a3-47f4-971f-e42a661d1420": Phase="Pending", Reason="", readiness=false. Elapsed: 22.230895ms
Dec 20 13:14:07.506: INFO: Pod "client-containers-e4f0d692-f5a3-47f4-971f-e42a661d1420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032093379s
Dec 20 13:14:09.514: INFO: Pod "client-containers-e4f0d692-f5a3-47f4-971f-e42a661d1420": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040219767s
Dec 20 13:14:11.521: INFO: Pod "client-containers-e4f0d692-f5a3-47f4-971f-e42a661d1420": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046980714s
Dec 20 13:14:13.536: INFO: Pod "client-containers-e4f0d692-f5a3-47f4-971f-e42a661d1420": Phase="Running", Reason="", readiness=true. Elapsed: 8.061831443s
Dec 20 13:14:15.612: INFO: Pod "client-containers-e4f0d692-f5a3-47f4-971f-e42a661d1420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.138153324s
STEP: Saw pod success
Dec 20 13:14:15.612: INFO: Pod "client-containers-e4f0d692-f5a3-47f4-971f-e42a661d1420" satisfied condition "success or failure"
Dec 20 13:14:15.616: INFO: Trying to get logs from node iruya-node pod client-containers-e4f0d692-f5a3-47f4-971f-e42a661d1420 container test-container:
STEP: delete the pod
Dec 20 13:14:15.762: INFO: Waiting for pod client-containers-e4f0d692-f5a3-47f4-971f-e42a661d1420 to disappear
Dec 20 13:14:15.773: INFO: Pod client-containers-e4f0d692-f5a3-47f4-971f-e42a661d1420 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:14:15.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3901" for this suite.
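The "override the image's default arguments (docker cmd)" behavior above maps onto the container spec's args field; a minimal sketch with illustrative names:

from kubernetes import client

# Supplying `args` replaces the CMD baked into the image while leaving its
# ENTRYPOINT alone; supplying `command` would replace the ENTRYPOINT instead.
container = client.V1Container(
    name="test-container",
    image="busybox",              # illustrative image
    args=["echo", "overridden"],  # the pod spec's `args`, i.e. docker CMD
)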
Dec 20 13:14:21.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:14:22.065: INFO: namespace containers-3901 deletion completed in 6.287822189s
• [SLOW TEST:16.686 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:14:22.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 20 13:14:22.470: INFO: Number of nodes with available pods: 0
Dec 20 13:14:22.470: INFO: Node iruya-node is running more than one daemon pod
Dec 20 13:14:24.057: INFO: Number of nodes with available pods: 0
Dec 20 13:14:24.057: INFO: Node iruya-node is running more than one daemon pod
Dec 20 13:14:24.537: INFO: Number of nodes with available pods: 0
Dec 20 13:14:24.537: INFO: Node iruya-node is running more than one daemon pod
Dec 20 13:14:25.488: INFO: Number of nodes with available pods: 0
Dec 20 13:14:25.488: INFO: Node iruya-node is running more than one daemon pod
Dec 20 13:14:26.498: INFO: Number of nodes with available pods: 0
Dec 20 13:14:26.498: INFO: Node iruya-node is running more than one daemon pod
Dec 20 13:14:27.558: INFO: Number of nodes with available pods: 0
Dec 20 13:14:27.558: INFO: Node iruya-node is running more than one daemon pod
Dec 20 13:14:29.303: INFO: Number of nodes with available pods: 0
Dec 20 13:14:29.303: INFO: Node iruya-node is running more than one daemon pod
Dec 20 13:14:29.653: INFO: Number of nodes with available pods: 0
Dec 20 13:14:29.653: INFO: Node iruya-node is running more than one daemon pod
Dec 20 13:14:30.518: INFO: Number of nodes with available pods: 0
Dec 20 13:14:30.518: INFO: Node iruya-node is running more than one daemon pod
Dec 20 13:14:31.556: INFO: Number of nodes with available pods: 0
Dec 20 13:14:31.556: INFO: Node iruya-node is running more than one daemon pod
Dec 20 13:14:32.491: INFO: Number of nodes with available pods: 0
Dec 20 13:14:32.491: INFO: Node iruya-node is running more than one daemon pod
Dec 20 13:14:33.498: INFO: Number of nodes with available pods: 2
Dec 20 13:14:33.498: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 20 13:14:33.754: INFO: Number of nodes with available pods: 1
Dec 20 13:14:33.754: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:34.782: INFO: Number of nodes with available pods: 1
Dec 20 13:14:34.782: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:35.782: INFO: Number of nodes with available pods: 1
Dec 20 13:14:35.783: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:36.766: INFO: Number of nodes with available pods: 1
Dec 20 13:14:36.766: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:37.782: INFO: Number of nodes with available pods: 1
Dec 20 13:14:37.782: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:38.769: INFO: Number of nodes with available pods: 1
Dec 20 13:14:38.769: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:39.781: INFO: Number of nodes with available pods: 1
Dec 20 13:14:39.781: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:40.786: INFO: Number of nodes with available pods: 1
Dec 20 13:14:40.786: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:41.969: INFO: Number of nodes with available pods: 1
Dec 20 13:14:41.969: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:42.768: INFO: Number of nodes with available pods: 1
Dec 20 13:14:42.768: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:43.781: INFO: Number of nodes with available pods: 1
Dec 20 13:14:43.781: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:44.878: INFO: Number of nodes with available pods: 1
Dec 20 13:14:44.878: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:45.986: INFO: Number of nodes with available pods: 1
Dec 20 13:14:45.986: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:46.773: INFO: Number of nodes with available pods: 1
Dec 20 13:14:46.773: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 13:14:47.785: INFO: Number of nodes with available pods: 2
Dec 20 13:14:47.785: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9673, will wait for the garbage collector to delete the pods
Dec 20 13:14:47.887: INFO: Deleting DaemonSet.extensions daemon-set took: 32.536697ms
Dec 20 13:14:48.288: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.497701ms
Dec 20 13:14:56.037: INFO: Number of nodes with available pods: 0
Dec 20 13:14:56.038: INFO: Number of running nodes: 0, number of available pods: 0
Dec 20 13:14:56.046: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9673/daemonsets","resourceVersion":"17389239"},"items":null}
Dec 20 13:14:56.049: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9673/pods","resourceVersion":"17389239"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:14:56.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9673" for this suite.
Dec 20 13:15:04.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:15:04.171: INFO: namespace daemonsets-9673 deletion completed in 8.109181004s
• [SLOW TEST:42.105 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:15:04.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:15:12.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6448" for this suite.
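What this Kubelet spec asserts is that a container whose command always fails ends up with a populated terminated state. A sketch of reading that state with the Python client, all names illustrative:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod("some-always-failing-pod", "default")  # illustrative
for s in (pod.status.container_statuses or []):
    if s.state.terminated:
        # The spec checks these fields are set, e.g. reason "Error" with a
        # nonzero exit code.
        print(s.state.terminated.reason, s.state.terminated.exit_code)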
Dec 20 13:15:18.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:15:18.598: INFO: namespace kubelet-test-6448 deletion completed in 6.222031292s
• [SLOW TEST:14.427 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:15:18.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-2944d5fb-2e0c-44f9-b767-93f5e73832d8
STEP: Creating a pod to test consume configMaps
Dec 20 13:15:18.744: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dc670c43-4bb5-4231-9157-690a99a7fe14" in namespace "projected-2638" to be "success or failure"
Dec 20 13:15:18.757: INFO: Pod "pod-projected-configmaps-dc670c43-4bb5-4231-9157-690a99a7fe14": Phase="Pending", Reason="", readiness=false. Elapsed: 13.125095ms
Dec 20 13:15:20.767: INFO: Pod "pod-projected-configmaps-dc670c43-4bb5-4231-9157-690a99a7fe14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023174564s
Dec 20 13:15:22.777: INFO: Pod "pod-projected-configmaps-dc670c43-4bb5-4231-9157-690a99a7fe14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033240086s
Dec 20 13:15:24.783: INFO: Pod "pod-projected-configmaps-dc670c43-4bb5-4231-9157-690a99a7fe14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039539254s
Dec 20 13:15:26.795: INFO: Pod "pod-projected-configmaps-dc670c43-4bb5-4231-9157-690a99a7fe14": Phase="Running", Reason="", readiness=true. Elapsed: 8.050738589s
Dec 20 13:15:28.803: INFO: Pod "pod-projected-configmaps-dc670c43-4bb5-4231-9157-690a99a7fe14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059072105s
STEP: Saw pod success
Dec 20 13:15:28.803: INFO: Pod "pod-projected-configmaps-dc670c43-4bb5-4231-9157-690a99a7fe14" satisfied condition "success or failure"
Dec 20 13:15:28.806: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-dc670c43-4bb5-4231-9157-690a99a7fe14 container projected-configmap-volume-test:
STEP: delete the pod
Dec 20 13:15:29.894: INFO: Waiting for pod pod-projected-configmaps-dc670c43-4bb5-4231-9157-690a99a7fe14 to disappear
Dec 20 13:15:29.915: INFO: Pod pod-projected-configmaps-dc670c43-4bb5-4231-9157-690a99a7fe14 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:15:29.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2638" for this suite.
Dec 20 13:15:36.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:15:36.171: INFO: namespace projected-2638 deletion completed in 6.216006842s
• [SLOW TEST:17.571 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:15:36.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 20 13:15:36.336: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 20 13:15:41.350: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:15:41.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3395" for this suite.
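The release mechanism above boils down to relabeling a pod so it stops matching the ReplicationController's selector; the controller then orphans it and spawns a replacement. A sketch with the Python client, pod name and label values invented:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Strategic-merge patch: once the label no longer matches the RC's selector,
# the RC releases (orphans) this pod and creates a new one to restore replicas.
patch = {"metadata": {"labels": {"name": "not-pod-release"}}}  # illustrative
v1.patch_namespaced_pod("pod-release-xxxxx", "default", patch)  # illustrative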
Dec 20 13:15:47.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:15:47.737: INFO: namespace replication-controller-3395 deletion completed in 6.190520208s
• [SLOW TEST:11.566 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:15:47.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 20 13:16:00.785: INFO: Successfully updated pod "annotationupdate63b2acff-14e6-451f-8991-5016c0c14962"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:16:02.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6222" for this suite.
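The projected downward API volume behind this spec exposes pod metadata as files that the kubelet rewrites when the metadata changes. A sketch of that volume, written from memory of the Python client's model names (V1ProjectedVolumeSource and friends), so treat the exact classes as an assumption:

from kubernetes import client

volume = client.V1Volume(
    name="podinfo",  # illustrative
    projected=client.V1ProjectedVolumeSource(sources=[
        client.V1VolumeProjection(
            downward_api=client.V1DownwardAPIProjection(items=[
                client.V1DownwardAPIVolumeFile(
                    # The kubelet refreshes this file when pod annotations
                    # are modified, which is what the spec waits to observe.
                    path="annotations",
                    field_ref=client.V1ObjectFieldSelector(
                        field_path="metadata.annotations"),
                ),
            ]),
        ),
    ]),
)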
Dec 20 13:16:25.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:16:25.552: INFO: namespace projected-6222 deletion completed in 22.649423377s
• [SLOW TEST:37.815 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:16:25.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 13:16:25.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5245'
Dec 20 13:16:27.695: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 20 13:16:27.695: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 20 13:16:27.714: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 20 13:16:27.735: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 20 13:16:27.793: INFO: scanned /root for discovery docs:
Dec 20 13:16:27.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5245'
Dec 20 13:16:51.751: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 20 13:16:51.752: INFO: stdout: "Created e2e-test-nginx-rc-ad8e5492f192b67db97fb76af0551bfb\nScaling up e2e-test-nginx-rc-ad8e5492f192b67db97fb76af0551bfb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ad8e5492f192b67db97fb76af0551bfb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ad8e5492f192b67db97fb76af0551bfb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 20 13:16:51.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5245'
Dec 20 13:16:51.992: INFO: stderr: ""
Dec 20 13:16:51.992: INFO: stdout: "e2e-test-nginx-rc-ad8e5492f192b67db97fb76af0551bfb-hd4kt e2e-test-nginx-rc-rsk84 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 20 13:16:56.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5245'
Dec 20 13:16:57.128: INFO: stderr: ""
Dec 20 13:16:57.129: INFO: stdout: "e2e-test-nginx-rc-ad8e5492f192b67db97fb76af0551bfb-hd4kt "
Dec 20 13:16:57.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ad8e5492f192b67db97fb76af0551bfb-hd4kt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5245'
Dec 20 13:16:57.203: INFO: stderr: ""
Dec 20 13:16:57.204: INFO: stdout: "true"
Dec 20 13:16:57.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ad8e5492f192b67db97fb76af0551bfb-hd4kt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5245'
Dec 20 13:16:57.292: INFO: stderr: ""
Dec 20 13:16:57.292: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 20 13:16:57.292: INFO: e2e-test-nginx-rc-ad8e5492f192b67db97fb76af0551bfb-hd4kt is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Dec 20 13:16:57.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5245'
Dec 20 13:16:57.378: INFO: stderr: ""
Dec 20 13:16:57.378: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:16:57.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5245" for this suite.
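Stripped of the templating checks, the spec's core flow is the two kubectl invocations logged above. A sketch that replays them via subprocess; the command lines are taken verbatim from the log, while wrapping them in Python is purely illustrative:

import subprocess

KUBECTL = ["/usr/local/bin/kubectl", "--kubeconfig=/root/.kube/config"]

# Create the RC with the (long-deprecated) run/v1 generator, as the spec does.
subprocess.run(KUBECTL + ["run", "e2e-test-nginx-rc",
                          "--image=docker.io/library/nginx:1.14-alpine",
                          "--generator=run/v1", "--namespace=kubectl-5245"],
               check=True)

# Rolling-update to the same image: a new RC is created, scaled up while the
# old one scales down, then renamed back over the original.
subprocess.run(KUBECTL + ["rolling-update", "e2e-test-nginx-rc",
                          "--update-period=1s",
                          "--image=docker.io/library/nginx:1.14-alpine",
                          "--image-pull-policy=IfNotPresent",
                          "--namespace=kubectl-5245"],
               check=True)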
Dec 20 13:17:19.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:17:19.569: INFO: namespace kubectl-5245 deletion completed in 22.158883995s
• [SLOW TEST:54.015 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:17:19.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 20 13:17:19.697: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4822,SelfLink:/api/v1/namespaces/watch-4822/configmaps/e2e-watch-test-resource-version,UID:f44dfed5-ef9a-4b25-82ba-7e05baef177d,ResourceVersion:17389651,Generation:0,CreationTimestamp:2019-12-20 13:17:19 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 20 13:17:19.697: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4822,SelfLink:/api/v1/namespaces/watch-4822/configmaps/e2e-watch-test-resource-version,UID:f44dfed5-ef9a-4b25-82ba-7e05baef177d,ResourceVersion:17389652,Generation:0,CreationTimestamp:2019-12-20 13:17:19 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:17:19.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4822" for this suite.
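The steps above (create, modify twice, delete, then watch from the first update's resourceVersion) translate directly to any client: only the MODIFIED and DELETED events after that version should arrive. A sketch with the Python client, names illustrative:

from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()
ns = "default"  # illustrative

cm = v1.create_namespaced_config_map(ns, client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="e2e-watch-demo"), data={"mutation": "0"}))

cm.data["mutation"] = "1"
first = v1.replace_namespaced_config_map(cm.metadata.name, ns, cm)  # remember this version

first.data["mutation"] = "2"
v1.replace_namespaced_config_map(first.metadata.name, ns, first)
v1.delete_namespaced_config_map(first.metadata.name, ns)

w = watch.Watch()
# Starting from the first update's resourceVersion, we expect to see only the
# second MODIFIED and the DELETED notifications, in that order.
for ev in w.stream(v1.list_namespaced_config_map, ns,
                   resource_version=first.metadata.resource_version,
                   timeout_seconds=5):
    print(ev["type"], ev["object"].data)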
Dec 20 13:17:25.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:17:25.898: INFO: namespace watch-4822 deletion completed in 6.196968231s
• [SLOW TEST:6.329 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:17:25.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:17:26.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4094" for this suite.
Dec 20 13:17:48.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:17:48.287: INFO: namespace kubelet-test-4094 deletion completed in 22.178048954s
• [SLOW TEST:22.389 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:17:48.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 20 13:17:48.382: INFO: Waiting up to 5m0s for pod "downward-api-e185f443-2452-4b6b-bd71-ba290eee4770" in namespace "downward-api-6776" to be "success or failure"
Dec 20 13:17:48.388: INFO: Pod "downward-api-e185f443-2452-4b6b-bd71-ba290eee4770": Phase="Pending", Reason="", readiness=false. Elapsed: 6.241987ms
Dec 20 13:17:50.407: INFO: Pod "downward-api-e185f443-2452-4b6b-bd71-ba290eee4770": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025570987s
Dec 20 13:17:52.437: INFO: Pod "downward-api-e185f443-2452-4b6b-bd71-ba290eee4770": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055389951s
Dec 20 13:17:54.456: INFO: Pod "downward-api-e185f443-2452-4b6b-bd71-ba290eee4770": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074141284s
Dec 20 13:17:56.469: INFO: Pod "downward-api-e185f443-2452-4b6b-bd71-ba290eee4770": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087133216s
Dec 20 13:17:58.483: INFO: Pod "downward-api-e185f443-2452-4b6b-bd71-ba290eee4770": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.101460316s
STEP: Saw pod success
Dec 20 13:17:58.483: INFO: Pod "downward-api-e185f443-2452-4b6b-bd71-ba290eee4770" satisfied condition "success or failure"
Dec 20 13:17:58.489: INFO: Trying to get logs from node iruya-node pod downward-api-e185f443-2452-4b6b-bd71-ba290eee4770 container dapi-container:
STEP: delete the pod
Dec 20 13:17:58.668: INFO: Waiting for pod downward-api-e185f443-2452-4b6b-bd71-ba290eee4770 to disappear
Dec 20 13:17:58.677: INFO: Pod downward-api-e185f443-2452-4b6b-bd71-ba290eee4770 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:17:58.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6776" for this suite.
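Both this spec and the limits/requests variant that follows use resourceFieldRef env vars; when the container declares no limits, limits.cpu and limits.memory resolve to the node's allocatable values, which is the "default" behavior checked here. A sketch of the wiring, names illustrative:

from kubernetes import client

env = [
    client.V1EnvVar(name="CPU_LIMIT", value_from=client.V1EnvVarSource(
        resource_field_ref=client.V1ResourceFieldSelector(
            resource="limits.cpu"))),      # falls back to node allocatable
    client.V1EnvVar(name="MEMORY_LIMIT", value_from=client.V1EnvVarSource(
        resource_field_ref=client.V1ResourceFieldSelector(
            resource="limits.memory"))),   # likewise
]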
Dec 20 13:18:04.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:18:04.851: INFO: namespace downward-api-6776 deletion completed in 6.164406366s
• [SLOW TEST:16.563 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:18:04.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-cj4h
STEP: Creating a pod to test atomic-volume-subpath
Dec 20 13:18:05.094: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cj4h" in namespace "subpath-7556" to be "success or failure"
Dec 20 13:18:05.102: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Pending", Reason="", readiness=false. Elapsed: 7.681325ms
Dec 20 13:18:07.113: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018798819s
Dec 20 13:18:09.118: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02378859s
Dec 20 13:18:12.106: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Pending", Reason="", readiness=false. Elapsed: 7.012369655s
Dec 20 13:18:14.120: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Pending", Reason="", readiness=false. Elapsed: 9.026102043s
Dec 20 13:18:16.142: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Running", Reason="", readiness=true. Elapsed: 11.048032773s
Dec 20 13:18:18.151: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Running", Reason="", readiness=true. Elapsed: 13.057389561s
Dec 20 13:18:20.163: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Running", Reason="", readiness=true. Elapsed: 15.069006373s
Dec 20 13:18:22.171: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Running", Reason="", readiness=true. Elapsed: 17.077144386s
Dec 20 13:18:24.179: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Running", Reason="", readiness=true. Elapsed: 19.085022017s
Dec 20 13:18:26.188: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Running", Reason="", readiness=true. Elapsed: 21.094519092s
Dec 20 13:18:28.197: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Running", Reason="", readiness=true. Elapsed: 23.103454119s
Dec 20 13:18:30.207: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Running", Reason="", readiness=true. Elapsed: 25.112984785s
Dec 20 13:18:32.217: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Running", Reason="", readiness=true. Elapsed: 27.123225469s
Dec 20 13:18:34.228: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Running", Reason="", readiness=true. Elapsed: 29.134043541s
Dec 20 13:18:36.237: INFO: Pod "pod-subpath-test-configmap-cj4h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.143328132s
STEP: Saw pod success
Dec 20 13:18:36.237: INFO: Pod "pod-subpath-test-configmap-cj4h" satisfied condition "success or failure"
Dec 20 13:18:36.242: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-cj4h container test-container-subpath-configmap-cj4h:
STEP: delete the pod
Dec 20 13:18:36.329: INFO: Waiting for pod pod-subpath-test-configmap-cj4h to disappear
Dec 20 13:18:36.338: INFO: Pod pod-subpath-test-configmap-cj4h no longer exists
STEP: Deleting pod pod-subpath-test-configmap-cj4h
Dec 20 13:18:36.338: INFO: Deleting pod "pod-subpath-test-configmap-cj4h" in namespace "subpath-7556"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:18:36.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7556" for this suite.
Dec 20 13:18:42.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:18:42.656: INFO: namespace subpath-7556 deletion completed in 6.301415819s
• [SLOW TEST:37.803 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:18:42.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 20 13:18:42.804: INFO: Waiting up to 5m0s for pod "downward-api-7d8145e6-c511-4f06-8220-fb3e6a53d110" in namespace "downward-api-7340" to be "success or failure"
Dec 20 13:18:42.809: INFO: Pod "downward-api-7d8145e6-c511-4f06-8220-fb3e6a53d110": Phase="Pending", Reason="", readiness=false. Elapsed: 4.614815ms
Dec 20 13:18:44.831: INFO: Pod "downward-api-7d8145e6-c511-4f06-8220-fb3e6a53d110": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026476652s
Dec 20 13:18:46.866: INFO: Pod "downward-api-7d8145e6-c511-4f06-8220-fb3e6a53d110": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061400457s
Dec 20 13:18:48.876: INFO: Pod "downward-api-7d8145e6-c511-4f06-8220-fb3e6a53d110": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071522432s
Dec 20 13:18:52.211: INFO: Pod "downward-api-7d8145e6-c511-4f06-8220-fb3e6a53d110": Phase="Pending", Reason="", readiness=false. Elapsed: 9.406952763s
Dec 20 13:18:54.221: INFO: Pod "downward-api-7d8145e6-c511-4f06-8220-fb3e6a53d110": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.417095774s
STEP: Saw pod success
Dec 20 13:18:54.221: INFO: Pod "downward-api-7d8145e6-c511-4f06-8220-fb3e6a53d110" satisfied condition "success or failure"
Dec 20 13:18:54.225: INFO: Trying to get logs from node iruya-node pod downward-api-7d8145e6-c511-4f06-8220-fb3e6a53d110 container dapi-container:
STEP: delete the pod
Dec 20 13:18:54.408: INFO: Waiting for pod downward-api-7d8145e6-c511-4f06-8220-fb3e6a53d110 to disappear
Dec 20 13:18:54.414: INFO: Pod downward-api-7d8145e6-c511-4f06-8220-fb3e6a53d110 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:18:54.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7340" for this suite.
Dec 20 13:19:00.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:19:00.766: INFO: namespace downward-api-7340 deletion completed in 6.343041201s
• [SLOW TEST:18.108 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:19:00.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 13:19:00.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2848'
Dec 20 13:19:01.009: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 20 13:19:01.010: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Dec 20 13:19:01.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-2848' Dec 20 13:19:01.332: INFO: stderr: "" Dec 20 13:19:01.332: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:19:01.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2848" for this suite. Dec 20 13:19:07.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:19:07.526: INFO: namespace kubectl-2848 deletion completed in 6.170385552s • [SLOW TEST:6.759 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:19:07.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Dec 20 13:19:07.640: INFO: Waiting up to 5m0s for pod "var-expansion-8509a16b-d493-47ee-9c1a-a3ce223b6e73" in namespace "var-expansion-4878" to be "success or failure" Dec 20 13:19:07.645: INFO: Pod "var-expansion-8509a16b-d493-47ee-9c1a-a3ce223b6e73": Phase="Pending", Reason="", readiness=false. Elapsed: 5.318744ms Dec 20 13:19:09.751: INFO: Pod "var-expansion-8509a16b-d493-47ee-9c1a-a3ce223b6e73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111246393s Dec 20 13:19:11.766: INFO: Pod "var-expansion-8509a16b-d493-47ee-9c1a-a3ce223b6e73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126673288s Dec 20 13:19:13.785: INFO: Pod "var-expansion-8509a16b-d493-47ee-9c1a-a3ce223b6e73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145777314s Dec 20 13:19:15.791: INFO: Pod "var-expansion-8509a16b-d493-47ee-9c1a-a3ce223b6e73": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151113302s Dec 20 13:19:17.803: INFO: Pod "var-expansion-8509a16b-d493-47ee-9c1a-a3ce223b6e73": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.163667486s STEP: Saw pod success Dec 20 13:19:17.804: INFO: Pod "var-expansion-8509a16b-d493-47ee-9c1a-a3ce223b6e73" satisfied condition "success or failure" Dec 20 13:19:17.823: INFO: Trying to get logs from node iruya-node pod var-expansion-8509a16b-d493-47ee-9c1a-a3ce223b6e73 container dapi-container: STEP: delete the pod Dec 20 13:19:18.114: INFO: Waiting for pod var-expansion-8509a16b-d493-47ee-9c1a-a3ce223b6e73 to disappear Dec 20 13:19:18.119: INFO: Pod var-expansion-8509a16b-d493-47ee-9c1a-a3ce223b6e73 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:19:18.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4878" for this suite. Dec 20 13:19:24.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:19:24.258: INFO: namespace var-expansion-4878 deletion completed in 6.130564443s • [SLOW TEST:16.732 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:19:24.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Dec 20 13:19:44.480: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:19:44.501: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:19:46.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:19:46.533: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:19:48.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:19:48.529: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:19:50.502: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:19:50.520: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:19:52.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:19:52.513: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:19:54.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:19:54.513: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:19:56.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:19:56.518: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:19:58.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:19:58.517: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:20:00.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:20:00.518: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:20:02.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:20:02.518: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:20:04.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:20:04.520: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:20:06.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:20:06.512: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:20:08.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:20:08.517: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:20:10.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:20:10.544: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:20:12.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:20:12.517: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:20:14.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:20:14.513: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:20:16.501: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:20:16.514: INFO: Pod pod-with-prestop-exec-hook still exists Dec 20 13:20:18.502: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 20 13:20:18.541: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:20:18.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9641" for this suite. 
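
For reference, the PreStop behaviour polled above can be reproduced outside the suite with a minimal pod manifest. This is a sketch only: the pod name, image, and hook command are illustrative stand-ins, not the objects the framework generates (the real test's hook calls back to the HTTPGet handler pod created in BeforeEach).

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo            # illustrative; the suite names it pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop ran >> /tmp/hook.log"]
EOF
# Deleting the pod runs the PreStop command before SIGTERM reaches the
# container, which is why the deletion polled above takes a while to finish.
kubectl delete pod prestop-demo
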
Dec 20 13:20:40.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:20:40.877: INFO: namespace container-lifecycle-hook-9641 deletion completed in 22.260537605s • [SLOW TEST:76.619 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:20:40.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 20 13:20:41.008: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58ec2f6a-c018-490f-a6a6-90082652660f" in namespace "projected-8385" to be "success or failure" Dec 20 13:20:41.013: INFO: Pod "downwardapi-volume-58ec2f6a-c018-490f-a6a6-90082652660f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.953487ms Dec 20 13:20:43.023: INFO: Pod "downwardapi-volume-58ec2f6a-c018-490f-a6a6-90082652660f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014917855s Dec 20 13:20:45.040: INFO: Pod "downwardapi-volume-58ec2f6a-c018-490f-a6a6-90082652660f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032058315s Dec 20 13:20:47.054: INFO: Pod "downwardapi-volume-58ec2f6a-c018-490f-a6a6-90082652660f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04608378s Dec 20 13:20:49.197: INFO: Pod "downwardapi-volume-58ec2f6a-c018-490f-a6a6-90082652660f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189121284s Dec 20 13:20:51.203: INFO: Pod "downwardapi-volume-58ec2f6a-c018-490f-a6a6-90082652660f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.19519123s STEP: Saw pod success Dec 20 13:20:51.203: INFO: Pod "downwardapi-volume-58ec2f6a-c018-490f-a6a6-90082652660f" satisfied condition "success or failure" Dec 20 13:20:51.207: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-58ec2f6a-c018-490f-a6a6-90082652660f container client-container: STEP: delete the pod Dec 20 13:20:51.336: INFO: Waiting for pod downwardapi-volume-58ec2f6a-c018-490f-a6a6-90082652660f to disappear Dec 20 13:20:51.348: INFO: Pod downwardapi-volume-58ec2f6a-c018-490f-a6a6-90082652660f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:20:51.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8385" for this suite. Dec 20 13:20:57.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:20:57.520: INFO: namespace projected-8385 deletion completed in 6.166798922s • [SLOW TEST:16.641 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:20:57.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 20 13:20:57.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e92b7e2f-da6d-4013-9b2e-290f91854079" in namespace "projected-1843" to be "success or failure" Dec 20 13:20:57.613: INFO: Pod "downwardapi-volume-e92b7e2f-da6d-4013-9b2e-290f91854079": Phase="Pending", Reason="", readiness=false. Elapsed: 4.83597ms Dec 20 13:20:59.621: INFO: Pod "downwardapi-volume-e92b7e2f-da6d-4013-9b2e-290f91854079": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012773855s Dec 20 13:21:01.635: INFO: Pod "downwardapi-volume-e92b7e2f-da6d-4013-9b2e-290f91854079": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026641135s Dec 20 13:21:03.646: INFO: Pod "downwardapi-volume-e92b7e2f-da6d-4013-9b2e-290f91854079": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037535113s Dec 20 13:21:05.658: INFO: Pod "downwardapi-volume-e92b7e2f-da6d-4013-9b2e-290f91854079": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.050163074s Dec 20 13:21:07.667: INFO: Pod "downwardapi-volume-e92b7e2f-da6d-4013-9b2e-290f91854079": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058843248s STEP: Saw pod success Dec 20 13:21:07.667: INFO: Pod "downwardapi-volume-e92b7e2f-da6d-4013-9b2e-290f91854079" satisfied condition "success or failure" Dec 20 13:21:07.673: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e92b7e2f-da6d-4013-9b2e-290f91854079 container client-container: STEP: delete the pod Dec 20 13:21:07.729: INFO: Waiting for pod downwardapi-volume-e92b7e2f-da6d-4013-9b2e-290f91854079 to disappear Dec 20 13:21:07.745: INFO: Pod downwardapi-volume-e92b7e2f-da6d-4013-9b2e-290f91854079 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:21:07.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1843" for this suite. Dec 20 13:21:13.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:21:14.021: INFO: namespace projected-1843 deletion completed in 6.255181865s • [SLOW TEST:16.500 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:21:14.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 20 13:21:14.155: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb9027b8-2971-4041-b0ef-c904557f3c4d" in namespace "projected-6815" to be "success or failure" Dec 20 13:21:14.181: INFO: Pod "downwardapi-volume-cb9027b8-2971-4041-b0ef-c904557f3c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 25.658708ms Dec 20 13:21:16.190: INFO: Pod "downwardapi-volume-cb9027b8-2971-4041-b0ef-c904557f3c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035216661s Dec 20 13:21:18.208: INFO: Pod "downwardapi-volume-cb9027b8-2971-4041-b0ef-c904557f3c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053210643s Dec 20 13:21:20.219: INFO: Pod "downwardapi-volume-cb9027b8-2971-4041-b0ef-c904557f3c4d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.064371494s Dec 20 13:21:22.232: INFO: Pod "downwardapi-volume-cb9027b8-2971-4041-b0ef-c904557f3c4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076796889s STEP: Saw pod success Dec 20 13:21:22.232: INFO: Pod "downwardapi-volume-cb9027b8-2971-4041-b0ef-c904557f3c4d" satisfied condition "success or failure" Dec 20 13:21:22.240: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cb9027b8-2971-4041-b0ef-c904557f3c4d container client-container: STEP: delete the pod Dec 20 13:21:22.392: INFO: Waiting for pod downwardapi-volume-cb9027b8-2971-4041-b0ef-c904557f3c4d to disappear Dec 20 13:21:22.443: INFO: Pod downwardapi-volume-cb9027b8-2971-4041-b0ef-c904557f3c4d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:21:22.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6815" for this suite. Dec 20 13:21:28.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:21:28.642: INFO: namespace projected-6815 deletion completed in 6.191128283s • [SLOW TEST:14.620 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:21:28.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 20 13:21:28.725: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c70135d1-35b4-4754-b536-1afe3aa6622a" in namespace "projected-6898" to be "success or failure" Dec 20 13:21:28.734: INFO: Pod "downwardapi-volume-c70135d1-35b4-4754-b536-1afe3aa6622a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.79416ms Dec 20 13:21:30.741: INFO: Pod "downwardapi-volume-c70135d1-35b4-4754-b536-1afe3aa6622a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016240318s Dec 20 13:21:32.753: INFO: Pod "downwardapi-volume-c70135d1-35b4-4754-b536-1afe3aa6622a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028222063s Dec 20 13:21:34.769: INFO: Pod "downwardapi-volume-c70135d1-35b4-4754-b536-1afe3aa6622a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043994465s Dec 20 13:21:36.797: INFO: Pod "downwardapi-volume-c70135d1-35b4-4754-b536-1afe3aa6622a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.071993306s STEP: Saw pod success Dec 20 13:21:36.797: INFO: Pod "downwardapi-volume-c70135d1-35b4-4754-b536-1afe3aa6622a" satisfied condition "success or failure" Dec 20 13:21:36.817: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c70135d1-35b4-4754-b536-1afe3aa6622a container client-container: STEP: delete the pod Dec 20 13:21:36.979: INFO: Waiting for pod downwardapi-volume-c70135d1-35b4-4754-b536-1afe3aa6622a to disappear Dec 20 13:21:37.029: INFO: Pod downwardapi-volume-c70135d1-35b4-4754-b536-1afe3aa6622a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:21:37.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6898" for this suite. Dec 20 13:21:43.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:21:43.187: INFO: namespace projected-6898 deletion completed in 6.149080318s • [SLOW TEST:14.544 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:21:43.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 20 13:21:43.261: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2b0c3ada-a714-4293-98ec-755f2a84cd1d" in namespace "projected-8501" to be "success or failure" Dec 20 13:21:43.304: INFO: Pod "downwardapi-volume-2b0c3ada-a714-4293-98ec-755f2a84cd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 42.691999ms Dec 20 13:21:45.312: INFO: Pod "downwardapi-volume-2b0c3ada-a714-4293-98ec-755f2a84cd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051112484s Dec 20 13:21:47.318: INFO: Pod "downwardapi-volume-2b0c3ada-a714-4293-98ec-755f2a84cd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05669458s Dec 20 13:21:49.329: INFO: Pod "downwardapi-volume-2b0c3ada-a714-4293-98ec-755f2a84cd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068303673s Dec 20 13:21:51.342: INFO: Pod "downwardapi-volume-2b0c3ada-a714-4293-98ec-755f2a84cd1d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.080711022s Dec 20 13:21:53.351: INFO: Pod "downwardapi-volume-2b0c3ada-a714-4293-98ec-755f2a84cd1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090501035s STEP: Saw pod success Dec 20 13:21:53.351: INFO: Pod "downwardapi-volume-2b0c3ada-a714-4293-98ec-755f2a84cd1d" satisfied condition "success or failure" Dec 20 13:21:53.356: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2b0c3ada-a714-4293-98ec-755f2a84cd1d container client-container: STEP: delete the pod Dec 20 13:21:53.489: INFO: Waiting for pod downwardapi-volume-2b0c3ada-a714-4293-98ec-755f2a84cd1d to disappear Dec 20 13:21:53.503: INFO: Pod downwardapi-volume-2b0c3ada-a714-4293-98ec-755f2a84cd1d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:21:53.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8501" for this suite. Dec 20 13:21:59.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:21:59.729: INFO: namespace projected-8501 deletion completed in 6.219664789s • [SLOW TEST:16.541 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:21:59.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 20 13:21:59.840: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3779b990-3b36-4797-a6b6-6dfbd5b2df6b" in namespace "projected-8696" to be "success or failure" Dec 20 13:21:59.844: INFO: Pod "downwardapi-volume-3779b990-3b36-4797-a6b6-6dfbd5b2df6b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.948632ms Dec 20 13:22:01.861: INFO: Pod "downwardapi-volume-3779b990-3b36-4797-a6b6-6dfbd5b2df6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020675716s Dec 20 13:22:03.881: INFO: Pod "downwardapi-volume-3779b990-3b36-4797-a6b6-6dfbd5b2df6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041422686s Dec 20 13:22:05.891: INFO: Pod "downwardapi-volume-3779b990-3b36-4797-a6b6-6dfbd5b2df6b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.050547424s Dec 20 13:22:07.906: INFO: Pod "downwardapi-volume-3779b990-3b36-4797-a6b6-6dfbd5b2df6b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065759914s Dec 20 13:22:09.918: INFO: Pod "downwardapi-volume-3779b990-3b36-4797-a6b6-6dfbd5b2df6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078223406s STEP: Saw pod success Dec 20 13:22:09.918: INFO: Pod "downwardapi-volume-3779b990-3b36-4797-a6b6-6dfbd5b2df6b" satisfied condition "success or failure" Dec 20 13:22:09.924: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3779b990-3b36-4797-a6b6-6dfbd5b2df6b container client-container: STEP: delete the pod Dec 20 13:22:09.992: INFO: Waiting for pod downwardapi-volume-3779b990-3b36-4797-a6b6-6dfbd5b2df6b to disappear Dec 20 13:22:09.999: INFO: Pod downwardapi-volume-3779b990-3b36-4797-a6b6-6dfbd5b2df6b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:22:10.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8696" for this suite. Dec 20 13:22:16.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:22:16.226: INFO: namespace projected-8696 deletion completed in 6.219130842s • [SLOW TEST:16.497 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:22:16.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 20 13:22:16.347: INFO: Waiting up to 5m0s for pod "pod-adaf5440-0d34-4176-bc7f-660a7dfaa1d6" in namespace "emptydir-9886" to be "success or failure" Dec 20 13:22:16.378: INFO: Pod "pod-adaf5440-0d34-4176-bc7f-660a7dfaa1d6": Phase="Pending", Reason="", readiness=false. Elapsed: 30.491734ms Dec 20 13:22:18.387: INFO: Pod "pod-adaf5440-0d34-4176-bc7f-660a7dfaa1d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040171969s Dec 20 13:22:20.396: INFO: Pod "pod-adaf5440-0d34-4176-bc7f-660a7dfaa1d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048969546s Dec 20 13:22:22.429: INFO: Pod "pod-adaf5440-0d34-4176-bc7f-660a7dfaa1d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082407134s Dec 20 13:22:24.440: INFO: Pod "pod-adaf5440-0d34-4176-bc7f-660a7dfaa1d6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.093195154s Dec 20 13:22:26.455: INFO: Pod "pod-adaf5440-0d34-4176-bc7f-660a7dfaa1d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.107770931s STEP: Saw pod success Dec 20 13:22:26.455: INFO: Pod "pod-adaf5440-0d34-4176-bc7f-660a7dfaa1d6" satisfied condition "success or failure" Dec 20 13:22:26.463: INFO: Trying to get logs from node iruya-node pod pod-adaf5440-0d34-4176-bc7f-660a7dfaa1d6 container test-container: STEP: delete the pod Dec 20 13:22:26.630: INFO: Waiting for pod pod-adaf5440-0d34-4176-bc7f-660a7dfaa1d6 to disappear Dec 20 13:22:26.642: INFO: Pod pod-adaf5440-0d34-4176-bc7f-660a7dfaa1d6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:22:26.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9886" for this suite. Dec 20 13:22:32.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:22:33.122: INFO: namespace emptydir-9886 deletion completed in 6.472708358s • [SLOW TEST:16.895 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:22:33.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Dec 20 13:22:33.213: INFO: namespace kubectl-1891 Dec 20 13:22:33.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1891' Dec 20 13:22:33.537: INFO: stderr: "" Dec 20 13:22:33.537: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Dec 20 13:22:34.551: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:22:34.551: INFO: Found 0 / 1 Dec 20 13:22:35.550: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:22:35.550: INFO: Found 0 / 1 Dec 20 13:22:36.562: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:22:36.563: INFO: Found 0 / 1 Dec 20 13:22:37.557: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:22:37.557: INFO: Found 0 / 1 Dec 20 13:22:38.579: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:22:38.579: INFO: Found 0 / 1 Dec 20 13:22:39.550: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:22:39.550: INFO: Found 0 / 1 Dec 20 13:22:40.557: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:22:40.557: INFO: Found 1 / 1 Dec 20 13:22:40.557: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 20 13:22:40.563: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:22:40.563: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Dec 20 13:22:40.563: INFO: wait on redis-master startup in kubectl-1891 Dec 20 13:22:40.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4f26l redis-master --namespace=kubectl-1891' Dec 20 13:22:40.695: INFO: stderr: "" Dec 20 13:22:40.695: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Dec 13:22:40.054 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Dec 13:22:40.054 # Server started, Redis version 3.2.12\n1:M 20 Dec 13:22:40.054 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Dec 13:22:40.054 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Dec 20 13:22:40.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1891' Dec 20 13:22:40.845: INFO: stderr: "" Dec 20 13:22:40.845: INFO: stdout: "service/rm2 exposed\n" Dec 20 13:22:40.858: INFO: Service rm2 in namespace kubectl-1891 found. STEP: exposing service Dec 20 13:22:42.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1891' Dec 20 13:22:43.215: INFO: stderr: "" Dec 20 13:22:43.215: INFO: stdout: "service/rm3 exposed\n" Dec 20 13:22:43.226: INFO: Service rm3 in namespace kubectl-1891 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:22:45.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1891" for this suite. 
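
The two expose calls above generalize to any replication controller and can be replayed by hand; the ports below mirror this run, and the namespace is the generated one from the log (substitute your own):

NS=kubectl-1891   # generated namespace from this run
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace="$NS"
# exposing a service copies its selector, so rm3 fronts the same pods as rm2:
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace="$NS"
# both services should resolve to the same redis pod; only the service port differs
kubectl get endpoints rm2 rm3 --namespace="$NS"
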
Dec 20 13:23:09.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:23:09.516: INFO: namespace kubectl-1891 deletion completed in 24.265390081s • [SLOW TEST:36.393 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:23:09.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 20 13:23:09.583: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Dec 20 13:23:12.036: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:23:12.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8979" for this suite. 
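
The failure-condition check above is easy to replay: create a quota of two pods, ask an RC for three replicas, and watch the ReplicaFailure condition appear and then clear once the request fits. The namespace and image below are illustrative, not the suite's generated ones.

kubectl create namespace quota-demo
kubectl create quota condition-test --hard=pods=2 -n quota-demo
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
  namespace: quota-demo
spec:
  replicas: 3                        # one more than the quota allows
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1  # illustrative; any small image works
EOF
# the third pod is rejected by the quota, surfacing a ReplicaFailure condition:
kubectl get rc condition-test -n quota-demo \
  -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'
# scaling down to fit the quota clears the condition, as the test asserts:
kubectl scale rc condition-test --replicas=2 -n quota-demo
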
Dec 20 13:23:24.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:23:24.744: INFO: namespace replication-controller-8979 deletion completed in 12.241031249s • [SLOW TEST:15.226 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:23:24.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 20 13:23:33.593: INFO: Successfully updated pod "labelsupdate849d73fb-7495-4ace-b75b-c24f9f997e67" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:23:35.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9651" for this suite. 
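
The "update labels on modification" behaviour relies on a downwardAPI volume, which the kubelet re-syncs after the pod's metadata changes. A minimal sketch (pod name, label key, and paths are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl label pod labelsupdate-demo stage=after --overwrite
# after the kubelet's next sync the projected file reflects the new label:
kubectl logs labelsupdate-demo | tail -2
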
Dec 20 13:23:57.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:23:57.901: INFO: namespace downward-api-9651 deletion completed in 22.213557984s • [SLOW TEST:33.157 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:23:57.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Dec 20 13:24:08.167: INFO: Pod pod-hostip-d1cd3658-6b60-4319-b830-7611832c2e8d has hostIP: 10.96.3.65 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:24:08.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1479" for this suite. 
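
The hostIP assertion above amounts to reading status.hostIP once the pod is scheduled; a quick manual equivalent (pod name and image are illustrative):

kubectl run hostip-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
kubectl wait --for=condition=Ready pod/hostip-demo --timeout=120s
# prints the IP of the node the pod landed on (10.96.3.65 in this run):
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'
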
Dec 20 13:24:30.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:24:30.353: INFO: namespace pods-1479 deletion completed in 22.176605675s • [SLOW TEST:32.452 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:24:30.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Dec 20 13:24:30.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Dec 20 13:24:30.555: INFO: stderr: "" Dec 20 13:24:30.556: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:24:30.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5294" for this suite. 
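
The cluster-info check is a plain substring assertion on kubectl's (ANSI-colored) output, roughly equivalent to:

kubectl cluster-info
# the conformance test effectively asserts this substring is present:
kubectl cluster-info | grep -q "Kubernetes master" && echo "master service listed"
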
Dec 20 13:24:36.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:24:36.744: INFO: namespace kubectl-5294 deletion completed in 6.174891961s • [SLOW TEST:6.389 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:24:36.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Dec 20 13:24:36.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4326' Dec 20 13:24:37.102: INFO: stderr: "" Dec 20 13:24:37.102: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 20 13:24:37.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4326' Dec 20 13:24:37.195: INFO: stderr: "" Dec 20 13:24:37.195: INFO: stdout: "update-demo-nautilus-8n4v8 update-demo-nautilus-lmvmr " Dec 20 13:24:37.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n4v8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4326' Dec 20 13:24:37.277: INFO: stderr: "" Dec 20 13:24:37.277: INFO: stdout: "" Dec 20 13:24:37.277: INFO: update-demo-nautilus-8n4v8 is created but not running Dec 20 13:24:42.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4326' Dec 20 13:24:43.186: INFO: stderr: "" Dec 20 13:24:43.186: INFO: stdout: "update-demo-nautilus-8n4v8 update-demo-nautilus-lmvmr " Dec 20 13:24:43.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n4v8 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4326' Dec 20 13:24:43.531: INFO: stderr: "" Dec 20 13:24:43.532: INFO: stdout: "" Dec 20 13:24:43.532: INFO: update-demo-nautilus-8n4v8 is created but not running Dec 20 13:24:48.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4326' Dec 20 13:24:48.645: INFO: stderr: "" Dec 20 13:24:48.645: INFO: stdout: "update-demo-nautilus-8n4v8 update-demo-nautilus-lmvmr " Dec 20 13:24:48.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n4v8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4326' Dec 20 13:24:48.729: INFO: stderr: "" Dec 20 13:24:48.729: INFO: stdout: "true" Dec 20 13:24:48.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8n4v8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4326' Dec 20 13:24:48.809: INFO: stderr: "" Dec 20 13:24:48.809: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 20 13:24:48.809: INFO: validating pod update-demo-nautilus-8n4v8 Dec 20 13:24:48.844: INFO: got data: { "image": "nautilus.jpg" } Dec 20 13:24:48.844: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 20 13:24:48.844: INFO: update-demo-nautilus-8n4v8 is verified up and running Dec 20 13:24:48.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lmvmr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4326' Dec 20 13:24:48.938: INFO: stderr: "" Dec 20 13:24:48.938: INFO: stdout: "true" Dec 20 13:24:48.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lmvmr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4326' Dec 20 13:24:49.012: INFO: stderr: "" Dec 20 13:24:49.012: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 20 13:24:49.012: INFO: validating pod update-demo-nautilus-lmvmr Dec 20 13:24:49.021: INFO: got data: { "image": "nautilus.jpg" } Dec 20 13:24:49.021: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 20 13:24:49.021: INFO: update-demo-nautilus-lmvmr is verified up and running STEP: rolling-update to new replication controller Dec 20 13:24:49.024: INFO: scanned /root for discovery docs: Dec 20 13:24:49.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4326' Dec 20 13:25:19.078: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Dec 20 13:25:19.079: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 20 13:25:19.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4326' Dec 20 13:25:19.227: INFO: stderr: "" Dec 20 13:25:19.227: INFO: stdout: "update-demo-kitten-j8mdw update-demo-kitten-qv7qw " Dec 20 13:25:19.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j8mdw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4326' Dec 20 13:25:19.349: INFO: stderr: "" Dec 20 13:25:19.349: INFO: stdout: "true" Dec 20 13:25:19.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j8mdw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4326' Dec 20 13:25:19.445: INFO: stderr: "" Dec 20 13:25:19.445: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Dec 20 13:25:19.445: INFO: validating pod update-demo-kitten-j8mdw Dec 20 13:25:19.463: INFO: got data: { "image": "kitten.jpg" } Dec 20 13:25:19.463: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Dec 20 13:25:19.463: INFO: update-demo-kitten-j8mdw is verified up and running Dec 20 13:25:19.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qv7qw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4326' Dec 20 13:25:19.592: INFO: stderr: "" Dec 20 13:25:19.592: INFO: stdout: "true" Dec 20 13:25:19.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qv7qw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4326' Dec 20 13:25:19.670: INFO: stderr: "" Dec 20 13:25:19.670: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Dec 20 13:25:19.670: INFO: validating pod update-demo-kitten-qv7qw Dec 20 13:25:19.690: INFO: got data: { "image": "kitten.jpg" } Dec 20 13:25:19.690: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Dec 20 13:25:19.690: INFO: update-demo-kitten-qv7qw is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:25:19.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4326" for this suite. Dec 20 13:25:43.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:25:43.840: INFO: namespace kubectl-4326 deletion completed in 24.14426666s • [SLOW TEST:67.096 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:25:43.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Dec 20 13:25:43.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6296' Dec 20 13:25:44.295: INFO: stderr: "" Dec 20 13:25:44.295: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Dec 20 13:25:45.307: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:25:45.308: INFO: Found 0 / 1 Dec 20 13:25:46.303: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:25:46.303: INFO: Found 0 / 1 Dec 20 13:25:47.313: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:25:47.313: INFO: Found 0 / 1 Dec 20 13:25:48.311: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:25:48.311: INFO: Found 0 / 1 Dec 20 13:25:49.306: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:25:49.307: INFO: Found 0 / 1 Dec 20 13:25:50.303: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:25:50.303: INFO: Found 0 / 1 Dec 20 13:25:51.310: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:25:51.310: INFO: Found 0 / 1 Dec 20 13:25:52.303: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:25:52.303: INFO: Found 0 / 1 Dec 20 13:25:53.339: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:25:53.340: INFO: Found 1 / 1 Dec 20 13:25:53.340: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Dec 20 13:25:53.383: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:25:53.383: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Dec 20 13:25:53.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-jjlss --namespace=kubectl-6296 -p {"metadata":{"annotations":{"x":"y"}}}' Dec 20 13:25:53.510: INFO: stderr: "" Dec 20 13:25:53.510: INFO: stdout: "pod/redis-master-jjlss patched\n" STEP: checking annotations Dec 20 13:25:53.532: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:25:53.532: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:25:53.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6296" for this suite. 
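The two kubectl specs above are pure CLI exercises: the first drives the long-deprecated rolling-update verb (the captured stderr already says to use rollout instead, and the verb has since been removed from kubectl), the second applies a strategic-merge patch to pod metadata. A minimal sketch of the manual equivalents; the rollout verbs operate on Deployments, so the first pair assumes the same workload recreated as a Deployment named update-demo (illustrative), while the pod and namespace names are taken from the log:

    # modern replacement for 'kubectl rolling-update':
    $ kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
    $ kubectl rollout status deployment/update-demo

    # the patch spec's annotation round-trip:
    $ kubectl patch pod redis-master-jjlss -n kubectl-6296 -p '{"metadata":{"annotations":{"x":"y"}}}'
    $ kubectl get pod redis-master-jjlss -n kubectl-6296 -o jsonpath='{.metadata.annotations.x}'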
Dec 20 13:26:15.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:26:15.761: INFO: namespace kubectl-6296 deletion completed in 22.224198583s • [SLOW TEST:31.920 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:26:15.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Dec 20 13:26:15.926: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 20 13:26:15.937: INFO: Waiting for terminating namespaces to be deleted... Dec 20 13:26:15.942: INFO: Logging pods the kubelet thinks is on node iruya-node before test Dec 20 13:26:15.977: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Dec 20 13:26:15.977: INFO: Container kube-proxy ready: true, restart count 0 Dec 20 13:26:15.977: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Dec 20 13:26:15.977: INFO: Container weave ready: true, restart count 0 Dec 20 13:26:15.977: INFO: Container weave-npc ready: true, restart count 0 Dec 20 13:26:15.977: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Dec 20 13:26:15.985: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Dec 20 13:26:15.985: INFO: Container kube-apiserver ready: true, restart count 0 Dec 20 13:26:15.985: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Dec 20 13:26:15.985: INFO: Container kube-scheduler ready: true, restart count 7 Dec 20 13:26:15.985: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Dec 20 13:26:15.985: INFO: Container coredns ready: true, restart count 0 Dec 20 13:26:15.985: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Dec 20 13:26:15.985: INFO: Container etcd ready: true, restart count 0 Dec 20 13:26:15.985: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Dec 20 13:26:15.985: INFO: Container weave ready: true, restart count 0 Dec 20 13:26:15.985: INFO: Container weave-npc ready: true, restart count 0 Dec 20 13:26:15.985: INFO: coredns-5c98db65d4-bm4gs from 
kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Dec 20 13:26:15.985: INFO: Container coredns ready: true, restart count 0 Dec 20 13:26:15.985: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Dec 20 13:26:15.985: INFO: Container kube-controller-manager ready: true, restart count 10 Dec 20 13:26:15.985: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Dec 20 13:26:15.985: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-node STEP: verifying the node has the label node iruya-server-sfge57q7djm7 Dec 20 13:26:16.242: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Dec 20 13:26:16.243: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Dec 20 13:26:16.243: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Dec 20 13:26:16.243: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7 Dec 20 13:26:16.243: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7 Dec 20 13:26:16.243: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Dec 20 13:26:16.243: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node Dec 20 13:26:16.243: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Dec 20 13:26:16.243: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7 Dec 20 13:26:16.243: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-a73a7b60-c994-4cf1-bde0-7a4ce37772dc.15e2173e69b67c32], Reason = [Scheduled], Message = [Successfully assigned sched-pred-152/filler-pod-a73a7b60-c994-4cf1-bde0-7a4ce37772dc to iruya-server-sfge57q7djm7] STEP: Considering event: Type = [Normal], Name = [filler-pod-a73a7b60-c994-4cf1-bde0-7a4ce37772dc.15e2173f99ca2d0b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a73a7b60-c994-4cf1-bde0-7a4ce37772dc.15e21740a524028c], Reason = [Created], Message = [Created container filler-pod-a73a7b60-c994-4cf1-bde0-7a4ce37772dc] STEP: Considering event: Type = [Normal], Name = [filler-pod-a73a7b60-c994-4cf1-bde0-7a4ce37772dc.15e21740c6309103], Reason = [Started], Message = [Started container filler-pod-a73a7b60-c994-4cf1-bde0-7a4ce37772dc] STEP: Considering event: Type = [Normal], Name = [filler-pod-d63daa1d-14ec-4835-b4a6-bc3ec7443b13.15e2173e65666285], Reason = [Scheduled], Message = [Successfully assigned sched-pred-152/filler-pod-d63daa1d-14ec-4835-b4a6-bc3ec7443b13 to iruya-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-d63daa1d-14ec-4835-b4a6-bc3ec7443b13.15e2173f9a2d8c84], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d63daa1d-14ec-4835-b4a6-bc3ec7443b13.15e2174090c8fdd8], Reason = [Created], Message = [Created container filler-pod-d63daa1d-14ec-4835-b4a6-bc3ec7443b13] STEP: Considering event: Type = [Normal], Name = [filler-pod-d63daa1d-14ec-4835-b4a6-bc3ec7443b13.15e21740b5d278dd], Reason = [Started], Message = [Started container filler-pod-d63daa1d-14ec-4835-b4a6-bc3ec7443b13] STEP: Considering event: Type = [Warning], Name = [additional-pod.15e21741339c78e3], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node iruya-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-server-sfge57q7djm7 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:26:29.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-152" for this suite. 
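The scheduling spec works by arithmetic: it sums the CPU requests already placed on each node, starts filler pods that consume most of what remains, and then shows that one more pod with a non-trivial CPU request is rejected by every node with FailedScheduling. The same failure can be provoked by hand; the request below is illustrative and only needs to exceed the largest free allocatable CPU on any node:

    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: cpu-hog
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: "8"          # assumed to be more than any node has free
    EOF
    $ kubectl get events --field-selector reason=FailedScheduling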
Dec 20 13:26:36.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:26:37.212: INFO: namespace sched-pred-152 deletion completed in 7.768799511s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:21.450 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:26:37.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-190cd800-7580-47bf-a005-30f2c0acd344 STEP: Creating a pod to test consume configMaps Dec 20 13:26:37.478: INFO: Waiting up to 5m0s for pod "pod-configmaps-66565b0c-5e9c-447a-b282-dc3dc9fc9c58" in namespace "configmap-1776" to be "success or failure" Dec 20 13:26:37.493: INFO: Pod "pod-configmaps-66565b0c-5e9c-447a-b282-dc3dc9fc9c58": Phase="Pending", Reason="", readiness=false. Elapsed: 15.299709ms Dec 20 13:26:39.665: INFO: Pod "pod-configmaps-66565b0c-5e9c-447a-b282-dc3dc9fc9c58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187528347s Dec 20 13:26:41.676: INFO: Pod "pod-configmaps-66565b0c-5e9c-447a-b282-dc3dc9fc9c58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197991425s Dec 20 13:26:43.683: INFO: Pod "pod-configmaps-66565b0c-5e9c-447a-b282-dc3dc9fc9c58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205169011s Dec 20 13:26:45.690: INFO: Pod "pod-configmaps-66565b0c-5e9c-447a-b282-dc3dc9fc9c58": Phase="Pending", Reason="", readiness=false. Elapsed: 8.211795296s Dec 20 13:26:47.698: INFO: Pod "pod-configmaps-66565b0c-5e9c-447a-b282-dc3dc9fc9c58": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.220367951s STEP: Saw pod success Dec 20 13:26:47.698: INFO: Pod "pod-configmaps-66565b0c-5e9c-447a-b282-dc3dc9fc9c58" satisfied condition "success or failure" Dec 20 13:26:47.703: INFO: Trying to get logs from node iruya-node pod pod-configmaps-66565b0c-5e9c-447a-b282-dc3dc9fc9c58 container configmap-volume-test: STEP: delete the pod Dec 20 13:26:47.854: INFO: Waiting for pod pod-configmaps-66565b0c-5e9c-447a-b282-dc3dc9fc9c58 to disappear Dec 20 13:26:47.870: INFO: Pod pod-configmaps-66565b0c-5e9c-447a-b282-dc3dc9fc9c58 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:26:47.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1776" for this suite. Dec 20 13:26:53.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:26:54.033: INFO: namespace configmap-1776 deletion completed in 6.149076481s • [SLOW TEST:16.820 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:26:54.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-9c50417e-7d71-4930-a3a6-08cd5c999089 STEP: Creating configMap with name cm-test-opt-upd-138f80e5-1952-415f-821e-672a114ac344 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9c50417e-7d71-4930-a3a6-08cd5c999089 STEP: Updating configmap cm-test-opt-upd-138f80e5-1952-415f-821e-672a114ac344 STEP: Creating configMap with name cm-test-opt-create-23806b4e-54ab-4647-84ff-2041cf47c84d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:27:10.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4737" for this suite. 
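Both ConfigMap specs above come down to volume plumbing. The first mounts the same ConfigMap at two paths inside one pod; the second marks its sources optional, so the kubelet tolerates a ConfigMap being deleted out from under a running pod and propagates creates and updates into the mounted files. A minimal sketch of the two-mount case, names illustrative (the optional flag on the second volume nods at the projected spec's behaviour):

    $ kubectl create configmap demo-cm --from-literal=data-1=value-1
    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-twice
    spec:
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
        volumeMounts:
        - {name: cm-a, mountPath: /etc/cm-a}
        - {name: cm-b, mountPath: /etc/cm-b}
      volumes:
      - name: cm-a
        configMap: {name: demo-cm}
      - name: cm-b
        configMap: {name: demo-cm, optional: true}
    EOF
    $ kubectl logs cm-twice    # value-1 printed twice once the pod completes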
Dec 20 13:27:32.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:27:32.806: INFO: namespace projected-4737 deletion completed in 22.1274592s • [SLOW TEST:38.773 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:27:32.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 20 13:27:32.917: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:27:50.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2078" for this suite. 
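The init-container spec asserts ordering only: on a RestartAlways pod, every entry in spec.initContainers must run to completion, in declaration order, before any app container starts. A minimal hand-rolled version:

    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-order
    spec:
      initContainers:
      - name: init-1
        image: busybox
        command: ["sh", "-c", "echo first"]
      - name: init-2
        image: busybox
        command: ["sh", "-c", "echo second"]
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
    EOF
    $ kubectl get pod init-order -o jsonpath='{.status.initContainerStatuses[*].state}'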
Dec 20 13:28:14.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:28:14.896: INFO: namespace init-container-2078 deletion completed in 24.123731578s • [SLOW TEST:42.088 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:28:14.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 20 13:28:15.100: INFO: Waiting up to 5m0s for pod "pod-ba67d08d-7c68-4c63-8e08-06cfac5a19c8" in namespace "emptydir-5154" to be "success or failure" Dec 20 13:28:15.120: INFO: Pod "pod-ba67d08d-7c68-4c63-8e08-06cfac5a19c8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.802366ms Dec 20 13:28:17.132: INFO: Pod "pod-ba67d08d-7c68-4c63-8e08-06cfac5a19c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031609554s Dec 20 13:28:19.140: INFO: Pod "pod-ba67d08d-7c68-4c63-8e08-06cfac5a19c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039594731s Dec 20 13:28:23.670: INFO: Pod "pod-ba67d08d-7c68-4c63-8e08-06cfac5a19c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.569848425s Dec 20 13:28:25.681: INFO: Pod "pod-ba67d08d-7c68-4c63-8e08-06cfac5a19c8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.580116952s Dec 20 13:28:27.689: INFO: Pod "pod-ba67d08d-7c68-4c63-8e08-06cfac5a19c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.588862291s STEP: Saw pod success Dec 20 13:28:27.690: INFO: Pod "pod-ba67d08d-7c68-4c63-8e08-06cfac5a19c8" satisfied condition "success or failure" Dec 20 13:28:27.694: INFO: Trying to get logs from node iruya-node pod pod-ba67d08d-7c68-4c63-8e08-06cfac5a19c8 container test-container: STEP: delete the pod Dec 20 13:28:27.853: INFO: Waiting for pod pod-ba67d08d-7c68-4c63-8e08-06cfac5a19c8 to disappear Dec 20 13:28:27.865: INFO: Pod pod-ba67d08d-7c68-4c63-8e08-06cfac5a19c8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:28:27.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5154" for this suite. 
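The emptyDir spec writes a file into a tmpfs-backed (medium: Memory) volume as a non-root user and checks the resulting mode. An approximation with an illustrative UID; the kubelet creates the emptyDir world-writable, which is what lets UID 1000 write into it at all:

    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: tmpfs-mode
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "touch /mnt/f && chmod 0644 /mnt/f && ls -ln /mnt/f"]
        volumeMounts:
        - {name: scratch, mountPath: /mnt}
      volumes:
      - name: scratch
        emptyDir: {medium: Memory}
    EOF
    $ kubectl logs tmpfs-mode    # expect -rw-r--r-- owned by uid 1000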
Dec 20 13:28:34.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:28:34.412: INFO: namespace emptydir-5154 deletion completed in 6.538814571s • [SLOW TEST:19.517 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:28:34.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Dec 20 13:28:34.686: INFO: Waiting up to 5m0s for pod "pod-e5c03255-890d-4cf5-b793-b729a0e92391" in namespace "emptydir-2287" to be "success or failure" Dec 20 13:28:34.713: INFO: Pod "pod-e5c03255-890d-4cf5-b793-b729a0e92391": Phase="Pending", Reason="", readiness=false. Elapsed: 27.213689ms Dec 20 13:28:36.722: INFO: Pod "pod-e5c03255-890d-4cf5-b793-b729a0e92391": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035976944s Dec 20 13:28:38.738: INFO: Pod "pod-e5c03255-890d-4cf5-b793-b729a0e92391": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052476162s Dec 20 13:28:40.761: INFO: Pod "pod-e5c03255-890d-4cf5-b793-b729a0e92391": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075045926s Dec 20 13:28:42.779: INFO: Pod "pod-e5c03255-890d-4cf5-b793-b729a0e92391": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093051848s STEP: Saw pod success Dec 20 13:28:42.779: INFO: Pod "pod-e5c03255-890d-4cf5-b793-b729a0e92391" satisfied condition "success or failure" Dec 20 13:28:42.789: INFO: Trying to get logs from node iruya-node pod pod-e5c03255-890d-4cf5-b793-b729a0e92391 container test-container: STEP: delete the pod Dec 20 13:28:42.848: INFO: Waiting for pod pod-e5c03255-890d-4cf5-b793-b729a0e92391 to disappear Dec 20 13:28:42.854: INFO: Pod pod-e5c03255-890d-4cf5-b793-b729a0e92391 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:28:42.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2287" for this suite. 
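The companion spec inspects the mount point itself rather than a file in it: with medium: Memory the volume should surface as a tmpfs mount with directory mode 0777. Given a long-running variant of the pod sketched above (its command swapped for sleep 3600), the check reduces to:

    $ kubectl exec tmpfs-mode -- sh -c 'stat -c %a /mnt; grep " /mnt " /proc/mounts'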
Dec 20 13:28:48.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:28:48.995: INFO: namespace emptydir-2287 deletion completed in 6.134574796s • [SLOW TEST:14.580 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:28:48.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-cf3d26aa-ce58-4b17-97d6-4129ee80bc24 STEP: Creating a pod to test consume secrets Dec 20 13:28:49.080: INFO: Waiting up to 5m0s for pod "pod-secrets-d52b15f8-c7a4-49f2-896f-48c2f67570bb" in namespace "secrets-5194" to be "success or failure" Dec 20 13:28:49.211: INFO: Pod "pod-secrets-d52b15f8-c7a4-49f2-896f-48c2f67570bb": Phase="Pending", Reason="", readiness=false. Elapsed: 130.872944ms Dec 20 13:28:51.221: INFO: Pod "pod-secrets-d52b15f8-c7a4-49f2-896f-48c2f67570bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141216414s Dec 20 13:28:53.236: INFO: Pod "pod-secrets-d52b15f8-c7a4-49f2-896f-48c2f67570bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156371165s Dec 20 13:28:55.249: INFO: Pod "pod-secrets-d52b15f8-c7a4-49f2-896f-48c2f67570bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168717972s Dec 20 13:28:57.258: INFO: Pod "pod-secrets-d52b15f8-c7a4-49f2-896f-48c2f67570bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177886107s Dec 20 13:28:59.267: INFO: Pod "pod-secrets-d52b15f8-c7a4-49f2-896f-48c2f67570bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.187343141s STEP: Saw pod success Dec 20 13:28:59.267: INFO: Pod "pod-secrets-d52b15f8-c7a4-49f2-896f-48c2f67570bb" satisfied condition "success or failure" Dec 20 13:28:59.271: INFO: Trying to get logs from node iruya-node pod pod-secrets-d52b15f8-c7a4-49f2-896f-48c2f67570bb container secret-volume-test: STEP: delete the pod Dec 20 13:28:59.317: INFO: Waiting for pod pod-secrets-d52b15f8-c7a4-49f2-896f-48c2f67570bb to disappear Dec 20 13:28:59.326: INFO: Pod pod-secrets-d52b15f8-c7a4-49f2-896f-48c2f67570bb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:28:59.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5194" for this suite. 
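The "mappings" in the Secret spec are the items list on a secret volume, which projects selected keys to arbitrary relative paths (optionally with a per-file mode) instead of producing one file per key. Sketch, names illustrative:

    $ kubectl create secret generic demo-secret --from-literal=data-1=value-1
    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mapped
    spec:
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox
        command: ["cat", "/etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - {name: secret-volume, mountPath: /etc/secret-volume}
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400
    EOF
    $ kubectl logs secret-mapped    # value-1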
Dec 20 13:29:05.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:29:05.518: INFO: namespace secrets-5194 deletion completed in 6.180503447s • [SLOW TEST:16.523 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:29:05.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6336 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Dec 20 13:29:05.615: INFO: Found 0 stateful pods, waiting for 3 Dec 20 13:29:15.626: INFO: Found 2 stateful pods, waiting for 3 Dec 20 13:29:25.630: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:29:25.630: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:29:25.630: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 20 13:29:35.628: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:29:35.628: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:29:35.628: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:29:35.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6336 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 20 13:29:38.454: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 20 13:29:38.454: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 20 13:29:38.454: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Dec 20 13:29:48.544: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Dec 20 13:29:58.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6336 ss2-1 -- /bin/sh -x -c mv 
-v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:29:59.153: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 20 13:29:59.153: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 20 13:29:59.153: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 20 13:30:09.258: INFO: Waiting for StatefulSet statefulset-6336/ss2 to complete update Dec 20 13:30:09.258: INFO: Waiting for Pod statefulset-6336/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 20 13:30:09.258: INFO: Waiting for Pod statefulset-6336/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 20 13:30:19.269: INFO: Waiting for StatefulSet statefulset-6336/ss2 to complete update Dec 20 13:30:19.269: INFO: Waiting for Pod statefulset-6336/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 20 13:30:19.269: INFO: Waiting for Pod statefulset-6336/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 20 13:30:29.417: INFO: Waiting for StatefulSet statefulset-6336/ss2 to complete update Dec 20 13:30:29.417: INFO: Waiting for Pod statefulset-6336/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 20 13:30:39.277: INFO: Waiting for StatefulSet statefulset-6336/ss2 to complete update Dec 20 13:30:39.277: INFO: Waiting for Pod statefulset-6336/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 20 13:30:49.271: INFO: Waiting for StatefulSet statefulset-6336/ss2 to complete update STEP: Rolling back to a previous revision Dec 20 13:30:59.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6336 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 20 13:30:59.785: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 20 13:30:59.785: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 20 13:30:59.785: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 20 13:31:09.892: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Dec 20 13:31:20.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6336 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:31:20.434: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 20 13:31:20.434: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 20 13:31:20.434: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 20 13:31:30.488: INFO: Waiting for StatefulSet statefulset-6336/ss2 to complete update Dec 20 13:31:30.488: INFO: Waiting for Pod statefulset-6336/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 20 13:31:30.488: INFO: Waiting for Pod statefulset-6336/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 20 13:31:40.515: INFO: Waiting for StatefulSet statefulset-6336/ss2 to complete update Dec 20 13:31:40.515: INFO: Waiting for Pod statefulset-6336/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 20 13:31:40.515: INFO: Waiting for Pod statefulset-6336/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 20 
13:31:50.514: INFO: Waiting for StatefulSet statefulset-6336/ss2 to complete update Dec 20 13:31:50.514: INFO: Waiting for Pod statefulset-6336/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 20 13:32:00.522: INFO: Waiting for StatefulSet statefulset-6336/ss2 to complete update Dec 20 13:32:00.522: INFO: Waiting for Pod statefulset-6336/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 20 13:32:10.506: INFO: Deleting all statefulset in ns statefulset-6336 Dec 20 13:32:10.511: INFO: Scaling statefulset ss2 to 0 Dec 20 13:32:50.564: INFO: Waiting for statefulset status.replicas updated to 0 Dec 20 13:32:50.577: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:32:50.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6336" for this suite. Dec 20 13:32:58.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:32:58.894: INFO: namespace statefulset-6336 deletion completed in 8.250320727s • [SLOW TEST:233.375 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:32:58.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Dec 20 13:32:59.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9413' Dec 20 13:32:59.508: INFO: stderr: "" Dec 20 13:32:59.508: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
Dec 20 13:33:00.548: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:33:00.548: INFO: Found 0 / 1 Dec 20 13:33:01.522: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:33:01.522: INFO: Found 0 / 1 Dec 20 13:33:02.534: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:33:02.535: INFO: Found 0 / 1 Dec 20 13:33:03.515: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:33:03.515: INFO: Found 0 / 1 Dec 20 13:33:04.523: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:33:04.523: INFO: Found 0 / 1 Dec 20 13:33:05.517: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:33:05.517: INFO: Found 0 / 1 Dec 20 13:33:06.518: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:33:06.518: INFO: Found 0 / 1 Dec 20 13:33:07.519: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:33:07.519: INFO: Found 0 / 1 Dec 20 13:33:08.521: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:33:08.521: INFO: Found 0 / 1 Dec 20 13:33:09.517: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:33:09.517: INFO: Found 1 / 1 Dec 20 13:33:09.517: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 20 13:33:09.522: INFO: Selector matched 1 pods for map[app:redis] Dec 20 13:33:09.522: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Dec 20 13:33:09.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sn6b8 redis-master --namespace=kubectl-9413' Dec 20 13:33:09.754: INFO: stderr: "" Dec 20 13:33:09.754: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Dec 13:33:07.547 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Dec 13:33:07.548 # Server started, Redis version 3.2.12\n1:M 20 Dec 13:33:07.548 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 20 Dec 13:33:07.548 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Dec 20 13:33:09.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sn6b8 redis-master --namespace=kubectl-9413 --tail=1' Dec 20 13:33:09.948: INFO: stderr: "" Dec 20 13:33:09.948: INFO: stdout: "1:M 20 Dec 13:33:07.548 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Dec 20 13:33:09.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sn6b8 redis-master --namespace=kubectl-9413 --limit-bytes=1' Dec 20 13:33:10.078: INFO: stderr: "" Dec 20 13:33:10.078: INFO: stdout: " " STEP: exposing timestamps Dec 20 13:33:10.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sn6b8 redis-master --namespace=kubectl-9413 --tail=1 --timestamps' Dec 20 13:33:10.186: INFO: stderr: "" Dec 20 13:33:10.186: INFO: stdout: "2019-12-20T13:33:07.549200893Z 1:M 20 Dec 13:33:07.548 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Dec 20 13:33:12.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sn6b8 redis-master --namespace=kubectl-9413 --since=1s' Dec 20 13:33:12.856: INFO: stderr: "" Dec 20 13:33:12.856: INFO: stdout: "" Dec 20 13:33:12.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sn6b8 redis-master --namespace=kubectl-9413 --since=24h' Dec 20 13:33:13.024: INFO: stderr: "" Dec 20 13:33:13.025: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Dec 13:33:07.547 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Dec 13:33:07.548 # Server started, Redis version 3.2.12\n1:M 20 Dec 13:33:07.548 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Dec 13:33:07.548 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Dec 20 13:33:13.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9413' Dec 20 13:33:13.163: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 20 13:33:13.163: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Dec 20 13:33:13.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9413' Dec 20 13:33:13.286: INFO: stderr: "No resources found.\n" Dec 20 13:33:13.286: INFO: stdout: "" Dec 20 13:33:13.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9413 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 20 13:33:13.358: INFO: stderr: "" Dec 20 13:33:13.358: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:33:13.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9413" for this suite. Dec 20 13:33:35.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:33:35.596: INFO: namespace kubectl-9413 deletion completed in 22.223171812s • [SLOW TEST:36.699 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:33:35.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2603 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Dec 20 13:33:35.852: INFO: Found 0 stateful pods, waiting for 3 Dec 20 13:33:46.180: INFO: Found 2 stateful pods, waiting for 3 Dec 20 13:33:55.863: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:33:55.863: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:33:55.863: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 20 13:34:05.867: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:34:05.867: 
INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:34:05.867: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Dec 20 13:34:05.906: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Dec 20 13:34:15.972: INFO: Updating stateful set ss2 Dec 20 13:34:15.987: INFO: Waiting for Pod statefulset-2603/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 20 13:34:26.003: INFO: Waiting for Pod statefulset-2603/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Dec 20 13:34:36.238: INFO: Found 2 stateful pods, waiting for 3 Dec 20 13:34:46.250: INFO: Found 2 stateful pods, waiting for 3 Dec 20 13:34:56.251: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:34:56.251: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:34:56.251: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 20 13:35:06.248: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:35:06.248: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:35:06.248: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Dec 20 13:35:06.279: INFO: Updating stateful set ss2 Dec 20 13:35:06.306: INFO: Waiting for Pod statefulset-2603/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 20 13:35:16.320: INFO: Waiting for Pod statefulset-2603/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 20 13:35:26.355: INFO: Updating stateful set ss2 Dec 20 13:35:26.470: INFO: Waiting for StatefulSet statefulset-2603/ss2 to complete update Dec 20 13:35:26.471: INFO: Waiting for Pod statefulset-2603/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 20 13:35:36.491: INFO: Waiting for StatefulSet statefulset-2603/ss2 to complete update Dec 20 13:35:36.492: INFO: Waiting for Pod statefulset-2603/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 20 13:35:47.737: INFO: Waiting for StatefulSet statefulset-2603/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 20 13:35:56.516: INFO: Deleting all statefulset in ns statefulset-2603 Dec 20 13:35:56.525: INFO: Scaling statefulset ss2 to 0 Dec 20 13:36:26.625: INFO: Waiting for statefulset status.replicas updated to 0 Dec 20 13:36:26.635: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:36:26.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2603" for this suite. 
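Both StatefulSet specs hinge on the RollingUpdate strategy's partition field: ordinals greater than or equal to the partition move to the new controller revision, lower ordinals stay on the old one, so a canary is partition = replicas - 1 and a phased roll-out (or a rollback, after restoring the old template) is just lowering the partition step by step. A sketch against the ss2 set from the log, assuming spec.updateStrategy.rollingUpdate is already populated (otherwise the JSON-patch replace would have to be an add):

    # canary: only ordinal 2 picks up the new template
    $ kubectl patch statefulset ss2 -n statefulset-2603 --type=json \
        -p '[{"op":"replace","path":"/spec/updateStrategy/rollingUpdate/partition","value":2}]'
    $ kubectl patch statefulset ss2 -n statefulset-2603 --type=json \
        -p '[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/nginx:1.15-alpine"}]'
    # phase the rest forward by lowering the partition
    $ kubectl patch statefulset ss2 -n statefulset-2603 --type=json \
        -p '[{"op":"replace","path":"/spec/updateStrategy/rollingUpdate/partition","value":0}]'
    $ kubectl rollout status statefulset/ss2 -n statefulset-2603

The log-retrieval spec sitting between the two StatefulSet runs is, for its part, a straight walk of the kubectl logs filter flags, which compose freely:

    $ kubectl logs redis-master-sn6b8 redis-master -n kubectl-9413 --tail=1 --timestamps
    $ kubectl logs redis-master-sn6b8 redis-master -n kubectl-9413 --limit-bytes=1
    $ kubectl logs redis-master-sn6b8 redis-master -n kubectl-9413 --since=24h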
Dec 20 13:36:34.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:36:34.858: INFO: namespace statefulset-2603 deletion completed in 8.148745761s • [SLOW TEST:179.261 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:36:34.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 20 13:36:34.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Dec 20 13:36:35.446: INFO: stderr: "" Dec 20 13:36:35.446: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:36:35.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6480" for this suite. 
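The version spec only asserts that both the client and server stanzas appear in the output; anything scripted against it is better served by structured output, e.g.:

    $ kubectl version -o json | grep -E '"(major|minor|gitVersion)"'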
Dec 20 13:36:43.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:36:43.684: INFO: namespace kubectl-6480 deletion completed in 8.224459325s • [SLOW TEST:8.825 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:36:43.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Dec 20 13:36:43.792: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix831611887/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:36:43.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9367" for this suite. 
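The proxy spec binds the API proxy to a Unix domain socket instead of a TCP port, so any local HTTP client that can speak over a socket reaches the API without further authentication. By hand:

    $ kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
    $ curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/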
Dec 20 13:36:49.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:36:50.050: INFO: namespace kubectl-9367 deletion completed in 6.164656083s • [SLOW TEST:6.364 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:36:50.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 20 13:36:50.140: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:37:05.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9213" for this suite. 
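This init-container spec is the negative of the earlier one: with restartPolicy: Never a failing init container is not retried, the app containers never start, and the pod phase lands in Failed. Hand-rolled:

    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail
    spec:
      restartPolicy: Never
      initContainers:
      - name: bad-init
        image: busybox
        command: ["sh", "-c", "exit 1"]
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
    EOF
    $ kubectl get pod init-fail -o jsonpath='{.status.phase}'    # Failed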
Dec 20 13:37:11.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:37:11.750: INFO: namespace init-container-9213 deletion completed in 6.169009045s • [SLOW TEST:21.700 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:37:11.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Dec 20 13:37:11.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-327' Dec 20 13:37:12.597: INFO: stderr: "" Dec 20 13:37:12.597: INFO: stdout: "pod/pause created\n" Dec 20 13:37:12.597: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Dec 20 13:37:12.598: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-327" to be "running and ready" Dec 20 13:37:13.700: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 1.10273674s Dec 20 13:37:15.710: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.112444891s Dec 20 13:37:17.719: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.121130932s Dec 20 13:37:19.728: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.130258358s Dec 20 13:37:21.737: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.139827946s Dec 20 13:37:23.750: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 11.152654005s Dec 20 13:37:23.750: INFO: Pod "pause" satisfied condition "running and ready" Dec 20 13:37:23.750: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Dec 20 13:37:23.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-327' Dec 20 13:37:23.942: INFO: stderr: "" Dec 20 13:37:23.943: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Dec 20 13:37:23.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-327' Dec 20 13:37:24.133: INFO: stderr: "" Dec 20 13:37:24.133: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 12s testing-label-value\n" STEP: removing the label testing-label of a pod Dec 20 13:37:24.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-327' Dec 20 13:37:24.277: INFO: stderr: "" Dec 20 13:37:24.277: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Dec 20 13:37:24.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-327' Dec 20 13:37:24.390: INFO: stderr: "" Dec 20 13:37:24.390: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 12s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Dec 20 13:37:24.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-327' Dec 20 13:37:24.505: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 20 13:37:24.506: INFO: stdout: "pod \"pause\" force deleted\n" Dec 20 13:37:24.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-327' Dec 20 13:37:24.628: INFO: stderr: "No resources found.\n" Dec 20 13:37:24.629: INFO: stdout: "" Dec 20 13:37:24.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-327 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 20 13:37:24.853: INFO: stderr: "" Dec 20 13:37:24.853: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:37:24.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-327" for this suite. 
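Condensed from the run above, the label lifecycle is three kubectl invocations; a trailing dash on the key deletes the label:

kubectl label pod pause testing-label=testing-label-value --namespace=kubectl-327   # add the label
kubectl get pod pause -L testing-label --namespace=kubectl-327                      # show it as a column
kubectl label pod pause testing-label- --namespace=kubectl-327                      # trailing '-' removes it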
Dec 20 13:37:30.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:37:30.992: INFO: namespace kubectl-327 deletion completed in 6.130656031s • [SLOW TEST:19.241 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:37:30.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 20 13:37:31.159: INFO: Pod name rollover-pod: Found 0 pods out of 1 Dec 20 13:37:36.172: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 20 13:37:40.182: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Dec 20 13:37:42.190: INFO: Creating deployment "test-rollover-deployment" Dec 20 13:37:42.213: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Dec 20 13:37:44.228: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Dec 20 13:37:44.235: INFO: Ensure that both replica sets have 1 created replica Dec 20 13:37:44.240: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Dec 20 13:37:44.250: INFO: Updating deployment test-rollover-deployment Dec 20 13:37:44.250: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Dec 20 13:37:46.276: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Dec 20 13:37:46.285: INFO: Make sure deployment "test-rollover-deployment" is complete Dec 20 13:37:46.291: INFO: all replica sets need to contain the pod-template-hash label Dec 20 13:37:46.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445865, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, 
loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:37:48.304: INFO: all replica sets need to contain the pod-template-hash label Dec 20 13:37:48.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445865, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:37:50.304: INFO: all replica sets need to contain the pod-template-hash label Dec 20 13:37:50.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445865, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:37:52.315: INFO: all replica sets need to contain the pod-template-hash label Dec 20 13:37:52.315: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445865, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:37:54.304: INFO: all replica sets need to contain the pod-template-hash label Dec 20 13:37:54.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445865, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:37:56.303: INFO: all replica sets need to contain the pod-template-hash label Dec 20 13:37:56.303: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445875, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:37:58.308: INFO: all replica sets need to contain the pod-template-hash label Dec 20 13:37:58.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445875, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:38:00.305: INFO: all replica sets need to contain the pod-template-hash label Dec 20 13:38:00.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445875, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:38:02.311: INFO: all replica sets need to contain the pod-template-hash label Dec 20 13:38:02.311: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445875, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:38:04.304: INFO: all replica sets need to contain the pod-template-hash label Dec 20 13:38:04.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445875, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712445862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:38:06.328: INFO: Dec 20 13:38:06.328: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 20 13:38:06.342: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3222,SelfLink:/apis/apps/v1/namespaces/deployment-3222/deployments/test-rollover-deployment,UID:f856d0b5-2799-4443-a035-97a24e329e5c,ResourceVersion:17392976,Generation:2,CreationTimestamp:2019-12-20 13:37:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-20 13:37:42 +0000 UTC 2019-12-20 13:37:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-20 13:38:05 +0000 UTC 2019-12-20 13:37:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 20 13:38:06.348: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3222,SelfLink:/apis/apps/v1/namespaces/deployment-3222/replicasets/test-rollover-deployment-854595fc44,UID:4f41fb3d-3fc9-47fa-9dfe-6bf991183832,ResourceVersion:17392966,Generation:2,CreationTimestamp:2019-12-20 13:37:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f856d0b5-2799-4443-a035-97a24e329e5c 0xc002b90477 0xc002b90478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 20 13:38:06.348: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Dec 20 13:38:06.348: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3222,SelfLink:/apis/apps/v1/namespaces/deployment-3222/replicasets/test-rollover-controller,UID:29a0c5af-cf61-4066-b94d-568d4313b7d2,ResourceVersion:17392975,Generation:2,CreationTimestamp:2019-12-20 13:37:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f856d0b5-2799-4443-a035-97a24e329e5c 0xc002b903a7 0xc002b903a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 20 13:38:06.349: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3222,SelfLink:/apis/apps/v1/namespaces/deployment-3222/replicasets/test-rollover-deployment-9b8b997cf,UID:a08b5f93-f86c-42cf-849a-656384699a28,ResourceVersion:17392930,Generation:2,CreationTimestamp:2019-12-20 13:37:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f856d0b5-2799-4443-a035-97a24e329e5c 0xc002b90540 0xc002b90541}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 20 13:38:06.357: INFO: Pod "test-rollover-deployment-854595fc44-hbsm4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-hbsm4,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3222,SelfLink:/api/v1/namespaces/deployment-3222/pods/test-rollover-deployment-854595fc44-hbsm4,UID:9738371e-f9fa-4080-a2e2-923855b3668d,ResourceVersion:17392949,Generation:0,CreationTimestamp:2019-12-20 13:37:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 4f41fb3d-3fc9-47fa-9dfe-6bf991183832 0xc002b960f7 0xc002b960f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-m9xvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m9xvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-m9xvp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b96170} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b96190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:37:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:37:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 2019-12-20 13:37:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:37:45 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-20 13:37:45 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-20 13:37:54 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://402c5ffd7ff92a73eaf233ea87e3ce18ccfbe020fbb0c700bd3ea52675dbbffe}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:38:06.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3222" for this suite. Dec 20 13:38:12.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:38:12.633: INFO: namespace deployment-3222 deletion completed in 6.263353392s • [SLOW TEST:41.641 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:38:12.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Dec 20 13:38:24.833: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Dec 20 13:38:40.007: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:38:40.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6637" for this suite. 
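A rough equivalent of the graceful delete exercised above, assuming a proxy on port 8001; POD_NAME is a placeholder, since the suite never prints the generated pod name:

kubectl proxy -p 8001 &
# DELETE with an explicit DeleteOptions body; the kubelet is given the grace period to stop the pod
curl -X DELETE "http://127.0.0.1:8001/api/v1/namespaces/pods-6637/pods/POD_NAME" \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","gracePeriodSeconds":30}'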
Dec 20 13:38:46.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:38:46.259: INFO: namespace pods-6637 deletion completed in 6.227214842s • [SLOW TEST:33.625 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:38:46.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4351 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4351 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4351 Dec 20 13:38:46.485: INFO: Found 0 stateful pods, waiting for 1 Dec 20 13:38:56.498: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 20 13:38:56.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4351 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 20 13:38:57.139: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 20 13:38:57.139: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 20 13:38:57.139: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 20 13:38:57.149: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 20 13:39:07.158: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 20 13:39:07.158: INFO: Waiting for statefulset status.replicas updated to 0 Dec 20 13:39:07.200: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999491s Dec 20 13:39:08.211: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.984497246s Dec 20 13:39:09.222: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.972974596s Dec 20 13:39:10.232: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 6.962236098s Dec 20 13:39:11.246: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.952476411s Dec 20 13:39:12.259: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.93811232s Dec 20 13:39:13.271: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.925791527s Dec 20 13:39:14.280: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.913576032s Dec 20 13:39:15.289: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.904760778s Dec 20 13:39:16.301: INFO: Verifying statefulset ss doesn't scale past 1 for another 895.253969ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4351 Dec 20 13:39:17.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4351 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:39:18.130: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 20 13:39:18.130: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 20 13:39:18.130: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 20 13:39:18.145: INFO: Found 1 stateful pods, waiting for 3 Dec 20 13:39:28.158: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:39:28.158: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:39:28.158: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 20 13:39:38.156: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:39:38.156: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:39:38.156: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 20 13:39:38.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4351 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 20 13:39:41.168: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 20 13:39:41.168: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 20 13:39:41.168: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 20 13:39:41.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4351 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 20 13:39:41.799: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 20 13:39:41.799: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 20 13:39:41.799: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 20 13:39:41.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4351 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 20 13:39:42.896: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 20 13:39:42.896: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 20 13:39:42.896: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 20 13:39:42.896: INFO: Waiting for statefulset status.replicas updated to 0 Dec 20 13:39:42.947: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 20 13:39:42.947: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 20 13:39:42.947: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 20 13:39:42.973: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999548s Dec 20 13:39:44.105: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986411876s Dec 20 13:39:45.114: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.854401417s Dec 20 13:39:46.122: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.845630403s Dec 20 13:39:47.137: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.837139429s Dec 20 13:39:48.146: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.822298001s Dec 20 13:39:49.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.813962909s Dec 20 13:39:50.165: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.805644187s Dec 20 13:39:51.176: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.794309279s Dec 20 13:39:52.188: INFO: Verifying statefulset ss doesn't scale past 3 for another 783.305011ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4351 Dec 20 13:39:53.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4351 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:39:53.723: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 20 13:39:53.723: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 20 13:39:53.723: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 20 13:39:53.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4351 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:39:54.234: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 20 13:39:54.235: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 20 13:39:54.235: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 20 13:39:54.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4351 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:39:55.013: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 20 13:39:55.013: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 20 13:39:55.014: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 20 13:39:55.014: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 20 13:40:15.100: INFO: Deleting all statefulset in ns statefulset-4351 Dec 20 13:40:15.105: INFO: Scaling statefulset ss to 0 Dec 20 13:40:15.122: INFO: Waiting for statefulset status.replicas updated to 0 Dec 20 13:40:15.128: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:40:15.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4351" for this suite. Dec 20 13:40:23.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:40:23.405: INFO: namespace statefulset-4351 deletion completed in 8.220736669s • [SLOW TEST:97.146 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:40:23.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 20 13:40:23.688: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8ba9f7bb-24a6-4f0d-b41e-ba0f97ce6481", Controller:(*bool)(0xc00219257a), BlockOwnerDeletion:(*bool)(0xc00219257b)}} Dec 20 13:40:23.704: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"0bdd73f7-823a-45ad-9fde-111d0310f506", Controller:(*bool)(0xc00219271a), BlockOwnerDeletion:(*bool)(0xc00219271b)}} Dec 20 13:40:23.785: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"be301436-6c57-4f58-89e7-6b8567c9204a", Controller:(*bool)(0xc0027076b2), BlockOwnerDeletion:(*bool)(0xc0027076b3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:40:28.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1467" for this suite. 
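The circular ownership printed above (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) is built through metadata.ownerReferences. A hand-rolled sketch of one such link — the patch is illustrative, not how the suite wires it internally:

# The uid must be read from the live owner object; an ownerReference without it is rejected
OWNER_UID=$(kubectl get pod pod3 -o jsonpath='{.metadata.uid}')
kubectl patch pod pod1 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"${OWNER_UID}\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
# The garbage collector tolerates the cycle: deleting any one pod eventually collects all three.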
Dec 20 13:40:34.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:40:35.098: INFO: namespace gc-1467 deletion completed in 6.213026074s • [SLOW TEST:11.689 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:40:35.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2763.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2763.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2763.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2763.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2763.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2763.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2763.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2763.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2763.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2763.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2763.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2763.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2763.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 217.94.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.94.217_udp@PTR;check="$$(dig +tcp +noall +answer +search 217.94.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.94.217_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2763.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2763.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2763.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2763.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2763.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2763.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2763.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2763.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2763.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2763.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2763.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2763.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2763.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 217.94.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.94.217_udp@PTR;check="$$(dig +tcp +noall +answer +search 217.94.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.94.217_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 20 13:40:47.570: INFO: Unable to read wheezy_udp@dns-test-service.dns-2763.svc.cluster.local from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.637: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2763.svc.cluster.local from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.648: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2763.svc.cluster.local from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.654: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2763.svc.cluster.local from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.662: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-2763.svc.cluster.local from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.674: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-2763.svc.cluster.local from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.681: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.690: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.697: INFO: Unable to read 10.105.94.217_udp@PTR from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.702: INFO: Unable to read 10.105.94.217_tcp@PTR from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.734: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2763.svc.cluster.local from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.740: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2763.svc.cluster.local from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.746: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-2763.svc.cluster.local from pod 
dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.752: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-2763.svc.cluster.local from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.756: INFO: Unable to read jessie_udp@PodARecord from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.759: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.765: INFO: Unable to read 10.105.94.217_udp@PTR from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.771: INFO: Unable to read 10.105.94.217_tcp@PTR from pod dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd: the server could not find the requested resource (get pods dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd) Dec 20 13:40:47.771: INFO: Lookups using dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd failed for: [wheezy_udp@dns-test-service.dns-2763.svc.cluster.local wheezy_tcp@dns-test-service.dns-2763.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2763.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2763.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-2763.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-2763.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.105.94.217_udp@PTR 10.105.94.217_tcp@PTR jessie_udp@_http._tcp.dns-test-service.dns-2763.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2763.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-2763.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-2763.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.105.94.217_udp@PTR 10.105.94.217_tcp@PTR] Dec 20 13:40:53.036: INFO: DNS probes using dns-2763/dns-test-1409ae99-9677-4751-9c3f-ff4f80dd94fd succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:40:53.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2763" for this suite. 
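Editor's note: the probe commands quoted above are worth unpacking. Each probe pod loops once per second over the same set of lookups and drops an OK marker file per record that resolves; the run succeeds once every marker exists. A minimal sketch of one iteration, with the doubled $$ of the templated command undone (names taken from the log; this illustrates the probe pattern, not the exact harness):

# One UDP and one TCP lookup for the service A record. The harness repeats
# this pattern for SRV records of named ports (_http._tcp.<service>...),
# for the pod's own A record, and for the PTR record of the service
# ClusterIP (10.105.94.217 -> 217.94.105.10.in-addr.arpa).
check="$(dig +notcp +noall +answer +search dns-test-service.dns-2763.svc.cluster.local A)" \
  && test -n "$check" \
  && echo OK > /results/jessie_udp@dns-test-service.dns-2763.svc.cluster.local
check="$(dig +tcp +noall +answer +search dns-test-service.dns-2763.svc.cluster.local A)" \
  && test -n "$check" \
  && echo OK > /results/jessie_tcp@dns-test-service.dns-2763.svc.cluster.local

The early "Unable to read" errors above are expected while the records propagate; the lookups converge by 13:40:53 and the probe is declared successful.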
Dec 20 13:40:59.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:40:59.643: INFO: namespace dns-2763 deletion completed in 6.140525638s • [SLOW TEST:24.544 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:40:59.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 20 13:40:59.750: INFO: Create a RollingUpdate DaemonSet Dec 20 13:40:59.756: INFO: Check that daemon pods launch on every node of the cluster Dec 20 13:40:59.771: INFO: Number of nodes with available pods: 0 Dec 20 13:40:59.771: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:41:01.524: INFO: Number of nodes with available pods: 0 Dec 20 13:41:01.524: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:41:01.945: INFO: Number of nodes with available pods: 0 Dec 20 13:41:01.945: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:41:02.806: INFO: Number of nodes with available pods: 0 Dec 20 13:41:02.806: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:41:03.794: INFO: Number of nodes with available pods: 0 Dec 20 13:41:03.794: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:41:04.784: INFO: Number of nodes with available pods: 0 Dec 20 13:41:04.784: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:41:07.138: INFO: Number of nodes with available pods: 0 Dec 20 13:41:07.138: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:41:08.238: INFO: Number of nodes with available pods: 0 Dec 20 13:41:08.238: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:41:09.100: INFO: Number of nodes with available pods: 0 Dec 20 13:41:09.100: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:41:09.802: INFO: Number of nodes with available pods: 0 Dec 20 13:41:09.802: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:41:10.789: INFO: Number of nodes with available pods: 1 Dec 20 13:41:10.789: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 20 13:41:11.800: INFO: Number of nodes with available pods: 2 Dec 20 13:41:11.800: INFO: Number of running nodes: 2, number of available pods: 2 Dec 20 13:41:11.800: INFO: Update the DaemonSet to trigger a rollout Dec 20 13:41:11.834: INFO: Updating DaemonSet daemon-set Dec 20 13:41:20.605: INFO: Roll back the DaemonSet before rollout is complete Dec 20 
13:41:20.657: INFO: Updating DaemonSet daemon-set Dec 20 13:41:20.657: INFO: Make sure DaemonSet rollback is complete Dec 20 13:41:20.679: INFO: Wrong image for pod: daemon-set-xh69n. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Dec 20 13:41:20.679: INFO: Pod daemon-set-xh69n is not available Dec 20 13:41:21.708: INFO: Wrong image for pod: daemon-set-xh69n. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Dec 20 13:41:21.708: INFO: Pod daemon-set-xh69n is not available Dec 20 13:41:22.707: INFO: Wrong image for pod: daemon-set-xh69n. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Dec 20 13:41:22.707: INFO: Pod daemon-set-xh69n is not available Dec 20 13:41:23.709: INFO: Wrong image for pod: daemon-set-xh69n. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Dec 20 13:41:23.709: INFO: Pod daemon-set-xh69n is not available Dec 20 13:41:25.854: INFO: Pod daemon-set-6n662 is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2250, will wait for the garbage collector to delete the pods Dec 20 13:41:26.452: INFO: Deleting DaemonSet.extensions daemon-set took: 76.573926ms Dec 20 13:41:27.253: INFO: Terminating DaemonSet.extensions daemon-set pods took: 801.034577ms Dec 20 13:41:36.575: INFO: Number of nodes with available pods: 0 Dec 20 13:41:36.575: INFO: Number of running nodes: 0, number of available pods: 0 Dec 20 13:41:36.584: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2250/daemonsets","resourceVersion":"17393666"},"items":null} Dec 20 13:41:36.594: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2250/pods","resourceVersion":"17393666"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:41:36.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2250" for this suite. 
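Editor's note: the rollback flow exercised above can be reproduced by hand: push the DaemonSet to an image that can never pull (foo:non-existent in the log), undo before the rollout finishes, and confirm the pods that were never replaced keep running. A hedged sketch with kubectl; the container name app is a placeholder, since the e2e DaemonSet's container name is not shown in the log:

kubectl -n daemonsets-2250 set image daemonset/daemon-set app=foo:non-existent   # start a rollout that cannot succeed
kubectl -n daemonsets-2250 rollout undo daemonset/daemon-set                     # roll back mid-rollout
kubectl -n daemonsets-2250 rollout status daemonset/daemon-set                   # converges back on nginx:1.14-alpine
kubectl -n daemonsets-2250 get pods -o wide                                      # pods that stayed healthy were not restarted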
Dec 20 13:41:42.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:41:42.777: INFO: namespace daemonsets-2250 deletion completed in 6.150717322s • [SLOW TEST:43.133 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:41:42.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 20 13:41:42.918: INFO: Waiting up to 5m0s for pod "pod-8e5dc32d-8556-4af2-995f-a57da3cdaa4f" in namespace "emptydir-4116" to be "success or failure" Dec 20 13:41:42.938: INFO: Pod "pod-8e5dc32d-8556-4af2-995f-a57da3cdaa4f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.557097ms Dec 20 13:41:44.951: INFO: Pod "pod-8e5dc32d-8556-4af2-995f-a57da3cdaa4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032364229s Dec 20 13:41:46.960: INFO: Pod "pod-8e5dc32d-8556-4af2-995f-a57da3cdaa4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041337869s Dec 20 13:41:48.972: INFO: Pod "pod-8e5dc32d-8556-4af2-995f-a57da3cdaa4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053647358s Dec 20 13:41:50.981: INFO: Pod "pod-8e5dc32d-8556-4af2-995f-a57da3cdaa4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063260722s STEP: Saw pod success Dec 20 13:41:50.982: INFO: Pod "pod-8e5dc32d-8556-4af2-995f-a57da3cdaa4f" satisfied condition "success or failure" Dec 20 13:41:50.984: INFO: Trying to get logs from node iruya-node pod pod-8e5dc32d-8556-4af2-995f-a57da3cdaa4f container test-container: STEP: delete the pod Dec 20 13:41:51.026: INFO: Waiting for pod pod-8e5dc32d-8556-4af2-995f-a57da3cdaa4f to disappear Dec 20 13:41:51.030: INFO: Pod pod-8e5dc32d-8556-4af2-995f-a57da3cdaa4f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:41:51.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4116" for this suite. 
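Editor's note: the (root,0777,tmpfs) matrix entry boils down to a pod with a memory-backed emptyDir whose mount is expected to carry 0777 permissions. A minimal sketch, assuming a plain busybox image rather than the e2e mounttest image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # print the mount's permission bits and confirm it is tmpfs-backed
    command: ["sh", "-c", "stat -c '%a' /mnt/test && mount | grep /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                # tmpfs, as in the test name
EOF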
Dec 20 13:41:57.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:41:57.141: INFO: namespace emptydir-4116 deletion completed in 6.106152786s • [SLOW TEST:14.362 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:41:57.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 20 13:41:57.223: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:42:05.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-111" for this suite. 
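Editor's note: the websocket variant exercises the same pod log endpoint that kubectl uses; the conformance point is that GET /api/v1/namespaces/{ns}/pods/{name}/log is also reachable over a websocket upgrade. Two hedged equivalents (the pod name below is a placeholder; the log does not show it):

kubectl -n pods-111 logs pod-logs-websocket --follow
# or hit the REST path directly; kubectl negotiates transport and auth:
kubectl get --raw "/api/v1/namespaces/pods-111/pods/pod-logs-websocket/log"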
Dec 20 13:43:07.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:43:07.653: INFO: namespace pods-111 deletion completed in 1m2.210706009s • [SLOW TEST:70.512 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:43:07.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:44:07.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7758" for this suite. 
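Editor's note: the contract checked above is that a failing readiness probe must keep a pod out of service endpoints without ever triggering a restart (restarts are the liveness probe's job). A minimal sketch of such a pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]     # always fails: READY stays 0/1
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod never-ready-demo -w   # expect READY 0/1 and RESTARTS 0 throughout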
Dec 20 13:44:29.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:44:29.933: INFO: namespace container-probe-7758 deletion completed in 22.198282304s • [SLOW TEST:82.280 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:44:29.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Dec 20 13:44:30.116: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4669,SelfLink:/api/v1/namespaces/watch-4669/configmaps/e2e-watch-test-label-changed,UID:9f2441ed-6afe-486e-b2be-f357d30640a6,ResourceVersion:17394003,Generation:0,CreationTimestamp:2019-12-20 13:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 20 13:44:30.117: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4669,SelfLink:/api/v1/namespaces/watch-4669/configmaps/e2e-watch-test-label-changed,UID:9f2441ed-6afe-486e-b2be-f357d30640a6,ResourceVersion:17394004,Generation:0,CreationTimestamp:2019-12-20 13:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 20 13:44:30.117: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4669,SelfLink:/api/v1/namespaces/watch-4669/configmaps/e2e-watch-test-label-changed,UID:9f2441ed-6afe-486e-b2be-f357d30640a6,ResourceVersion:17394005,Generation:0,CreationTimestamp:2019-12-20 13:44:30 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Dec 20 13:44:40.300: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4669,SelfLink:/api/v1/namespaces/watch-4669/configmaps/e2e-watch-test-label-changed,UID:9f2441ed-6afe-486e-b2be-f357d30640a6,ResourceVersion:17394020,Generation:0,CreationTimestamp:2019-12-20 13:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 20 13:44:40.301: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4669,SelfLink:/api/v1/namespaces/watch-4669/configmaps/e2e-watch-test-label-changed,UID:9f2441ed-6afe-486e-b2be-f357d30640a6,ResourceVersion:17394021,Generation:0,CreationTimestamp:2019-12-20 13:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Dec 20 13:44:40.301: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4669,SelfLink:/api/v1/namespaces/watch-4669/configmaps/e2e-watch-test-label-changed,UID:9f2441ed-6afe-486e-b2be-f357d30640a6,ResourceVersion:17394022,Generation:0,CreationTimestamp:2019-12-20 13:44:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:44:40.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4669" for this suite. 
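Editor's note: the event sequence above is the interesting property of label-selector watches: relabeling an object out of the selector is delivered to the watcher as a DELETED event, and relabeling it back is delivered as ADDED, even though the object itself was only ever modified. The same behaviour can be observed interactively (selector taken from the log):

kubectl -n watch-4669 get configmaps -l watch-this-configmap=label-changed-and-restored --watch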
Dec 20 13:44:46.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:44:46.475: INFO: namespace watch-4669 deletion completed in 6.15893051s • [SLOW TEST:16.541 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:44:46.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Dec 20 13:44:56.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-b3ad8c93-b40f-49d3-b1ed-e5391c61f0ed -c busybox-main-container --namespace=emptydir-4009 -- cat /usr/share/volumeshare/shareddata.txt' Dec 20 13:44:57.168: INFO: stderr: "" Dec 20 13:44:57.168: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:44:57.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4009" for this suite.
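Editor's note: the shared-volume pattern in this test is two containers mounting one emptyDir; whatever one writes the other can read. A hedged sketch (container roles approximate the main/sub-container split in the log, with busybox standing in for both images):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo       # hypothetical name
spec:
  containers:
  - name: busybox-main-container    # reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container     # writer
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
  volumes:
  - name: share
    emptyDir: {}
EOF
kubectl exec pod-sharedvolume-demo -c busybox-main-container -- cat /usr/share/volumeshare/shareddata.txt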
Dec 20 13:45:03.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:45:03.403: INFO: namespace emptydir-4009 deletion completed in 6.225824993s • [SLOW TEST:16.927 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:45:03.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Dec 20 13:45:03.457: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Dec 20 13:45:04.629: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created Dec 20 13:45:07.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:45:09.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:45:11.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:45:13.818: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:45:15.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:45:17.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712446304, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 20 13:45:23.372: INFO: Waited 3.546357374s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:45:24.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7648" for this suite. Dec 20 13:45:30.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:45:30.768: INFO: namespace aggregator-7648 deletion completed in 6.2753197s • [SLOW TEST:27.365 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:45:30.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 20 13:45:30.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d27367f5-6064-41c2-a0ce-b45e2b280819" in namespace "downward-api-8145" to be "success or failure" Dec 20 13:45:30.931: INFO: Pod "downwardapi-volume-d27367f5-6064-41c2-a0ce-b45e2b280819": Phase="Pending", Reason="", readiness=false. Elapsed: 8.750278ms Dec 20 13:45:32.941: INFO: Pod "downwardapi-volume-d27367f5-6064-41c2-a0ce-b45e2b280819": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018687003s Dec 20 13:45:34.947: INFO: Pod "downwardapi-volume-d27367f5-6064-41c2-a0ce-b45e2b280819": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025569363s Dec 20 13:45:36.957: INFO: Pod "downwardapi-volume-d27367f5-6064-41c2-a0ce-b45e2b280819": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.034988589s Dec 20 13:45:39.412: INFO: Pod "downwardapi-volume-d27367f5-6064-41c2-a0ce-b45e2b280819": Phase="Running", Reason="", readiness=true. Elapsed: 8.490475946s Dec 20 13:45:41.419: INFO: Pod "downwardapi-volume-d27367f5-6064-41c2-a0ce-b45e2b280819": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.497540419s STEP: Saw pod success Dec 20 13:45:41.419: INFO: Pod "downwardapi-volume-d27367f5-6064-41c2-a0ce-b45e2b280819" satisfied condition "success or failure" Dec 20 13:45:41.422: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d27367f5-6064-41c2-a0ce-b45e2b280819 container client-container: STEP: delete the pod Dec 20 13:45:41.478: INFO: Waiting for pod downwardapi-volume-d27367f5-6064-41c2-a0ce-b45e2b280819 to disappear Dec 20 13:45:41.485: INFO: Pod downwardapi-volume-d27367f5-6064-41c2-a0ce-b45e2b280819 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:45:41.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8145" for this suite. Dec 20 13:45:47.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:45:47.702: INFO: namespace downward-api-8145 deletion completed in 6.213850204s • [SLOW TEST:16.933 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:45:47.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Dec 20 13:45:47.885: INFO: Number of nodes with available pods: 0 Dec 20 13:45:47.886: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:45:49.719: INFO: Number of nodes with available pods: 0 Dec 20 13:45:49.719: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:45:49.903: INFO: Number of nodes with available pods: 0 Dec 20 13:45:49.903: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:45:50.942: INFO: Number of nodes with available pods: 0 Dec 20 13:45:50.942: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:45:51.918: INFO: Number of nodes with available pods: 0 Dec 20 13:45:51.918: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:45:52.923: INFO: Number of nodes with available pods: 0 Dec 20 13:45:52.924: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:45:55.827: INFO: Number of nodes with available pods: 0 Dec 20 13:45:55.827: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:45:55.908: INFO: Number of nodes with available pods: 0 Dec 20 13:45:55.908: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:45:58.018: INFO: Number of nodes with available pods: 0 Dec 20 13:45:58.018: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:45:58.910: INFO: Number of nodes with available pods: 1 Dec 20 13:45:58.910: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 20 13:45:59.932: INFO: Number of nodes with available pods: 2 Dec 20 13:45:59.932: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Dec 20 13:46:00.008: INFO: Number of nodes with available pods: 2 Dec 20 13:46:00.009: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8665, will wait for the garbage collector to delete the pods Dec 20 13:46:01.165: INFO: Deleting DaemonSet.extensions daemon-set took: 13.81861ms Dec 20 13:46:01.466: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.894514ms Dec 20 13:46:17.876: INFO: Number of nodes with available pods: 0 Dec 20 13:46:17.876: INFO: Number of running nodes: 0, number of available pods: 0 Dec 20 13:46:17.882: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8665/daemonsets","resourceVersion":"17394334"},"items":null} Dec 20 13:46:17.920: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8665/pods","resourceVersion":"17394334"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:46:17.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8665" for this suite. 
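Editor's note: the test above forces a daemon pod into phase 'Failed' via a status update and expects the controller to replace it. Setting a pod's phase by hand is awkward, but the same self-healing is visible by simply deleting one daemon pod and watching a successor appear (namespace reused from the log purely for illustration):

ns=daemonsets-8665
pod=$(kubectl -n "$ns" get pods -o name | head -n 1)   # pick any daemon pod
kubectl -n "$ns" delete "$pod"
kubectl -n "$ns" get pods -w                           # the DaemonSet controller schedules a replacement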
Dec 20 13:46:23.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:46:24.046: INFO: namespace daemonsets-8665 deletion completed in 6.094927354s • [SLOW TEST:36.342 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:46:24.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 20 13:46:24.230: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Dec 20 13:46:24.243: INFO: Number of nodes with available pods: 0 Dec 20 13:46:24.243: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Dec 20 13:46:24.280: INFO: Number of nodes with available pods: 0 Dec 20 13:46:24.280: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:25.291: INFO: Number of nodes with available pods: 0 Dec 20 13:46:25.291: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:26.289: INFO: Number of nodes with available pods: 0 Dec 20 13:46:26.289: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:27.291: INFO: Number of nodes with available pods: 0 Dec 20 13:46:27.291: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:28.291: INFO: Number of nodes with available pods: 0 Dec 20 13:46:28.291: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:29.307: INFO: Number of nodes with available pods: 0 Dec 20 13:46:29.307: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:30.301: INFO: Number of nodes with available pods: 0 Dec 20 13:46:30.301: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:31.291: INFO: Number of nodes with available pods: 0 Dec 20 13:46:31.291: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:32.291: INFO: Number of nodes with available pods: 0 Dec 20 13:46:32.292: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:33.306: INFO: Number of nodes with available pods: 1 Dec 20 13:46:33.306: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Dec 20 13:46:33.350: INFO: Number of nodes with available pods: 1 Dec 20 13:46:33.350: INFO: Number of running nodes: 0, number of available pods: 1 Dec 20 13:46:34.361: INFO: Number of nodes with available pods: 0 Dec 20 13:46:34.361: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Dec 20 13:46:34.410: INFO: Number of nodes with available pods: 0 Dec 20 13:46:34.410: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:35.421: INFO: Number of nodes with available pods: 0 Dec 20 13:46:35.421: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:36.421: INFO: Number of nodes with available pods: 0 Dec 20 13:46:36.421: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:37.419: INFO: Number of nodes with available pods: 0 Dec 20 13:46:37.419: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:38.425: INFO: Number of nodes with available pods: 0 Dec 20 13:46:38.425: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:39.440: INFO: Number of nodes with available pods: 0 Dec 20 13:46:39.440: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:40.421: INFO: Number of nodes with available pods: 0 Dec 20 13:46:40.421: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:41.690: INFO: Number of nodes with available pods: 0 Dec 20 13:46:41.690: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:42.427: INFO: Number of nodes with available pods: 0 Dec 20 13:46:42.427: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:43.422: INFO: Number of nodes with available pods: 0 Dec 20 13:46:43.422: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:44.420: INFO: Number of nodes with available pods: 0 Dec 20 13:46:44.420: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:45.419: INFO: Number 
of nodes with available pods: 0 Dec 20 13:46:45.419: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:46.420: INFO: Number of nodes with available pods: 0 Dec 20 13:46:46.420: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:47.423: INFO: Number of nodes with available pods: 0 Dec 20 13:46:47.423: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:48.432: INFO: Number of nodes with available pods: 0 Dec 20 13:46:48.432: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:49.423: INFO: Number of nodes with available pods: 0 Dec 20 13:46:49.423: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:50.421: INFO: Number of nodes with available pods: 0 Dec 20 13:46:50.421: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:51.418: INFO: Number of nodes with available pods: 0 Dec 20 13:46:51.418: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:52.424: INFO: Number of nodes with available pods: 0 Dec 20 13:46:52.424: INFO: Node iruya-node is running more than one daemon pod Dec 20 13:46:53.421: INFO: Number of nodes with available pods: 1 Dec 20 13:46:53.421: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4876, will wait for the garbage collector to delete the pods Dec 20 13:46:53.497: INFO: Deleting DaemonSet.extensions daemon-set took: 12.565697ms Dec 20 13:46:53.797: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.460487ms Dec 20 13:47:06.705: INFO: Number of nodes with available pods: 0 Dec 20 13:47:06.705: INFO: Number of running nodes: 0, number of available pods: 0 Dec 20 13:47:06.708: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4876/daemonsets","resourceVersion":"17394473"},"items":null} Dec 20 13:47:06.710: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4876/pods","resourceVersion":"17394473"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:47:06.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4876" for this suite. 
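Editor's note: the blue/green choreography above is plain nodeSelector scheduling: daemon pods land only on nodes whose labels match the selector, and changing the label unschedules them. A hedged sketch of the moving parts (DaemonSet name and label key hypothetical, node name from the log):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: color-daemon
spec:
  selector:
    matchLabels:
      app: color-daemon
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: color-daemon
    spec:
      nodeSelector:
        color: blue                  # schedule only onto blue nodes
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF
kubectl label node iruya-node color=blue                # daemon pod launches on the node
kubectl label node iruya-node color=green --overwrite   # pod is unscheduled again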
Dec 20 13:47:12.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:47:13.011: INFO: namespace daemonsets-4876 deletion completed in 6.198937166s • [SLOW TEST:48.965 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:47:13.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 20 13:47:33.384: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 20 13:47:33.429: INFO: Pod pod-with-poststart-http-hook still exists Dec 20 13:47:35.429: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 20 13:47:35.439: INFO: Pod pod-with-poststart-http-hook still exists Dec 20 13:47:37.429: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 20 13:47:37.440: INFO: Pod pod-with-poststart-http-hook still exists Dec 20 13:47:39.429: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 20 13:47:39.441: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:47:39.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5343" for this suite. 
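Editor's note: a postStart httpGet hook fires as soon as the container starts; the kubelet performs the GET, and a failing hook kills the container. In the e2e test the GET targets a separate handler pod created beforehand; the self-contained sketch below points the hook at the container's own nginx instead, which may race with nginx startup but illustrates the shape of the spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-http-demo         # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.14-alpine
    lifecycle:
      postStart:
        httpGet:
          path: /
          port: 80                  # the real test targets the handler pod's IP and port
EOF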
Dec 20 13:48:01.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:48:01.565: INFO: namespace container-lifecycle-hook-5343 deletion completed in 22.116565227s • [SLOW TEST:48.553 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:48:01.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-27d4d6c3-afdd-4834-977e-b63bf3b8857c STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:48:13.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5796" for this suite. 
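Editor's note: binaryData is the half of the ConfigMap API that carries base64-encoded raw bytes alongside the plain-text data map; mounted as a configMap volume, each key becomes a file holding the decoded bytes. A minimal sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-demo                 # hypothetical name
data:
  text: "hello"
binaryData:
  blob: AQIDBA==                    # base64 for the bytes 0x01 0x02 0x03 0x04
EOF
# Mounted via a configMap volume, <mountPath>/text holds "hello" and
# <mountPath>/blob holds the four raw bytes.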
Dec 20 13:48:35.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:48:36.003: INFO: namespace configmap-5796 deletion completed in 22.171148823s • [SLOW TEST:34.437 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:48:36.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Dec 20 13:48:36.100: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Dec 20 13:48:36.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9645' Dec 20 13:48:36.640: INFO: stderr: "" Dec 20 13:48:36.640: INFO: stdout: "service/redis-slave created\n" Dec 20 13:48:36.641: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Dec 20 13:48:36.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9645' Dec 20 13:48:36.951: INFO: stderr: "" Dec 20 13:48:36.951: INFO: stdout: "service/redis-master created\n" Dec 20 13:48:36.951: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Dec 20 13:48:36.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9645' Dec 20 13:48:37.622: INFO: stderr: "" Dec 20 13:48:37.622: INFO: stdout: "service/frontend created\n" Dec 20 13:48:37.623: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Dec 20 13:48:37.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9645' Dec 20 13:48:38.086: INFO: stderr: "" Dec 20 13:48:38.086: INFO: stdout: "deployment.apps/frontend created\n" Dec 20 13:48:38.086: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Dec 20 13:48:38.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9645' Dec 20 13:48:38.763: INFO: stderr: "" Dec 20 13:48:38.763: INFO: stdout: "deployment.apps/redis-master created\n" Dec 20 13:48:38.764: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Dec 20 13:48:38.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9645' Dec 20 13:48:40.151: INFO: stderr: "" Dec 20 13:48:40.151: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Dec 20 13:48:40.151: INFO: Waiting for all frontend pods to be Running. Dec 20 13:49:10.204: INFO: Waiting for frontend to serve content. Dec 20 13:49:10.316: INFO: Trying to add a new entry to the guestbook. Dec 20 13:49:10.443: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Dec 20 13:49:10.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9645' Dec 20 13:49:10.706: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Dec 20 13:49:10.706: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Dec 20 13:49:10.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9645' Dec 20 13:49:10.904: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 20 13:49:10.904: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Dec 20 13:49:10.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9645' Dec 20 13:49:11.222: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 20 13:49:11.223: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 20 13:49:11.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9645' Dec 20 13:49:11.345: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 20 13:49:11.345: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 20 13:49:11.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9645' Dec 20 13:49:11.523: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 20 13:49:11.523: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Dec 20 13:49:11.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9645' Dec 20 13:49:11.714: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 20 13:49:11.715: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:49:11.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9645" for this suite. 
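A note on the frontend Service created above: as its inline comment says, the stock guestbook manifest ships with 'type: LoadBalancer' commented out, so this run exercised a plain ClusterIP service reached from inside the cluster. On a cluster with a load-balancer provider the same Service could be exposed externally; a minimal sketch of that variant (assuming such support exists; nothing in this run used it):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer   # the line the stock manifest leaves commented out
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend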
Dec 20 13:49:51.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:49:51.999: INFO: namespace kubectl-9645 deletion completed in 40.268504841s • [SLOW TEST:75.996 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:49:51.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 20 13:49:52.133: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81f44d47-32a7-4915-ad2c-99aa94b3b0ba" in namespace "downward-api-2913" to be "success or failure" Dec 20 13:49:52.143: INFO: Pod "downwardapi-volume-81f44d47-32a7-4915-ad2c-99aa94b3b0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 9.690035ms Dec 20 13:49:54.154: INFO: Pod "downwardapi-volume-81f44d47-32a7-4915-ad2c-99aa94b3b0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020816925s Dec 20 13:49:56.162: INFO: Pod "downwardapi-volume-81f44d47-32a7-4915-ad2c-99aa94b3b0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028873927s Dec 20 13:49:58.168: INFO: Pod "downwardapi-volume-81f44d47-32a7-4915-ad2c-99aa94b3b0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035157358s Dec 20 13:50:00.180: INFO: Pod "downwardapi-volume-81f44d47-32a7-4915-ad2c-99aa94b3b0ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0465355s STEP: Saw pod success Dec 20 13:50:00.180: INFO: Pod "downwardapi-volume-81f44d47-32a7-4915-ad2c-99aa94b3b0ba" satisfied condition "success or failure" Dec 20 13:50:00.185: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-81f44d47-32a7-4915-ad2c-99aa94b3b0ba container client-container: STEP: delete the pod Dec 20 13:50:00.248: INFO: Waiting for pod downwardapi-volume-81f44d47-32a7-4915-ad2c-99aa94b3b0ba to disappear Dec 20 13:50:00.255: INFO: Pod downwardapi-volume-81f44d47-32a7-4915-ad2c-99aa94b3b0ba no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:50:00.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2913" for this suite. 
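The pod manifest this Downward API test submits is not echoed in the log; what it exercises is a downwardAPI volume whose file content is filled from the container's own memory request via a resourceFieldRef. A minimal sketch of such a pod (the container name client-container comes from the log above; the pod name, image, and request value are illustrative, not taken from this run):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the suite generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name as logged above
    image: busybox                   # illustrative stand-in image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                 # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi               # report the request in mebibytes

Such a pod runs to completion, which matches the Pending -> Succeeded progression and the "success or failure" condition logged above.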
Dec 20 13:50:06.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:50:06.570: INFO: namespace downward-api-2913 deletion completed in 6.308923193s • [SLOW TEST:14.572 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:50:06.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Dec 20 13:50:06.717: INFO: Waiting up to 5m0s for pod "pod-4b99588a-f4b0-499f-ad3c-33af03ce2098" in namespace "emptydir-7963" to be "success or failure" Dec 20 13:50:06.734: INFO: Pod "pod-4b99588a-f4b0-499f-ad3c-33af03ce2098": Phase="Pending", Reason="", readiness=false. Elapsed: 17.456517ms Dec 20 13:50:08.751: INFO: Pod "pod-4b99588a-f4b0-499f-ad3c-33af03ce2098": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033726491s Dec 20 13:50:10.775: INFO: Pod "pod-4b99588a-f4b0-499f-ad3c-33af03ce2098": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057967686s Dec 20 13:50:12.790: INFO: Pod "pod-4b99588a-f4b0-499f-ad3c-33af03ce2098": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072901828s Dec 20 13:50:14.801: INFO: Pod "pod-4b99588a-f4b0-499f-ad3c-33af03ce2098": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083835767s STEP: Saw pod success Dec 20 13:50:14.801: INFO: Pod "pod-4b99588a-f4b0-499f-ad3c-33af03ce2098" satisfied condition "success or failure" Dec 20 13:50:14.803: INFO: Trying to get logs from node iruya-node pod pod-4b99588a-f4b0-499f-ad3c-33af03ce2098 container test-container: STEP: delete the pod Dec 20 13:50:14.874: INFO: Waiting for pod pod-4b99588a-f4b0-499f-ad3c-33af03ce2098 to disappear Dec 20 13:50:14.884: INFO: Pod pod-4b99588a-f4b0-499f-ad3c-33af03ce2098 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:50:14.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7963" for this suite. 
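The (root,0666,tmpfs) triple in this test's name pins down the pod shape: run as root, create a file with mode 0666, and back the volume with tmpfs. In manifest terms, tmpfs is requested with medium: Memory on an emptyDir volume; a minimal sketch (test-container is the container name from the log; the pod name, image, and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-example        # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container             # container name as logged above
    image: busybox                   # illustrative stand-in image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs backing, per the test name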
Dec 20 13:50:20.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:50:21.131: INFO: namespace emptydir-7963 deletion completed in 6.243407367s • [SLOW TEST:14.559 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:50:21.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 20 13:50:21.258: INFO: Waiting up to 5m0s for pod "downward-api-02b26d89-66fc-4d71-9bc6-c8b944cf3ac2" in namespace "downward-api-1319" to be "success or failure" Dec 20 13:50:21.272: INFO: Pod "downward-api-02b26d89-66fc-4d71-9bc6-c8b944cf3ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.358914ms Dec 20 13:50:23.282: INFO: Pod "downward-api-02b26d89-66fc-4d71-9bc6-c8b944cf3ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024346537s Dec 20 13:50:25.294: INFO: Pod "downward-api-02b26d89-66fc-4d71-9bc6-c8b944cf3ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035806835s Dec 20 13:50:27.316: INFO: Pod "downward-api-02b26d89-66fc-4d71-9bc6-c8b944cf3ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058270635s Dec 20 13:50:29.330: INFO: Pod "downward-api-02b26d89-66fc-4d71-9bc6-c8b944cf3ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071782618s Dec 20 13:50:31.371: INFO: Pod "downward-api-02b26d89-66fc-4d71-9bc6-c8b944cf3ac2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11296171s STEP: Saw pod success Dec 20 13:50:31.371: INFO: Pod "downward-api-02b26d89-66fc-4d71-9bc6-c8b944cf3ac2" satisfied condition "success or failure" Dec 20 13:50:31.377: INFO: Trying to get logs from node iruya-node pod downward-api-02b26d89-66fc-4d71-9bc6-c8b944cf3ac2 container dapi-container: STEP: delete the pod Dec 20 13:50:31.527: INFO: Waiting for pod downward-api-02b26d89-66fc-4d71-9bc6-c8b944cf3ac2 to disappear Dec 20 13:50:31.541: INFO: Pod downward-api-02b26d89-66fc-4d71-9bc6-c8b944cf3ac2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:50:31.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1319" for this suite. 
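Unlike the volume-based Downward API test above, this one needs no volume at all: pod name, namespace, and IP are injected as environment variables through fieldRef selectors on metadata.name, metadata.namespace, and status.podIP. A minimal sketch of an equivalent pod (dapi-container is the container name from the log; everything else is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container             # container name as logged above
    image: busybox                   # illustrative stand-in image
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP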
Dec 20 13:50:37.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:50:37.799: INFO: namespace downward-api-1319 deletion completed in 6.25030062s • [SLOW TEST:16.668 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:50:37.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5191 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 20 13:50:39.299: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 20 13:51:19.988: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-5191 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 20 13:51:19.988: INFO: >>> kubeConfig: /root/.kube/config Dec 20 13:51:20.383: INFO: Waiting for endpoints: map[] Dec 20 13:51:20.392: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-5191 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 20 13:51:20.392: INFO: >>> kubeConfig: /root/.kube/config Dec 20 13:51:20.786: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 20 13:51:20.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5191" for this suite. 
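The two ExecWithOptions entries above are the heart of this networking check: the suite execs into host-test-container-pod and curls the test webserver at 10.44.0.2:8080, whose /dial endpoint in turn issues an HTTP request to each target pod (10.44.0.1 on one node, 10.32.0.4 on the other) and reports the hostname that answered; "Waiting for endpoints: map[]" in effect means no expected endpoint is still unaccounted for. The same probe can be rerun by hand as a one-shot pod; a sketch (the image and pod name are illustrative, and the pod IPs are the ephemeral ones from this particular run):

apiVersion: v1
kind: Pod
metadata:
  name: dial-probe-example           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: curlimages/curl           # illustrative; the suite execs curl in its own hostexec container
    args:
    - -g
    - -q
    - -s
    - "http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1"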
Dec 20 13:51:36.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 20 13:51:36.987: INFO: namespace pod-network-test-5191 deletion completed in 16.169631466s • [SLOW TEST:59.187 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 20 13:51:36.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1082 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-1082 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1082 Dec 20 13:51:37.157: INFO: Found 0 stateful pods, waiting for 1 Dec 20 13:51:47.181: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Dec 20 13:51:47.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 20 13:51:50.079: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 20 13:51:50.079: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 20 13:51:50.079: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 20 13:51:50.086: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 20 13:52:00.113: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 20 13:52:00.114: INFO: Waiting for statefulset status.replicas updated to 0 Dec 20 13:52:00.249: INFO: POD NODE PHASE GRACE CONDITIONS Dec 20 13:52:00.249: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 
UTC 2019-12-20 13:51:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC }] Dec 20 13:52:00.250: INFO: Dec 20 13:52:00.250: INFO: StatefulSet ss has not reached scale 3, at 1 Dec 20 13:52:02.074: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.922426419s Dec 20 13:52:03.415: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.098159302s Dec 20 13:52:04.424: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.757062412s Dec 20 13:52:07.863: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.747958391s Dec 20 13:52:08.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.308350752s Dec 20 13:52:09.905: INFO: Verifying statefulset ss doesn't scale past 3 for another 289.102569ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1082 Dec 20 13:52:10.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:52:11.437: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 20 13:52:11.437: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 20 13:52:11.437: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 20 13:52:11.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:52:11.898: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Dec 20 13:52:11.898: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 20 13:52:11.898: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 20 13:52:11.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:52:12.728: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Dec 20 13:52:12.728: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 20 13:52:12.728: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 20 13:52:12.738: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:52:12.738: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 20 13:52:12.738: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Dec 20 13:52:12.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 20 13:52:13.177: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 20 13:52:13.177: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 20 13:52:13.177: INFO: stdout of mv -v 
/usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 20 13:52:13.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 20 13:52:13.514: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 20 13:52:13.514: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 20 13:52:13.514: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 20 13:52:13.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 20 13:52:14.502: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 20 13:52:14.502: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 20 13:52:14.502: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 20 13:52:14.502: INFO: Waiting for statefulset status.replicas updated to 0 Dec 20 13:52:14.517: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 20 13:52:14.517: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 20 13:52:14.517: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 20 13:52:14.537: INFO: POD NODE PHASE GRACE CONDITIONS Dec 20 13:52:14.537: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC }] Dec 20 13:52:14.537: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:14.537: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:14.537: INFO: Dec 20 13:52:14.537: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 20 13:52:16.042: INFO: POD NODE PHASE GRACE CONDITIONS Dec 20 13:52:16.042: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC }] Dec 20 13:52:16.042: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:16.042: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:16.042: INFO: Dec 20 13:52:16.042: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 20 13:52:17.058: INFO: POD NODE PHASE GRACE CONDITIONS Dec 20 13:52:17.058: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC }] Dec 20 13:52:17.058: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:17.058: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:17.058: INFO: Dec 20 13:52:17.058: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 20 13:52:18.071: INFO: POD NODE PHASE GRACE CONDITIONS Dec 20 13:52:18.071: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 
UTC }] Dec 20 13:52:18.071: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:18.071: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:18.071: INFO: Dec 20 13:52:18.071: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 20 13:52:19.099: INFO: POD NODE PHASE GRACE CONDITIONS Dec 20 13:52:19.100: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC }] Dec 20 13:52:19.100: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:19.100: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:19.100: INFO: Dec 20 13:52:19.100: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 20 13:52:20.109: INFO: POD NODE PHASE GRACE CONDITIONS Dec 20 13:52:20.109: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC }] Dec 20 13:52:20.109: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:20.109: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:20.109: INFO: Dec 20 13:52:20.109: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 20 13:52:21.116: INFO: POD NODE PHASE GRACE CONDITIONS Dec 20 13:52:21.116: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC }] Dec 20 13:52:21.116: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:21.116: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:21.116: INFO: Dec 20 13:52:21.116: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 20 13:52:22.128: INFO: POD NODE PHASE GRACE CONDITIONS Dec 20 13:52:22.128: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC }] Dec 20 13:52:22.128: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:22.128: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:22.128: INFO: Dec 20 13:52:22.128: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 20 13:52:23.141: INFO: POD NODE PHASE GRACE CONDITIONS Dec 20 13:52:23.141: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC }] Dec 20 13:52:23.141: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:23.141: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:23.141: INFO: Dec 20 13:52:23.141: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 20 13:52:24.148: INFO: POD NODE PHASE GRACE CONDITIONS Dec 20 13:52:24.149: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:51:37 +0000 UTC }] Dec 20 13:52:24.149: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:24.149: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 
+0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 13:52:00 +0000 UTC }] Dec 20 13:52:24.149: INFO: Dec 20 13:52:24.149: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1082 Dec 20 13:52:25.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:52:25.352: INFO: rc: 1 Dec 20 13:52:25.352: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002156180 exit status 1 true [0xc0013623d8 0xc0013623f0 0xc001362408] [0xc0013623d8 0xc0013623f0 0xc001362408] [0xc0013623e8 0xc001362400] [0xba6c50 0xba6c50] 0xc001863800 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Dec 20 13:52:35.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:52:35.502: INFO: rc: 1 Dec 20 13:52:35.502: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002156240 exit status 1 true [0xc001362410 0xc001362428 0xc001362440] [0xc001362410 0xc001362428 0xc001362440] [0xc001362420 0xc001362438] [0xba6c50 0xba6c50] 0xc001621da0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 20 13:52:45.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:52:45.701: INFO: rc: 1 Dec 20 13:52:45.701: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020558f0 exit status 1 true [0xc00248a5c8 0xc00248a5e0 0xc00248a600] [0xc00248a5c8 0xc00248a5e0 0xc00248a600] [0xc00248a5d8 0xc00248a5f0] [0xba6c50 0xba6c50] 0xc00328ba40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 20 13:52:55.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:52:55.862: INFO: rc: 1 Dec 20 13:52:55.862: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not 
found [] 0xc00260db90 exit status 1 true [0xc0028945a8 0xc0028945e8 0xc002894618] [0xc0028945a8 0xc0028945e8 0xc002894618] [0xc0028945d0 0xc002894608] [0xba6c50 0xba6c50] 0xc00250ff20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 20 13:53:05.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:53:05.983: INFO: rc: 1 Dec 20 13:53:05.984: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002055a10 exit status 1 true [0xc00248a608 0xc00248a620 0xc00248a638] [0xc00248a608 0xc00248a620 0xc00248a638] [0xc00248a618 0xc00248a630] [0xba6c50 0xba6c50] 0xc00328bf20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 20 13:53:15.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:53:16.140: INFO: rc: 1 Dec 20 13:53:16.140: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00260dc50 exit status 1 true [0xc002894628 0xc002894660 0xc0028946a8] [0xc002894628 0xc002894660 0xc0028946a8] [0xc002894650 0xc002894698] [0xba6c50 0xba6c50] 0xc001b73860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 20 13:53:26.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:53:26.281: INFO: rc: 1 Dec 20 13:53:26.281: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021de090 exit status 1 true [0xc000186050 0xc00035ec88 0xc00035f0f8] [0xc000186050 0xc00035ec88 0xc00035f0f8] [0xc00035e970 0xc00035f0a8] [0xba6c50 0xba6c50] 0xc00144d980 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 20 13:53:36.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:53:36.371: INFO: rc: 1 Dec 20 13:53:36.372: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021de150 exit status 1 true [0xc00035f128 0xc00035f230 0xc00035f330] [0xc00035f128 0xc00035f230 0xc00035f330] [0xc00035f220 0xc00035f2b0] [0xba6c50 0xba6c50] 0xc001c6d500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not 
found error: exit status 1 Dec 20 13:53:46.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:53:46.615: INFO: rc: 1 Dec 20 13:53:46.616: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006c14a0 exit status 1 true [0xc001362000 0xc001362040 0xc001362080] [0xc001362000 0xc001362040 0xc001362080] [0xc001362038 0xc001362078] [0xba6c50 0xba6c50] 0xc001862900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 20 13:53:56.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:53:56.908: INFO: rc: 1 Dec 20 13:53:56.908: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021de210 exit status 1 true [0xc00035f340 0xc00035f520 0xc00035f6d0] [0xc00035f340 0xc00035f520 0xc00035f6d0] [0xc00035f4a8 0xc00035f548] [0xba6c50 0xba6c50] 0xc001da2de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 20 13:54:06.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:54:07.072: INFO: rc: 1 Dec 20 13:54:07.072: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006c1890 exit status 1 true [0xc001362088 0xc0013620a0 0xc0013620b8] [0xc001362088 0xc0013620a0 0xc0013620b8] [0xc001362098 0xc0013620b0] [0xba6c50 0xba6c50] 0xc002450de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 20 13:54:17.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:54:17.266: INFO: rc: 1 Dec 20 13:54:17.266: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00326a090 exit status 1 true [0xc00248a000 0xc00248a020 0xc00248a038] [0xc00248a000 0xc00248a020 0xc00248a038] [0xc00248a018 0xc00248a030] [0xba6c50 0xba6c50] 0xc00223f9e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 20 13:54:27.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 20 13:54:27.413: INFO: rc: 
Dec 20 13:54:27.413: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00326a180 exit status 1 true [0xc00248a050 0xc00248a0b0 0xc00248a0e0] [0xc00248a050 0xc00248a0b0 0xc00248a0e0] [0xc00248a098 0xc00248a0d0] [0xba6c50 0xba6c50] 0xc00284c240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
[... the same kubectl exec call is retried every 10 seconds and fails with the identical NotFound error through Dec 20 13:57:22; only the hex pointers in the dumped exec struct differ between attempts ...]
Dec 20 13:57:32.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 13:57:33.095: INFO: rc: 1
Dec 20 13:57:33.096: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Dec 20 13:57:33.096: INFO: Scaling statefulset ss to 0
Dec 20 13:57:33.112: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 20 13:57:33.114: INFO: Deleting all statefulset in ns statefulset-1082
Dec 20 13:57:33.117: INFO: Scaling statefulset ss to 0
Dec 20 13:57:33.140: INFO: Waiting for statefulset status.replicas updated to 0
Dec 20 13:57:33.146: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:57:33.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1082" for this suite.
Dec 20 13:57:41.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:57:41.434: INFO: namespace statefulset-1082 deletion completed in 8.163384584s

• [SLOW TEST:364.447 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
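The RunHostCmd failures above are simply the harness shelling out to kubectl exec and retrying on a fixed 10-second interval until the recreated pod responds. A minimal Go sketch of that retry pattern follows; the namespace, pod, and command are taken from the log, while the loop bound and helper name are illustrative (the real framework helper in test/e2e/framework differs in detail):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runHostCmd shells out to kubectl exec, in the spirit of the e2e
    // framework's RunHostCmd helper, returning combined stdout/stderr.
    func runHostCmd(ns, pod, cmd string) (string, error) {
        out, err := exec.Command("kubectl", "exec", "--namespace="+ns, pod,
            "--", "/bin/sh", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Retry every 10s, as the log shows; the real harness keeps
        // retrying for several minutes before giving up.
        for i := 0; i < 5; i++ {
            out, err := runHostCmd("statefulset-1082", "ss-0",
                "mv -v /tmp/index.html /usr/share/nginx/html/ || true")
            if err == nil {
                fmt.Print(out)
                return
            }
            fmt.Printf("Waiting 10s to retry failed RunHostCmd: %v\n", err)
            time.Sleep(10 * time.Second)
        }
    }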
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:57:41.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 20 13:57:41.563: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6204,SelfLink:/api/v1/namespaces/watch-6204/configmaps/e2e-watch-test-configmap-a,UID:a539e378-9b5c-482e-8677-70922df466dc,ResourceVersion:17395882,Generation:0,CreationTimestamp:2019-12-20 13:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 20 13:57:41.564: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6204,SelfLink:/api/v1/namespaces/watch-6204/configmaps/e2e-watch-test-configmap-a,UID:a539e378-9b5c-482e-8677-70922df466dc,ResourceVersion:17395882,Generation:0,CreationTimestamp:2019-12-20 13:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 20 13:57:51.582: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6204,SelfLink:/api/v1/namespaces/watch-6204/configmaps/e2e-watch-test-configmap-a,UID:a539e378-9b5c-482e-8677-70922df466dc,ResourceVersion:17395896,Generation:0,CreationTimestamp:2019-12-20 13:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 20 13:57:51.582: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6204,SelfLink:/api/v1/namespaces/watch-6204/configmaps/e2e-watch-test-configmap-a,UID:a539e378-9b5c-482e-8677-70922df466dc,ResourceVersion:17395896,Generation:0,CreationTimestamp:2019-12-20 13:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 20 13:58:01.638: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6204,SelfLink:/api/v1/namespaces/watch-6204/configmaps/e2e-watch-test-configmap-a,UID:a539e378-9b5c-482e-8677-70922df466dc,ResourceVersion:17395911,Generation:0,CreationTimestamp:2019-12-20 13:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 20 13:58:01.638: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6204,SelfLink:/api/v1/namespaces/watch-6204/configmaps/e2e-watch-test-configmap-a,UID:a539e378-9b5c-482e-8677-70922df466dc,ResourceVersion:17395911,Generation:0,CreationTimestamp:2019-12-20 13:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 20 13:58:11.652: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6204,SelfLink:/api/v1/namespaces/watch-6204/configmaps/e2e-watch-test-configmap-a,UID:a539e378-9b5c-482e-8677-70922df466dc,ResourceVersion:17395924,Generation:0,CreationTimestamp:2019-12-20 13:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 20 13:58:11.653: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6204,SelfLink:/api/v1/namespaces/watch-6204/configmaps/e2e-watch-test-configmap-a,UID:a539e378-9b5c-482e-8677-70922df466dc,ResourceVersion:17395924,Generation:0,CreationTimestamp:2019-12-20 13:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 20 13:58:21.667: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6204,SelfLink:/api/v1/namespaces/watch-6204/configmaps/e2e-watch-test-configmap-b,UID:965ea5b2-da98-4de2-8d8a-1d068b4cba62,ResourceVersion:17395937,Generation:0,CreationTimestamp:2019-12-20 13:58:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 20 13:58:21.667: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6204,SelfLink:/api/v1/namespaces/watch-6204/configmaps/e2e-watch-test-configmap-b,UID:965ea5b2-da98-4de2-8d8a-1d068b4cba62,ResourceVersion:17395937,Generation:0,CreationTimestamp:2019-12-20 13:58:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 20 13:58:31.681: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6204,SelfLink:/api/v1/namespaces/watch-6204/configmaps/e2e-watch-test-configmap-b,UID:965ea5b2-da98-4de2-8d8a-1d068b4cba62,ResourceVersion:17395952,Generation:0,CreationTimestamp:2019-12-20 13:58:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 20 13:58:31.682: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6204,SelfLink:/api/v1/namespaces/watch-6204/configmaps/e2e-watch-test-configmap-b,UID:965ea5b2-da98-4de2-8d8a-1d068b4cba62,ResourceVersion:17395952,Generation:0,CreationTimestamp:2019-12-20 13:58:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:58:41.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6204" for this suite.
Dec 20 13:58:47.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:58:47.943: INFO: namespace watch-6204 deletion completed in 6.248898135s

• [SLOW TEST:66.508 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
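Each "Got :" pair above is the same event observed by two different watchers (the label-A watcher and the A-or-B watcher); the watches themselves are ordinary watch requests on the configmaps resource filtered by a label selector. A rough client-go equivalent of the label-A watcher, sketched under the assumption of a recent client-go (older releases take no context argument; the in-tree test goes through the framework's own helpers):

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Same kubeconfig the suite logs at startup.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Watch only configmaps labeled watch-this-configmap=multiple-watchers-A,
        // mirroring "creating a watch on configmaps with label A" above.
        w, err := client.CoreV1().ConfigMaps("watch-6204").Watch(context.TODO(),
            metav1.ListOptions{LabelSelector: "watch-this-configmap=multiple-watchers-A"})
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        // Each delivered event corresponds to one "Got : ADDED/MODIFIED/DELETED" line.
        for ev := range w.ResultChan() {
            if cm, ok := ev.Object.(*v1.ConfigMap); ok {
                fmt.Printf("Got : %s %s data=%v\n", ev.Type, cm.Name, cm.Data)
            }
        }
    }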
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:58:47.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 13:58:56.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-600" for this suite.
Dec 20 13:59:48.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:59:48.680: INFO: namespace kubelet-test-600 deletion completed in 52.218723378s

• [SLOW TEST:60.736 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
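The Kubelet case just above logs no intermediate steps because the pod starts, echoes, and exits quickly: the test schedules a busybox container that writes a message to stdout and then asserts the message appears in the pod's logs. A sketch of creating such a pod with client-go; the namespace matches the log, but the pod name, image, and message here are illustrative rather than the test's exact values:

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // A one-shot pod whose only container writes to stdout; reading the
        // pod's logs afterwards should show the echoed line.
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs"}, // illustrative name
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:    "busybox",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "echo 'hello from kubelet-test'"},
                }},
            },
        }
        if _, err := client.CoreV1().Pods("kubelet-test-600").Create(
            context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        // Once the container exits, "kubectl logs -n kubelet-test-600 busybox-logs"
        // should print the echoed line.
        fmt.Println("pod created")
    }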
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 13:59:48.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-2zrcz in namespace proxy-2021
I1220 13:59:48.944740 8 runners.go:180] Created replication controller with name: proxy-service-2zrcz, namespace: proxy-2021, replica count: 1
I1220 13:59:49.995896 8 runners.go:180] proxy-service-2zrcz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
[... the runners.go status line repeats roughly once per second while the pod is pending (through 13:59:55) and then runningButNotReady (through 14:00:00) ...]
I1220 14:00:01.001814 8 runners.go:180] proxy-service-2zrcz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 20 14:00:01.072: INFO: setup took 12.321247081s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 20 14:00:01.102: INFO: (0) /api/v1/namespaces/proxy-2021/pods/proxy-service-2zrcz-2m9vx:162/proxy/: bar (200; 27.166119ms)
Dec 20 14:00:01.108: INFO: (0) /api/v1/namespaces/proxy-2021/pods/proxy-service-2zrcz-2m9vx:160/proxy/: foo (200; 33.091667ms)
Dec 20 14:00:01.108: INFO: (0) /api/v1/namespaces/proxy-2021/pods/http:proxy-service-2zrcz-2m9vx:162/proxy/: bar (200; 34.375605ms)
Dec 20 14:00:01.109: INFO: (0) /api/v1/namespaces/proxy-2021/pods/proxy-service-2zrcz-2m9vx/proxy/: test (200; 35.263799ms)
Dec 20 14:00:01.110: INFO: (0) /api/v1/namespaces/proxy-2021/pods/proxy-service-2zrcz-2m9vx:1080/proxy/: test<... (200; 35.492367ms)
Dec 20 14:00:01.112: INFO: (0) /api/v1/namespaces/proxy-2021/pods/http:proxy-service-2zrcz-2m9vx:1080/proxy/: ... (200; 36.941314ms)
[... the remaining attempt (0) endpoints and attempts (1) through (19) exercise the same 16 pod and service proxy URLs (http, https, and named service ports); every logged response was 200 ...]
STEP: deleting ReplicationController proxy-service-2zrcz in namespace proxy-2021, will wait for the garbage collector to delete the pods
Dec 20 14:00:01.925: INFO: Deleting ReplicationController proxy-service-2zrcz took: 16.934539ms
Dec 20 14:00:02.226: INFO: Terminating ReplicationController proxy-service-2zrcz pods took: 300.780467ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:00:16.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2021" for this suite.
Dec 20 14:00:22.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:00:22.812: INFO: namespace proxy-2021 deletion completed in 6.166493898s

• [SLOW TEST:34.132 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
Dec 20 14:00:22.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:00:22.812: INFO: namespace proxy-2021 deletion completed in 6.166493898s

• [SLOW TEST:34.132 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:00:22.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Dec 20 14:00:22.956: INFO: Waiting up to 5m0s for pod "var-expansion-63e88a60-cace-4229-8166-d6c9eb4ba246" in namespace "var-expansion-2766" to be "success or failure"
Dec 20 14:00:22.986: INFO: Pod "var-expansion-63e88a60-cace-4229-8166-d6c9eb4ba246": Phase="Pending", Reason="", readiness=false. Elapsed: 30.154118ms
Dec 20 14:00:24.991: INFO: Pod "var-expansion-63e88a60-cace-4229-8166-d6c9eb4ba246": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035348269s
Dec 20 14:00:27.017: INFO: Pod "var-expansion-63e88a60-cace-4229-8166-d6c9eb4ba246": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06055821s
Dec 20 14:00:29.025: INFO: Pod "var-expansion-63e88a60-cace-4229-8166-d6c9eb4ba246": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069211928s
Dec 20 14:00:31.036: INFO: Pod "var-expansion-63e88a60-cace-4229-8166-d6c9eb4ba246": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080087589s
STEP: Saw pod success
Dec 20 14:00:31.036: INFO: Pod "var-expansion-63e88a60-cace-4229-8166-d6c9eb4ba246" satisfied condition "success or failure"
Dec 20 14:00:31.080: INFO: Trying to get logs from node iruya-node pod var-expansion-63e88a60-cace-4229-8166-d6c9eb4ba246 container dapi-container: 
STEP: delete the pod
Dec 20 14:00:31.184: INFO: Waiting for pod var-expansion-63e88a60-cace-4229-8166-d6c9eb4ba246 to disappear
Dec 20 14:00:31.190: INFO: Pod var-expansion-63e88a60-cace-4229-8166-d6c9eb4ba246 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:00:31.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2766" for this suite.
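The spec above exercises the command expansion the kubelet itself performs: $(VAR) references in a container's command and args are resolved from the container's declared environment before the process starts, with no shell involved, and a doubled $$ escapes a literal $. A minimal stand-alone reproduction, with hypothetical object names rather than the suite's generated ones:

# Sketch only, not the suite's fixture: the kubelet expands $(MY_GREETING).
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MY_GREETING
      value: hello
    command: ["/bin/echo"]
    args: ["greeting=$(MY_GREETING)", "literal=$$(MY_GREETING)"]
EOF
# Once the pod completes, its log should read:
#   greeting=hello literal=$(MY_GREETING)
kubectl --kubeconfig=/root/.kube/config logs var-expansion-demo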
Dec 20 14:00:37.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:00:37.362: INFO: namespace var-expansion-2766 deletion completed in 6.166808455s

• [SLOW TEST:14.550 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:00:37.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 20 14:00:37.456: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 17.591938ms)
Dec 20 14:00:37.465: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.65155ms)
Dec 20 14:00:37.503: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 37.63529ms)
Dec 20 14:00:37.510: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.356028ms)
Dec 20 14:00:37.516: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.500468ms)
Dec 20 14:00:37.520: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.395155ms)
Dec 20 14:00:37.526: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.644118ms)
Dec 20 14:00:37.531: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.244779ms)
Dec 20 14:00:37.539: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.957199ms)
Dec 20 14:00:37.547: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.874571ms)
Dec 20 14:00:37.559: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.876016ms)
Dec 20 14:00:37.569: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.043317ms)
Dec 20 14:00:37.577: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.922283ms)
Dec 20 14:00:37.582: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.033206ms)
Dec 20 14:00:37.586: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.058994ms)
Dec 20 14:00:37.591: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.011835ms)
Dec 20 14:00:37.596: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.354712ms)
Dec 20 14:00:37.599: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.628888ms)
Dec 20 14:00:37.603: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.040298ms)
Dec 20 14:00:37.607: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.523491ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:00:37.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5309" for this suite.
Dec 20 14:00:43.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:00:43.862: INFO: namespace proxy-5309 deletion completed in 6.252181186s

• [SLOW TEST:6.499 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
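Both proxy specs in this stretch, the service/pod run that produced the numbered request tables earlier and the node-logs spec just above, drive the same apiserver proxy subresource. The endpoints can be fetched by hand with kubectl's raw API access; a sketch reusing this run's object names, which were torn down with their namespace:

# Sketch only: proxy-2021 and its objects no longer exist at this point.
export KUBECONFIG=/root/.kube/config
kubectl get --raw "/api/v1/namespaces/proxy-2021/services/proxy-service-2zrcz:portname1/proxy/"    # service port, addressed by name
kubectl get --raw "/api/v1/namespaces/proxy-2021/pods/https:proxy-service-2zrcz-2m9vx:443/proxy/"  # pod port, with a scheme prefix
kubectl get --raw "/api/v1/nodes/iruya-node/proxy/logs/"                                           # the kubelet log listing shown above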
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:00:43.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 14:00:44.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6952'
Dec 20 14:00:44.219: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 20 14:00:44.219: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Dec 20 14:00:46.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6952'
Dec 20 14:00:46.396: INFO: stderr: ""
Dec 20 14:00:46.397: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:00:46.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6952" for this suite.
Dec 20 14:00:52.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:00:52.538: INFO: namespace kubectl-6952 deletion completed in 6.133888258s

• [SLOW TEST:8.675 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
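The stderr captured at 14:00:44 is worth unpacking: in v1.15, kubectl run with no --generator flag still created an apps/v1 Deployment (hence the deployment.apps/e2e-test-nginx-deployment stdout), and the warning names the two sanctioned replacements. Side by side, in v1.15-era kubectl:

# The logged invocation and its suggested replacements:
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine               # deprecated: implies --generator=deployment/apps.v1
kubectl run e2e-test-nginx --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine   # creates a single Pod instead
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine # explicit Deployment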
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:00:52.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8160.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8160.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8160.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8160.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8160.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8160.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 20 14:01:04.739: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8160/dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0: the server could not find the requested resource (get pods dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0)
Dec 20 14:01:04.746: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8160/dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0: the server could not find the requested resource (get pods dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0)
Dec 20 14:01:04.761: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-8160.svc.cluster.local from pod dns-8160/dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0: the server could not find the requested resource (get pods dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0)
Dec 20 14:01:04.772: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-8160/dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0: the server could not find the requested resource (get pods dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0)
Dec 20 14:01:04.779: INFO: Unable to read jessie_udp@PodARecord from pod dns-8160/dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0: the server could not find the requested resource (get pods dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0)
Dec 20 14:01:04.782: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8160/dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0: the server could not find the requested resource (get pods dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0)
Dec 20 14:01:04.782: INFO: Lookups using dns-8160/dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-8160.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 20 14:01:09.874: INFO: DNS probes using dns-8160/dns-test-14c66d80-a0b7-4a63-b7e1-46330c35daa0 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:01:09.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8160" for this suite.
Dec 20 14:01:16.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:01:16.168: INFO: namespace dns-8160 deletion completed in 6.193281243s

• [SLOW TEST:23.628 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
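A reading note on the probe commands logged at the start of this spec: each loop iteration checks one name and drops an OK marker into /results, and the doubled $$ is the kubelet's escape for a literal $ in a container command (the same expansion rule the Variable Expansion spec tested earlier). Unescaped, one iteration reduces to roughly this sketch:

# One probe iteration, unescaped (dns-8160 was this run's namespace):
test -n "$(getent hosts dns-querier-1.dns-test-service.dns-8160.svc.cluster.local)" && echo OK   # /etc/hosts-backed lookup
podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-8160.pod.cluster.local"}')        # e.g. 10.44.0.1 -> 10-44-0-1.dns-8160.pod.cluster.local
dig +notcp +noall +answer +search "${podARec}" A   # pod A record over UDP
dig +tcp +noall +answer +search "${podARec}" A     # the same lookup over TCP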
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:01:16.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 20 14:01:16.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1157'
Dec 20 14:01:17.065: INFO: stderr: ""
Dec 20 14:01:17.065: INFO: stdout: "replicationcontroller/redis-master created\n"
Dec 20 14:01:17.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1157'
Dec 20 14:01:17.861: INFO: stderr: ""
Dec 20 14:01:17.861: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 20 14:01:18.883: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 14:01:18.883: INFO: Found 0 / 1
Dec 20 14:01:19.880: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 14:01:19.880: INFO: Found 0 / 1
Dec 20 14:01:20.871: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 14:01:20.871: INFO: Found 0 / 1
Dec 20 14:01:21.880: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 14:01:21.880: INFO: Found 0 / 1
Dec 20 14:01:22.884: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 14:01:22.884: INFO: Found 0 / 1
Dec 20 14:01:23.879: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 14:01:23.879: INFO: Found 0 / 1
Dec 20 14:01:24.881: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 14:01:24.881: INFO: Found 0 / 1
Dec 20 14:01:25.875: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 14:01:25.875: INFO: Found 1 / 1
Dec 20 14:01:25.875: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 20 14:01:25.883: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 14:01:25.883: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 20 14:01:25.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-68w5l --namespace=kubectl-1157'
Dec 20 14:01:26.150: INFO: stderr: ""
Dec 20 14:01:26.150: INFO: stdout: "Name:           redis-master-68w5l\nNamespace:      kubectl-1157\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Fri, 20 Dec 2019 14:01:17 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://7be56043099e35d075c0e9b01d5e068a28683235df42ac8f22ba3a16d12009c4\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 20 Dec 2019 14:01:24 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-m2f95 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-m2f95:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-m2f95\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  9s    default-scheduler    Successfully assigned kubectl-1157/redis-master-68w5l to iruya-node\n  Normal  Pulled     6s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Dec 20 14:01:26.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-1157'
Dec 20 14:01:26.325: INFO: stderr: ""
Dec 20 14:01:26.325: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-1157\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-68w5l\n"
Dec 20 14:01:26.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-1157'
Dec 20 14:01:26.501: INFO: stderr: ""
Dec 20 14:01:26.501: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-1157\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.99.254.249\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Dec 20 14:01:26.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Dec 20 14:01:26.749: INFO: stderr: ""
Dec 20 14:01:26.749: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Fri, 20 Dec 2019 14:00:31 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 20 Dec 2019 14:00:31 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 20 Dec 2019 14:00:31 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 20 Dec 2019 14:00:31 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         138d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         69d\n  kubectl-1157               redis-master-68w5l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Dec 20 14:01:26.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1157'
Dec 20 14:01:26.846: INFO: stderr: ""
Dec 20 14:01:26.847: INFO: stdout: "Name:         kubectl-1157\nLabels:       e2e-framework=kubectl\n              e2e-run=723963dc-003b-4ece-954c-dab8a15cf56a\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:01:26.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1157" for this suite.
Dec 20 14:01:48.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:01:49.064: INFO: namespace kubectl-1157 deletion completed in 22.211796255s

• [SLOW TEST:32.895 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
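Stripped of the harness, the sequence this spec runs is plain kubectl, and the quoted stdout blocks above are exactly what an operator would see at the terminal (the object names and namespace are this run's):

export KUBECONFIG=/root/.kube/config
kubectl describe pod redis-master-68w5l --namespace=kubectl-1157
kubectl describe rc redis-master --namespace=kubectl-1157
kubectl describe service redis-master --namespace=kubectl-1157
kubectl describe node iruya-node
kubectl describe namespace kubectl-1157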
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:01:49.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 20 14:01:49.189: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 20 14:01:49.220: INFO: Number of nodes with available pods: 0
Dec 20 14:01:49.220: INFO: Node iruya-node is running more than one daemon pod
Dec 20 14:01:50.244: INFO: Number of nodes with available pods: 0
Dec 20 14:01:50.244: INFO: Node iruya-node is running more than one daemon pod
Dec 20 14:01:51.254: INFO: Number of nodes with available pods: 0
Dec 20 14:01:51.254: INFO: Node iruya-node is running more than one daemon pod
Dec 20 14:01:52.232: INFO: Number of nodes with available pods: 0
Dec 20 14:01:52.233: INFO: Node iruya-node is running more than one daemon pod
Dec 20 14:01:53.391: INFO: Number of nodes with available pods: 0
Dec 20 14:01:53.391: INFO: Node iruya-node is running more than one daemon pod
Dec 20 14:01:54.237: INFO: Number of nodes with available pods: 0
Dec 20 14:01:54.237: INFO: Node iruya-node is running more than one daemon pod
Dec 20 14:01:56.268: INFO: Number of nodes with available pods: 0
Dec 20 14:01:56.269: INFO: Node iruya-node is running more than one daemon pod
Dec 20 14:01:57.970: INFO: Number of nodes with available pods: 0
Dec 20 14:01:57.971: INFO: Node iruya-node is running more than one daemon pod
Dec 20 14:01:58.306: INFO: Number of nodes with available pods: 0
Dec 20 14:01:58.306: INFO: Node iruya-node is running more than one daemon pod
Dec 20 14:01:59.232: INFO: Number of nodes with available pods: 0
Dec 20 14:01:59.232: INFO: Node iruya-node is running more than one daemon pod
Dec 20 14:02:00.234: INFO: Number of nodes with available pods: 2
Dec 20 14:02:00.234: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 20 14:02:00.372: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:00.372: INFO: Wrong image for pod: daemon-set-fxgzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:01.402: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:01.402: INFO: Wrong image for pod: daemon-set-fxgzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:02.400: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:02.400: INFO: Wrong image for pod: daemon-set-fxgzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:03.406: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:03.406: INFO: Wrong image for pod: daemon-set-fxgzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:04.400: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:04.400: INFO: Wrong image for pod: daemon-set-fxgzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:05.402: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:05.402: INFO: Wrong image for pod: daemon-set-fxgzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:06.402: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:06.402: INFO: Wrong image for pod: daemon-set-fxgzp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:07.399: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:07.399: INFO: Pod daemon-set-wb6z7 is not available
Dec 20 14:02:08.404: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:08.404: INFO: Pod daemon-set-wb6z7 is not available
Dec 20 14:02:09.404: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:09.404: INFO: Pod daemon-set-wb6z7 is not available
Dec 20 14:02:10.412: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:10.413: INFO: Pod daemon-set-wb6z7 is not available
Dec 20 14:02:11.406: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:11.406: INFO: Pod daemon-set-wb6z7 is not available
Dec 20 14:02:12.406: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:12.406: INFO: Pod daemon-set-wb6z7 is not available
Dec 20 14:02:13.402: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:13.402: INFO: Pod daemon-set-wb6z7 is not available
Dec 20 14:02:14.602: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:15.403: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:16.401: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:17.405: INFO: Wrong image for pod: daemon-set-6b9x4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 14:02:17.406: INFO: Pod daemon-set-6b9x4 is not available
Dec 20 14:02:18.435: INFO: Pod daemon-set-9pnxl is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 20 14:02:18.471: INFO: Number of nodes with available pods: 1
Dec 20 14:02:18.472: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 14:02:19.488: INFO: Number of nodes with available pods: 1
Dec 20 14:02:19.488: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 14:02:20.489: INFO: Number of nodes with available pods: 1
Dec 20 14:02:20.490: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 14:02:21.518: INFO: Number of nodes with available pods: 1
Dec 20 14:02:21.518: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 14:02:22.912: INFO: Number of nodes with available pods: 1
Dec 20 14:02:22.912: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 14:02:23.588: INFO: Number of nodes with available pods: 1
Dec 20 14:02:23.588: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 14:02:24.723: INFO: Number of nodes with available pods: 1
Dec 20 14:02:24.723: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 14:02:25.505: INFO: Number of nodes with available pods: 1
Dec 20 14:02:25.505: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 14:02:26.498: INFO: Number of nodes with available pods: 1
Dec 20 14:02:26.498: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 20 14:02:27.489: INFO: Number of nodes with available pods: 2
Dec 20 14:02:27.489: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1530, will wait for the garbage collector to delete the pods
Dec 20 14:02:27.601: INFO: Deleting DaemonSet.extensions daemon-set took: 16.093856ms
Dec 20 14:02:28.001: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.581321ms
Dec 20 14:02:35.328: INFO: Number of nodes with available pods: 0
Dec 20 14:02:35.328: INFO: Number of running nodes: 0, number of available pods: 0
Dec 20 14:02:35.332: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1530/daemonsets","resourceVersion":"17396569"},"items":null}

Dec 20 14:02:35.338: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1530/pods","resourceVersion":"17396569"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:02:35.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1530" for this suite.
Dec 20 14:02:41.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:02:41.576: INFO: namespace daemonsets-1530 deletion completed in 6.219137273s

• [SLOW TEST:52.512 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
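The rollout traced above is the standard RollingUpdate flow: the DaemonSet starts on nginx:1.14-alpine, its pod template image is switched to the redis test image, and with the default maxUnavailable of 1 the controller deletes and replaces one node's pod at a time (daemon-set-wb6z7, then daemon-set-9pnxl, in the log). A minimal sketch with hypothetical names; the suite builds its DaemonSet programmatically rather than from a manifest:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemonset-demo            # hypothetical label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1              # why only one node's pod is down at a time above
  template:
    metadata:
      labels:
        app: daemonset-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# The image change that triggers the node-by-node replacement:
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0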
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:02:41.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 20 14:02:41.706: INFO: Waiting up to 5m0s for pod "downward-api-3e24f038-a8ae-4f49-95e4-b5f90832bbbb" in namespace "downward-api-5076" to be "success or failure"
Dec 20 14:02:41.725: INFO: Pod "downward-api-3e24f038-a8ae-4f49-95e4-b5f90832bbbb": Phase="Pending", Reason="", readiness=false. Elapsed: 18.42277ms
Dec 20 14:02:43.733: INFO: Pod "downward-api-3e24f038-a8ae-4f49-95e4-b5f90832bbbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026703823s
Dec 20 14:02:45.744: INFO: Pod "downward-api-3e24f038-a8ae-4f49-95e4-b5f90832bbbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037361176s
Dec 20 14:02:47.754: INFO: Pod "downward-api-3e24f038-a8ae-4f49-95e4-b5f90832bbbb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047928511s
Dec 20 14:02:49.768: INFO: Pod "downward-api-3e24f038-a8ae-4f49-95e4-b5f90832bbbb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061156426s
Dec 20 14:02:51.792: INFO: Pod "downward-api-3e24f038-a8ae-4f49-95e4-b5f90832bbbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08570005s
STEP: Saw pod success
Dec 20 14:02:51.792: INFO: Pod "downward-api-3e24f038-a8ae-4f49-95e4-b5f90832bbbb" satisfied condition "success or failure"
Dec 20 14:02:51.802: INFO: Trying to get logs from node iruya-node pod downward-api-3e24f038-a8ae-4f49-95e4-b5f90832bbbb container dapi-container: 
STEP: delete the pod
Dec 20 14:02:51.892: INFO: Waiting for pod downward-api-3e24f038-a8ae-4f49-95e4-b5f90832bbbb to disappear
Dec 20 14:02:51.897: INFO: Pod downward-api-3e24f038-a8ae-4f49-95e4-b5f90832bbbb no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:02:51.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5076" for this suite.
Dec 20 14:02:57.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:02:58.098: INFO: namespace downward-api-5076 deletion completed in 6.170999749s

• [SLOW TEST:16.521 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
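What the dapi-container asserts here is the downward API's fieldRef path metadata.uid surfacing as an environment variable. A minimal pod that prints it, under a hypothetical name rather than the suite's generated one:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid    # the pod UID the test checks for
EOF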
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:02:58.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 20 14:02:58.209: INFO: Waiting up to 5m0s for pod "pod-20e2a135-3843-4455-936e-e7a6431c4922" in namespace "emptydir-5963" to be "success or failure"
Dec 20 14:02:58.223: INFO: Pod "pod-20e2a135-3843-4455-936e-e7a6431c4922": Phase="Pending", Reason="", readiness=false. Elapsed: 13.487939ms
Dec 20 14:03:00.233: INFO: Pod "pod-20e2a135-3843-4455-936e-e7a6431c4922": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023693961s
Dec 20 14:03:03.456: INFO: Pod "pod-20e2a135-3843-4455-936e-e7a6431c4922": Phase="Pending", Reason="", readiness=false. Elapsed: 5.246231533s
Dec 20 14:03:05.474: INFO: Pod "pod-20e2a135-3843-4455-936e-e7a6431c4922": Phase="Pending", Reason="", readiness=false. Elapsed: 7.264917546s
Dec 20 14:03:07.484: INFO: Pod "pod-20e2a135-3843-4455-936e-e7a6431c4922": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.274439757s
STEP: Saw pod success
Dec 20 14:03:07.484: INFO: Pod "pod-20e2a135-3843-4455-936e-e7a6431c4922" satisfied condition "success or failure"
Dec 20 14:03:07.528: INFO: Trying to get logs from node iruya-node pod pod-20e2a135-3843-4455-936e-e7a6431c4922 container test-container: 
STEP: delete the pod
Dec 20 14:03:07.579: INFO: Waiting for pod pod-20e2a135-3843-4455-936e-e7a6431c4922 to disappear
Dec 20 14:03:07.595: INFO: Pod pod-20e2a135-3843-4455-936e-e7a6431c4922 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:03:07.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5963" for this suite.
Dec 20 14:03:13.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:03:13.857: INFO: namespace emptydir-5963 deletion completed in 6.255537184s

• [SLOW TEST:15.760 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
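The (non-root,0777,tmpfs) triple in the spec name maps onto three pod-spec knobs: a non-root securityContext, a 0777 permission check performed by the test container itself, and an emptyDir with medium: Memory so the volume is tmpfs-backed. A rough busybox stand-in for the suite's mounttest pod (names hypothetical):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # the "non-root" part
  containers:
  - name: test-container
    image: busybox
    # Show the filesystem type (tmpfs) and the mount's permissions:
    command: ["/bin/sh", "-c", "grep /mnt/scratch /proc/mounts; ls -ld /mnt/scratch"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir
EOF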
SSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:03:13.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-172e0b1b-6f0a-4d86-b341-a3396c1c294c
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:03:13.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2802" for this suite.
Dec 20 14:03:20.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:03:20.168: INFO: namespace configmap-2802 deletion completed in 6.179584472s

• [SLOW TEST:6.310 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
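There is no pod in this spec at all; the whole test is a single API call that must fail, because ConfigMap data keys are validated server-side and the empty string is not a legal key. Reproducing by hand, with a hypothetical object name:

# The apiserver rejects this create with a validation error on the "" key:
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-empty-key-demo     # hypothetical name
data:
  "": "value"                        # empty key, so the request is refused
EOF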
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:03:20.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-9357600f-509f-4ba6-933b-061152a4fa2f
Dec 20 14:03:20.259: INFO: Pod name my-hostname-basic-9357600f-509f-4ba6-933b-061152a4fa2f: Found 0 pods out of 1
Dec 20 14:03:25.270: INFO: Pod name my-hostname-basic-9357600f-509f-4ba6-933b-061152a4fa2f: Found 1 pods out of 1
Dec 20 14:03:25.270: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-9357600f-509f-4ba6-933b-061152a4fa2f" are running
Dec 20 14:03:29.286: INFO: Pod "my-hostname-basic-9357600f-509f-4ba6-933b-061152a4fa2f-8nhl8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 14:03:20 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 14:03:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9357600f-509f-4ba6-933b-061152a4fa2f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 14:03:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9357600f-509f-4ba6-933b-061152a4fa2f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 14:03:20 +0000 UTC Reason: Message:}])
Dec 20 14:03:29.286: INFO: Trying to dial the pod
Dec 20 14:03:34.815: INFO: Controller my-hostname-basic-9357600f-509f-4ba6-933b-061152a4fa2f: Got expected result from replica 1 [my-hostname-basic-9357600f-509f-4ba6-933b-061152a4fa2f-8nhl8]: "my-hostname-basic-9357600f-509f-4ba6-933b-061152a4fa2f-8nhl8", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:03:34.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1659" for this suite.
Dec 20 14:03:40.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:03:40.955: INFO: namespace replication-controller-1659 deletion completed in 6.136609864s

• [SLOW TEST:20.787 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
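The "basic image" here is the suite's serve-hostname server: each replica answers HTTP with its own pod name, which is how the dial at 14:03:29 can match the response against the replica list ("...-8nhl8" answering as itself). A hand-written RC in the same shape; the image tag and port below are my assumptions, since the suite resolves them from its image registry:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: hostname-basic-demo          # hypothetical name
spec:
  replicas: 1
  selector:
    app: hostname-basic-demo
  template:
    metadata:
      labels:
        app: hostname-basic-demo
    spec:
      containers:
      - name: hostname-basic-demo
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # tag assumed
        ports:
        - containerPort: 9376        # serve-hostname's HTTP port (assumed)
EOF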
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:03:40.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5429.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5429.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 20 14:03:57.073: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-5429/dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4: the server could not find the requested resource (get pods dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4)
Dec 20 14:03:57.082: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-5429/dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4: the server could not find the requested resource (get pods dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4)
Dec 20 14:03:57.088: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5429/dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4: the server could not find the requested resource (get pods dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4)
Dec 20 14:03:57.099: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5429/dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4: the server could not find the requested resource (get pods dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4)
Dec 20 14:03:57.111: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-5429/dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4: the server could not find the requested resource (get pods dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4)
Dec 20 14:03:57.115: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-5429/dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4: the server could not find the requested resource (get pods dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4)
Dec 20 14:03:57.120: INFO: Unable to read jessie_udp@PodARecord from pod dns-5429/dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4: the server could not find the requested resource (get pods dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4)
Dec 20 14:03:57.124: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5429/dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4: the server could not find the requested resource (get pods dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4)
Dec 20 14:03:57.124: INFO: Lookups using dns-5429/dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 20 14:04:02.191: INFO: DNS probes using dns-5429/dns-test-3e187dbd-45cb-497b-832f-0bffc183b4c4 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:04:02.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5429" for this suite.
Dec 20 14:04:08.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:04:08.559: INFO: namespace dns-5429 deletion completed in 6.199866165s

• [SLOW TEST:27.603 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:04:08.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Dec 20 14:04:09.568: INFO: created pod pod-service-account-defaultsa
Dec 20 14:04:09.568: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 20 14:04:09.581: INFO: created pod pod-service-account-mountsa
Dec 20 14:04:09.582: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 20 14:04:09.710: INFO: created pod pod-service-account-nomountsa
Dec 20 14:04:09.710: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 20 14:04:09.737: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 20 14:04:09.738: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 20 14:04:09.777: INFO: created pod pod-service-account-mountsa-mountspec
Dec 20 14:04:09.777: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 20 14:04:09.881: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 20 14:04:09.881: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 20 14:04:10.948: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 20 14:04:10.948: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 20 14:04:10.962: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 20 14:04:10.963: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 20 14:04:11.367: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 20 14:04:11.368: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
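
The nine pods above walk the matrix of ServiceAccount-level and pod-level automountServiceAccountToken settings; whenever the pod-level field is set, it wins. A minimal sketch of the opt-out (names illustrative, not the test's):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nomount-sa
  automountServiceAccountToken: false
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: nomount-pod
  spec:
    serviceAccountName: nomount-sa
    automountServiceAccountToken: false   # pod-level field overrides the SA's when set
    containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
  EOF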
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:04:11.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-280" for this suite.
Dec 20 14:04:55.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:04:55.406: INFO: namespace svcaccounts-280 deletion completed in 43.845884611s

• [SLOW TEST:46.846 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:04:55.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
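"Delete options say so" means the ReplicationController is deleted with the Orphan propagation policy, so the garbage collector strips the ownerReferences instead of deleting the pods. Roughly, with kubectl (rc and label names illustrative; kubectl 1.20+ spells the flag --cascade=orphan, older releases --cascade=false):

  kubectl delete rc my-rc --cascade=orphan
  kubectl get pods -l app=my-rc-pods   # pods survive, now without an owner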
STEP: Gathering metrics
W1220 14:05:35.975043       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 14:05:35.975: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:05:35.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7761" for this suite.
Dec 20 14:05:46.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:05:46.351: INFO: namespace gc-7761 deletion completed in 10.373105525s

• [SLOW TEST:50.946 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:05:46.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-9a2e5706-b96e-44ad-8409-1bfe4fc9c095
STEP: Creating a pod to test consume secrets
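The "mappings and Item Mode" in the spec name refer to the items list of a projected secret source: each item remaps a key to a path and may carry its own file mode. A sketch (names illustrative):

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mapping-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
      volumeMounts:
      - name: secret-vol
        mountPath: /etc/projected
    volumes:
    - name: secret-vol
      projected:
        sources:
        - secret:
            name: demo-secret
            items:
            - key: data-1
              path: new-path-data-1   # the mapping
              mode: 0400              # the per-item mode
  EOF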
Dec 20 14:05:49.998: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d" in namespace "projected-4301" to be "success or failure"
Dec 20 14:05:50.013: INFO: Pod "pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.158116ms
Dec 20 14:05:52.294: INFO: Pod "pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296233503s
Dec 20 14:05:54.305: INFO: Pod "pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307564686s
Dec 20 14:05:56.319: INFO: Pod "pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.321017019s
Dec 20 14:05:58.330: INFO: Pod "pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.332161794s
Dec 20 14:06:00.337: INFO: Pod "pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.339522875s
Dec 20 14:06:02.352: INFO: Pod "pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.354579249s
Dec 20 14:06:04.359: INFO: Pod "pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d": Phase="Running", Reason="", readiness=true. Elapsed: 14.360982029s
Dec 20 14:06:06.372: INFO: Pod "pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.374145733s
STEP: Saw pod success
Dec 20 14:06:06.372: INFO: Pod "pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d" satisfied condition "success or failure"
Dec 20 14:06:06.383: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d container projected-secret-volume-test: 
STEP: delete the pod
Dec 20 14:06:06.537: INFO: Waiting for pod pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d to disappear
Dec 20 14:06:06.548: INFO: Pod pod-projected-secrets-52fb7ef4-1e2c-483f-8089-54b80457c61d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:06:06.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4301" for this suite.
Dec 20 14:06:12.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:06:12.726: INFO: namespace projected-4301 deletion completed in 6.171351535s

• [SLOW TEST:26.374 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:06:12.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-f0672557-7116-4bcd-90b2-f3e43875599b
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-f0672557-7116-4bcd-90b2-f3e43875599b
STEP: waiting to observe update in volume
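A sketch of the propagation being waited on (names illustrative): update the mounted configMap and the kubelet rewrites the projected file, typically within a minute.

  kubectl create configmap demo-cm --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-upd-demo
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: cm-vol
        mountPath: /etc/cm
    volumes:
    - name: cm-vol
      projected:
        sources:
        - configMap: {name: demo-cm}
  EOF
  kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'
  kubectl exec cm-upd-demo -- cat /etc/cm/data-1   # eventually prints value-2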
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:07:28.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4302" for this suite.
Dec 20 14:07:50.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:07:51.029: INFO: namespace projected-4302 deletion completed in 22.270660152s

• [SLOW TEST:98.301 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:07:51.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-5cbde2b5-2521-44d6-b2ee-c481e0375cd0
STEP: Creating secret with name s-test-opt-upd-47681b64-cd8d-4776-9050-74ca98d87e6f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5cbde2b5-2521-44d6-b2ee-c481e0375cd0
STEP: Updating secret s-test-opt-upd-47681b64-cd8d-4776-9050-74ca98d87e6f
STEP: Creating secret with name s-test-opt-create-0e9211d3-01ab-4331-88a5-60d75c91bb11
STEP: waiting to observe update in volume
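The "optional" here is the optional: true flag on a projected secret source: the pod starts even if the referenced secret is missing, and the kubelet adds, updates, or removes the mounted files as secrets are created, changed, or deleted. A sketch (names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: optional-secret-demo
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "while true; do ls /etc/opt; sleep 5; done"]
      volumeMounts:
      - name: opt-vol
        mountPath: /etc/opt
    volumes:
    - name: opt-vol
      projected:
        sources:
        - secret:
            name: maybe-created-later   # may not exist yet
            optional: true              # pod starts anyway; files appear once it does
  EOF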
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:08:05.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4853" for this suite.
Dec 20 14:08:27.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:08:27.574: INFO: namespace projected-4853 deletion completed in 22.183101534s

• [SLOW TEST:36.544 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:08:27.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-ab58bcb5-07ec-4aae-9150-ee6c4adf1e4f
STEP: Creating secret with name secret-projected-all-test-volume-ff5af0f2-8608-4764-9d31-edfcfbbfd05f
STEP: Creating a pod to test Check all projections for projected volume plugin
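A projected volume can merge configMap, secret, and downwardAPI sources under one mount point, which is what "all components" means here. A sketch (names illustrative):

  kubectl create configmap demo-cm --from-literal=data-1=cm-value
  kubectl create secret generic demo-secret --from-literal=data-1=secret-value
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-all-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-all-volume-test
      image: busybox
      command: ["sh", "-c", "cat /all/cm-key /all/secret-key /all/podname"]
      volumeMounts:
      - name: all-in-one
        mountPath: /all
    volumes:
    - name: all-in-one
      projected:
        sources:
        - configMap:
            name: demo-cm
            items: [{key: data-1, path: cm-key}]
        - secret:
            name: demo-secret
            items: [{key: data-1, path: secret-key}]
        - downwardAPI:
            items:
            - path: podname
              fieldRef: {fieldPath: metadata.name}
  EOF
  kubectl logs projected-all-demo   # the configMap value, the secret value, and the pod name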
Dec 20 14:08:27.696: INFO: Waiting up to 5m0s for pod "projected-volume-f2be7e91-925e-4ee5-a89a-0d2a1c1a4d53" in namespace "projected-110" to be "success or failure"
Dec 20 14:08:27.715: INFO: Pod "projected-volume-f2be7e91-925e-4ee5-a89a-0d2a1c1a4d53": Phase="Pending", Reason="", readiness=false. Elapsed: 19.004679ms
Dec 20 14:08:29.722: INFO: Pod "projected-volume-f2be7e91-925e-4ee5-a89a-0d2a1c1a4d53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025712235s
Dec 20 14:08:31.747: INFO: Pod "projected-volume-f2be7e91-925e-4ee5-a89a-0d2a1c1a4d53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051305334s
Dec 20 14:08:33.767: INFO: Pod "projected-volume-f2be7e91-925e-4ee5-a89a-0d2a1c1a4d53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071086362s
Dec 20 14:08:35.782: INFO: Pod "projected-volume-f2be7e91-925e-4ee5-a89a-0d2a1c1a4d53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085862087s
STEP: Saw pod success
Dec 20 14:08:35.782: INFO: Pod "projected-volume-f2be7e91-925e-4ee5-a89a-0d2a1c1a4d53" satisfied condition "success or failure"
Dec 20 14:08:35.788: INFO: Trying to get logs from node iruya-node pod projected-volume-f2be7e91-925e-4ee5-a89a-0d2a1c1a4d53 container projected-all-volume-test: 
STEP: delete the pod
Dec 20 14:08:35.861: INFO: Waiting for pod projected-volume-f2be7e91-925e-4ee5-a89a-0d2a1c1a4d53 to disappear
Dec 20 14:08:35.995: INFO: Pod projected-volume-f2be7e91-925e-4ee5-a89a-0d2a1c1a4d53 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:08:35.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-110" for this suite.
Dec 20 14:08:42.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:08:42.167: INFO: namespace projected-110 deletion completed in 6.153719635s

• [SLOW TEST:14.593 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:08:42.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
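This spec emits no STEP lines because it only inspects the control plane's front door: the "kubernetes" service in the "default" namespace must expose the API server over TLS. Roughly:

  kubectl get svc kubernetes -n default -o wide
  kubectl get svc kubernetes -n default \
    -o jsonpath='{.spec.ports[?(@.name=="https")].port}'   # expect 443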
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:08:42.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2891" for this suite.
Dec 20 14:08:48.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:08:48.503: INFO: namespace services-2891 deletion completed in 6.22581058s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.335 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:08:48.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-138/configmap-test-eabddaae-cc6f-4ebf-b77b-b1993a9e5ee8
STEP: Creating a pod to test consume configMaps
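Consuming a configMap "via environment variable" means a valueFrom/configMapKeyRef entry in the container's env. A sketch (names illustrative):

  kubectl create configmap demo-cm --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "echo $CONFIG_DATA_1"]
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: demo-cm
            key: data-1
  EOF
  kubectl logs cm-env-demo   # value-1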
Dec 20 14:08:48.683: INFO: Waiting up to 5m0s for pod "pod-configmaps-3cfed5c1-9a48-4cc1-a38e-cce8f32e5ee5" in namespace "configmap-138" to be "success or failure"
Dec 20 14:08:48.705: INFO: Pod "pod-configmaps-3cfed5c1-9a48-4cc1-a38e-cce8f32e5ee5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.058305ms
Dec 20 14:08:50.717: INFO: Pod "pod-configmaps-3cfed5c1-9a48-4cc1-a38e-cce8f32e5ee5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034425145s
Dec 20 14:08:52.738: INFO: Pod "pod-configmaps-3cfed5c1-9a48-4cc1-a38e-cce8f32e5ee5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055344764s
Dec 20 14:08:54.754: INFO: Pod "pod-configmaps-3cfed5c1-9a48-4cc1-a38e-cce8f32e5ee5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070645546s
Dec 20 14:08:56.765: INFO: Pod "pod-configmaps-3cfed5c1-9a48-4cc1-a38e-cce8f32e5ee5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081961951s
STEP: Saw pod success
Dec 20 14:08:56.765: INFO: Pod "pod-configmaps-3cfed5c1-9a48-4cc1-a38e-cce8f32e5ee5" satisfied condition "success or failure"
Dec 20 14:08:56.785: INFO: Trying to get logs from node iruya-node pod pod-configmaps-3cfed5c1-9a48-4cc1-a38e-cce8f32e5ee5 container env-test: 
STEP: delete the pod
Dec 20 14:08:56.863: INFO: Waiting for pod pod-configmaps-3cfed5c1-9a48-4cc1-a38e-cce8f32e5ee5 to disappear
Dec 20 14:08:56.870: INFO: Pod pod-configmaps-3cfed5c1-9a48-4cc1-a38e-cce8f32e5ee5 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:08:56.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-138" for this suite.
Dec 20 14:09:02.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:09:03.048: INFO: namespace configmap-138 deletion completed in 6.170312444s

• [SLOW TEST:14.543 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:09:03.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
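The hostAliases field is what writes the extra /etc/hosts entries this spec checks for. A sketch (addresses and names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostaliases-demo
  spec:
    restartPolicy: Never
    hostAliases:
    - ip: "123.45.67.89"
      hostnames: ["foo.local", "bar.local"]
    containers:
    - name: main
      image: busybox
      command: ["cat", "/etc/hosts"]
  EOF
  kubectl logs hostaliases-demo   # includes a "123.45.67.89 foo.local bar.local" line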
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:09:13.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2784" for this suite.
Dec 20 14:09:57.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:09:57.437: INFO: namespace kubelet-test-2784 deletion completed in 44.119876578s

• [SLOW TEST:54.389 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:09:57.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
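The (non-root,0644,default) tuple in the spec name encodes: run as a non-root user, create a file with mode 0644, on an emptyDir of the default medium (node disk rather than tmpfs). A sketch (user ID illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001            # the "non-root" part
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo hi > /ed/f && chmod 0644 /ed/f && ls -l /ed/f"]
      volumeMounts:
      - name: ed
        mountPath: /ed
    volumes:
    - name: ed
      emptyDir: {}               # the "default" medium
  EOF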
Dec 20 14:09:57.515: INFO: Waiting up to 5m0s for pod "pod-099ed88b-5d06-4b11-be68-a60ce216647d" in namespace "emptydir-2007" to be "success or failure"
Dec 20 14:09:57.524: INFO: Pod "pod-099ed88b-5d06-4b11-be68-a60ce216647d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.560955ms
Dec 20 14:09:59.533: INFO: Pod "pod-099ed88b-5d06-4b11-be68-a60ce216647d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017126941s
Dec 20 14:10:01.542: INFO: Pod "pod-099ed88b-5d06-4b11-be68-a60ce216647d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026550737s
Dec 20 14:10:03.551: INFO: Pod "pod-099ed88b-5d06-4b11-be68-a60ce216647d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035330226s
Dec 20 14:10:05.563: INFO: Pod "pod-099ed88b-5d06-4b11-be68-a60ce216647d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047849718s
Dec 20 14:10:07.578: INFO: Pod "pod-099ed88b-5d06-4b11-be68-a60ce216647d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062522982s
STEP: Saw pod success
Dec 20 14:10:07.578: INFO: Pod "pod-099ed88b-5d06-4b11-be68-a60ce216647d" satisfied condition "success or failure"
Dec 20 14:10:07.584: INFO: Trying to get logs from node iruya-node pod pod-099ed88b-5d06-4b11-be68-a60ce216647d container test-container: 
STEP: delete the pod
Dec 20 14:10:07.928: INFO: Waiting for pod pod-099ed88b-5d06-4b11-be68-a60ce216647d to disappear
Dec 20 14:10:08.028: INFO: Pod pod-099ed88b-5d06-4b11-be68-a60ce216647d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:10:08.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2007" for this suite.
Dec 20 14:10:14.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:10:14.172: INFO: namespace emptydir-2007 deletion completed in 6.131419861s

• [SLOW TEST:16.735 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:10:14.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 20 14:10:14.296: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6f7ffe7-6fe7-4b90-a0e7-406f3332e078" in namespace "projected-471" to be "success or failure"
Dec 20 14:10:14.306: INFO: Pod "downwardapi-volume-f6f7ffe7-6fe7-4b90-a0e7-406f3332e078": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221463ms
Dec 20 14:10:16.315: INFO: Pod "downwardapi-volume-f6f7ffe7-6fe7-4b90-a0e7-406f3332e078": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019392603s
Dec 20 14:10:18.323: INFO: Pod "downwardapi-volume-f6f7ffe7-6fe7-4b90-a0e7-406f3332e078": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026748454s
Dec 20 14:10:20.335: INFO: Pod "downwardapi-volume-f6f7ffe7-6fe7-4b90-a0e7-406f3332e078": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039595793s
Dec 20 14:10:22.343: INFO: Pod "downwardapi-volume-f6f7ffe7-6fe7-4b90-a0e7-406f3332e078": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04759315s
Dec 20 14:10:24.353: INFO: Pod "downwardapi-volume-f6f7ffe7-6fe7-4b90-a0e7-406f3332e078": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056952128s
STEP: Saw pod success
Dec 20 14:10:24.353: INFO: Pod "downwardapi-volume-f6f7ffe7-6fe7-4b90-a0e7-406f3332e078" satisfied condition "success or failure"
Dec 20 14:10:24.358: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f6f7ffe7-6fe7-4b90-a0e7-406f3332e078 container client-container: 
STEP: delete the pod
Dec 20 14:10:24.466: INFO: Waiting for pod downwardapi-volume-f6f7ffe7-6fe7-4b90-a0e7-406f3332e078 to disappear
Dec 20 14:10:24.478: INFO: Pod downwardapi-volume-f6f7ffe7-6fe7-4b90-a0e7-406f3332e078 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:10:24.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-471" for this suite.
Dec 20 14:10:30.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:10:30.720: INFO: namespace projected-471 deletion completed in 6.233023408s

• [SLOW TEST:16.548 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:10:30.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 20 14:10:39.449: INFO: Successfully updated pod "labelsupdate3d58d81f-dba0-44de-9376-8b160edf687c"
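The update above works because the downward API keeps a mounted metadata.labels file in sync after the live pod is relabeled. A sketch using a plain downwardAPI volume (the projected flavor nests the same items under projected.sources; names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-demo
    labels: {stage: before}
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: podinfo
        mountPath: /podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef: {fieldPath: metadata.labels}
  EOF
  kubectl label pod labels-demo stage=after --overwrite
  kubectl exec labels-demo -- cat /podinfo/labels   # eventually shows stage="after"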
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:10:41.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-683" for this suite.
Dec 20 14:11:03.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:11:03.701: INFO: namespace projected-683 deletion completed in 22.16670108s

• [SLOW TEST:32.981 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:11:03.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-88ad2b1e-cd3e-460b-b535-fca3f20832df
STEP: Creating a pod to test consume secrets
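The knobs exercised here are the pod securityContext (runAsUser for "non-root", fsGroup for group ownership of volume files) combined with a projected-volume defaultMode. A sketch (IDs and names illustrative):

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-fsgroup-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000
      fsGroup: 2000              # volume files get this group
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["ls", "-l", "/etc/secret"]
      volumeMounts:
      - name: secret-vol
        mountPath: /etc/secret
    volumes:
    - name: secret-vol
      projected:
        defaultMode: 0440        # applies to every projected file
        sources:
        - secret: {name: demo-secret}
  EOF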
Dec 20 14:11:03.900: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cc49b726-3ae8-46ab-bca3-cd016e135b66" in namespace "projected-1368" to be "success or failure"
Dec 20 14:11:03.946: INFO: Pod "pod-projected-secrets-cc49b726-3ae8-46ab-bca3-cd016e135b66": Phase="Pending", Reason="", readiness=false. Elapsed: 46.42575ms
Dec 20 14:11:05.953: INFO: Pod "pod-projected-secrets-cc49b726-3ae8-46ab-bca3-cd016e135b66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053455751s
Dec 20 14:11:07.967: INFO: Pod "pod-projected-secrets-cc49b726-3ae8-46ab-bca3-cd016e135b66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0668233s
Dec 20 14:11:09.975: INFO: Pod "pod-projected-secrets-cc49b726-3ae8-46ab-bca3-cd016e135b66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075306569s
Dec 20 14:11:11.984: INFO: Pod "pod-projected-secrets-cc49b726-3ae8-46ab-bca3-cd016e135b66": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083915681s
Dec 20 14:11:13.995: INFO: Pod "pod-projected-secrets-cc49b726-3ae8-46ab-bca3-cd016e135b66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094672393s
STEP: Saw pod success
Dec 20 14:11:13.995: INFO: Pod "pod-projected-secrets-cc49b726-3ae8-46ab-bca3-cd016e135b66" satisfied condition "success or failure"
Dec 20 14:11:13.998: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-cc49b726-3ae8-46ab-bca3-cd016e135b66 container projected-secret-volume-test: 
STEP: delete the pod
Dec 20 14:11:14.145: INFO: Waiting for pod pod-projected-secrets-cc49b726-3ae8-46ab-bca3-cd016e135b66 to disappear
Dec 20 14:11:14.182: INFO: Pod pod-projected-secrets-cc49b726-3ae8-46ab-bca3-cd016e135b66 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:11:14.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1368" for this suite.
Dec 20 14:11:20.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:11:20.351: INFO: namespace projected-1368 deletion completed in 6.160066158s

• [SLOW TEST:16.649 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:11:20.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
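The contract being verified, as plain commands (namespace and pod names illustrative; --restart=Never forces a bare pod on older kubectl too):

  kubectl create namespace nsdelete-demo
  kubectl run busy --image=busybox --restart=Never -n nsdelete-demo -- sleep 3600
  kubectl delete namespace nsdelete-demo
  kubectl get pods -n nsdelete-demo   # fails: the namespace, and everything in it, is gone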
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:11:50.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1641" for this suite.
Dec 20 14:11:56.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:11:56.957: INFO: namespace namespaces-1641 deletion completed in 6.148934s
STEP: Destroying namespace "nsdeletetest-3766" for this suite.
Dec 20 14:11:56.959: INFO: Namespace nsdeletetest-3766 was already deleted
STEP: Destroying namespace "nsdeletetest-2634" for this suite.
Dec 20 14:12:02.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:12:03.112: INFO: namespace nsdeletetest-2634 deletion completed in 6.15210776s

• [SLOW TEST:42.761 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:12:03.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
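QOS class is derived from the resource spec: requests equal to limits for every container yields Guaranteed, requests below limits Burstable, and none at all BestEffort; the API server records the result in status.qosClass. A sketch:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-demo
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
      resources:
        requests: {cpu: 100m, memory: 64Mi}
        limits:   {cpu: 100m, memory: 64Mi}
  EOF
  kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # Guaranteed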
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:12:03.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4129" for this suite.
Dec 20 14:12:25.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:12:25.450: INFO: namespace pods-4129 deletion completed in 22.150674455s

• [SLOW TEST:22.337 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:12:25.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-40774018-ddda-4418-8532-ba3d77a1c188
STEP: Creating a pod to test consume configMaps
Dec 20 14:12:25.573: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70956af2-28ab-4193-b3a3-d8c826d10349" in namespace "projected-8117" to be "success or failure"
Dec 20 14:12:25.584: INFO: Pod "pod-projected-configmaps-70956af2-28ab-4193-b3a3-d8c826d10349": Phase="Pending", Reason="", readiness=false. Elapsed: 11.03619ms
Dec 20 14:12:27.597: INFO: Pod "pod-projected-configmaps-70956af2-28ab-4193-b3a3-d8c826d10349": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023842288s
Dec 20 14:12:29.607: INFO: Pod "pod-projected-configmaps-70956af2-28ab-4193-b3a3-d8c826d10349": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033569785s
Dec 20 14:12:31.615: INFO: Pod "pod-projected-configmaps-70956af2-28ab-4193-b3a3-d8c826d10349": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041972462s
Dec 20 14:12:33.627: INFO: Pod "pod-projected-configmaps-70956af2-28ab-4193-b3a3-d8c826d10349": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05372402s
Dec 20 14:12:35.638: INFO: Pod "pod-projected-configmaps-70956af2-28ab-4193-b3a3-d8c826d10349": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064757751s
STEP: Saw pod success
Dec 20 14:12:35.638: INFO: Pod "pod-projected-configmaps-70956af2-28ab-4193-b3a3-d8c826d10349" satisfied condition "success or failure"
Dec 20 14:12:35.644: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-70956af2-28ab-4193-b3a3-d8c826d10349 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 20 14:12:35.701: INFO: Waiting for pod pod-projected-configmaps-70956af2-28ab-4193-b3a3-d8c826d10349 to disappear
Dec 20 14:12:35.714: INFO: Pod pod-projected-configmaps-70956af2-28ab-4193-b3a3-d8c826d10349 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:12:35.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8117" for this suite.
Dec 20 14:12:41.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:12:41.907: INFO: namespace projected-8117 deletion completed in 6.181094224s

• [SLOW TEST:16.456 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:12:41.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
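"Override all" means setting both command (which replaces the image's ENTRYPOINT) and args (which replaces its CMD). A sketch:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: override-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["echo"]                # replaces the image ENTRYPOINT
      args: ["overridden", "args"]     # replaces the image CMD
  EOF
  kubectl logs override-demo   # overridden args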
Dec 20 14:12:42.044: INFO: Waiting up to 5m0s for pod "client-containers-e4f26d13-7624-4503-8959-61ae4854e210" in namespace "containers-6560" to be "success or failure"
Dec 20 14:12:42.055: INFO: Pod "client-containers-e4f26d13-7624-4503-8959-61ae4854e210": Phase="Pending", Reason="", readiness=false. Elapsed: 11.300495ms
Dec 20 14:12:44.066: INFO: Pod "client-containers-e4f26d13-7624-4503-8959-61ae4854e210": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022237574s
Dec 20 14:12:46.071: INFO: Pod "client-containers-e4f26d13-7624-4503-8959-61ae4854e210": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02663388s
Dec 20 14:12:48.077: INFO: Pod "client-containers-e4f26d13-7624-4503-8959-61ae4854e210": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03279085s
Dec 20 14:12:50.083: INFO: Pod "client-containers-e4f26d13-7624-4503-8959-61ae4854e210": Phase="Running", Reason="", readiness=true. Elapsed: 8.039062852s
Dec 20 14:12:52.093: INFO: Pod "client-containers-e4f26d13-7624-4503-8959-61ae4854e210": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048696818s
STEP: Saw pod success
Dec 20 14:12:52.093: INFO: Pod "client-containers-e4f26d13-7624-4503-8959-61ae4854e210" satisfied condition "success or failure"
Dec 20 14:12:52.097: INFO: Trying to get logs from node iruya-node pod client-containers-e4f26d13-7624-4503-8959-61ae4854e210 container test-container: 
STEP: delete the pod
Dec 20 14:12:52.271: INFO: Waiting for pod client-containers-e4f26d13-7624-4503-8959-61ae4854e210 to disappear
Dec 20 14:12:52.295: INFO: Pod client-containers-e4f26d13-7624-4503-8959-61ae4854e210 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:12:52.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6560" for this suite.
Dec 20 14:12:58.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:12:58.491: INFO: namespace containers-6560 deletion completed in 6.189039461s

• [SLOW TEST:16.583 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:12:58.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
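The cpu limit reaches the container through a downwardAPI item with a resourceFieldRef. A sketch (names illustrative; the value is scaled by an optional divisor, 1 by default):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cpu-limit-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["cat", "/podinfo/cpu_limit"]
      resources:
        limits: {cpu: "2"}
      volumeMounts:
      - name: podinfo
        mountPath: /podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu
  EOF
  kubectl logs cpu-limit-demo   # 2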
Dec 20 14:12:58.653: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1592a5b-7ced-4d4f-8d4f-5bf9eb145cee" in namespace "downward-api-5763" to be "success or failure"
Dec 20 14:12:58.708: INFO: Pod "downwardapi-volume-a1592a5b-7ced-4d4f-8d4f-5bf9eb145cee": Phase="Pending", Reason="", readiness=false. Elapsed: 55.179795ms
Dec 20 14:13:00.720: INFO: Pod "downwardapi-volume-a1592a5b-7ced-4d4f-8d4f-5bf9eb145cee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066615701s
Dec 20 14:13:02.753: INFO: Pod "downwardapi-volume-a1592a5b-7ced-4d4f-8d4f-5bf9eb145cee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100105649s
Dec 20 14:13:04.771: INFO: Pod "downwardapi-volume-a1592a5b-7ced-4d4f-8d4f-5bf9eb145cee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117503017s
Dec 20 14:13:06.788: INFO: Pod "downwardapi-volume-a1592a5b-7ced-4d4f-8d4f-5bf9eb145cee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135407414s
Dec 20 14:13:08.796: INFO: Pod "downwardapi-volume-a1592a5b-7ced-4d4f-8d4f-5bf9eb145cee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.143291345s
STEP: Saw pod success
Dec 20 14:13:08.796: INFO: Pod "downwardapi-volume-a1592a5b-7ced-4d4f-8d4f-5bf9eb145cee" satisfied condition "success or failure"
Dec 20 14:13:08.801: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a1592a5b-7ced-4d4f-8d4f-5bf9eb145cee container client-container: 
STEP: delete the pod
Dec 20 14:13:08.851: INFO: Waiting for pod downwardapi-volume-a1592a5b-7ced-4d4f-8d4f-5bf9eb145cee to disappear
Dec 20 14:13:08.859: INFO: Pod downwardapi-volume-a1592a5b-7ced-4d4f-8d4f-5bf9eb145cee no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:13:08.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5763" for this suite.
Dec 20 14:13:14.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:13:15.026: INFO: namespace downward-api-5763 deletion completed in 6.161527782s

• [SLOW TEST:16.535 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:13:15.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 20 14:13:23.713: INFO: Successfully updated pod "pod-update-9c6c786b-1165-4ad2-abf9-4c2791426614"
STEP: verifying the updated pod is in kubernetes
Dec 20 14:13:23.802: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:13:23.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-123" for this suite.
Dec 20 14:13:45.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:13:45.953: INFO: namespace pods-123 deletion completed in 22.136091258s

• [SLOW TEST:30.926 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:13:45.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
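The two "expected 0 ... got ..." steps are the test polling until the collector catches up: this is the inverse of the orphaning case above, a default (background-cascading) delete in which the garbage collector walks ownerReferences from the Deployment to its ReplicaSet and on to the pods. Roughly (names illustrative):

  kubectl delete deployment demo-deploy       # cascades in the background by default
  kubectl get rs,pods -l app=demo-deploy      # drains to empty shortly after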
STEP: Gathering metrics
W1220 14:13:47.607735       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 14:13:47.607: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:13:47.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6662" for this suite.
Dec 20 14:13:54.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:13:54.138: INFO: namespace gc-6662 deletion completed in 6.523427932s

• [SLOW TEST:8.186 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
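
What "not orphaning" means here: the Deployment is deleted with a propagation policy that lets the garbage collector remove the dependent ReplicaSet and its pods (the intermediate "expected 0 rs, got 1 rs" polls above are that collection in progress). A client-go sketch, assuming recent signatures and illustrative names:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Background (or Foreground) propagation deletes dependents;
	// metav1.DeletePropagationOrphan would leave the ReplicaSet behind.
	propagation := metav1.DeletePropagationBackground
	err = client.AppsV1().Deployments("default").Delete(
		context.TODO(), "example-deployment",
		metav1.DeleteOptions{PropagationPolicy: &propagation})
	if err != nil {
		panic(err)
	}
}
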
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:13:54.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:14:02.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2110" for this suite.
Dec 20 14:14:08.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:14:08.790: INFO: namespace emptydir-wrapper-2110 deletion completed in 6.305481287s

• [SLOW TEST:14.651 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
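
The pod in this test mounts a secret volume and a configMap volume side by side and expects the two not to conflict. A sketch of that pod shape, with all names illustrative; it prints the manifest as JSON:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-volume-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-vol", MountPath: "/etc/secret-volume"},
					{Name: "configmap-vol", MountPath: "/etc/configmap-volume"},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "secret-vol", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"}}},
				{Name: "configmap-vol", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"}}}},
			},
		},
	}
	json.NewEncoder(os.Stdout).Encode(pod)
}
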
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:14:08.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-ee1d4a68-f4df-4ef8-88fb-24855d38db6f
STEP: Creating a pod to test consume secrets
Dec 20 14:14:08.977: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8ae60073-930a-4536-b58f-f7d51bba9b6d" in namespace "projected-6270" to be "success or failure"
Dec 20 14:14:08.982: INFO: Pod "pod-projected-secrets-8ae60073-930a-4536-b58f-f7d51bba9b6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.52659ms
Dec 20 14:14:10.991: INFO: Pod "pod-projected-secrets-8ae60073-930a-4536-b58f-f7d51bba9b6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014246205s
Dec 20 14:14:13.002: INFO: Pod "pod-projected-secrets-8ae60073-930a-4536-b58f-f7d51bba9b6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024603889s
Dec 20 14:14:15.016: INFO: Pod "pod-projected-secrets-8ae60073-930a-4536-b58f-f7d51bba9b6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038900357s
Dec 20 14:14:17.033: INFO: Pod "pod-projected-secrets-8ae60073-930a-4536-b58f-f7d51bba9b6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055643299s
STEP: Saw pod success
Dec 20 14:14:17.033: INFO: Pod "pod-projected-secrets-8ae60073-930a-4536-b58f-f7d51bba9b6d" satisfied condition "success or failure"
Dec 20 14:14:17.039: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-8ae60073-930a-4536-b58f-f7d51bba9b6d container projected-secret-volume-test: 
STEP: delete the pod
Dec 20 14:14:17.102: INFO: Waiting for pod pod-projected-secrets-8ae60073-930a-4536-b58f-f7d51bba9b6d to disappear
Dec 20 14:14:17.112: INFO: Pod pod-projected-secrets-8ae60073-930a-4536-b58f-f7d51bba9b6d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:14:17.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6270" for this suite.
Dec 20 14:14:23.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:14:23.373: INFO: namespace projected-6270 deletion completed in 6.242570412s

• [SLOW TEST:14.583 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
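
The spec under test: a projected volume sourcing a secret, with DefaultMode setting the permission bits on every projected file. A sketch with illustrative names (0440 is one plausible mode; the actual test value is not shown in the log):

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0440) // octal file mode applied to each projected file
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected", MountPath: "/etc/projected",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
		},
	}
	json.NewEncoder(os.Stdout).Encode(pod)
}
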
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:14:23.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-f98a5a43-0458-47ce-92d8-abeb624066bf
STEP: Creating a pod to test consume secrets
Dec 20 14:14:23.517: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-69d2c645-22b3-4012-a4a7-afbb4d3a03ac" in namespace "projected-5516" to be "success or failure"
Dec 20 14:14:23.527: INFO: Pod "pod-projected-secrets-69d2c645-22b3-4012-a4a7-afbb4d3a03ac": Phase="Pending", Reason="", readiness=false. Elapsed: 9.418989ms
Dec 20 14:14:25.543: INFO: Pod "pod-projected-secrets-69d2c645-22b3-4012-a4a7-afbb4d3a03ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024844096s
Dec 20 14:14:27.548: INFO: Pod "pod-projected-secrets-69d2c645-22b3-4012-a4a7-afbb4d3a03ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030481612s
Dec 20 14:14:29.557: INFO: Pod "pod-projected-secrets-69d2c645-22b3-4012-a4a7-afbb4d3a03ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039494588s
Dec 20 14:14:31.609: INFO: Pod "pod-projected-secrets-69d2c645-22b3-4012-a4a7-afbb4d3a03ac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091415644s
Dec 20 14:14:33.629: INFO: Pod "pod-projected-secrets-69d2c645-22b3-4012-a4a7-afbb4d3a03ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111771917s
STEP: Saw pod success
Dec 20 14:14:33.630: INFO: Pod "pod-projected-secrets-69d2c645-22b3-4012-a4a7-afbb4d3a03ac" satisfied condition "success or failure"
Dec 20 14:14:33.638: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-69d2c645-22b3-4012-a4a7-afbb4d3a03ac container projected-secret-volume-test: 
STEP: delete the pod
Dec 20 14:14:33.755: INFO: Waiting for pod pod-projected-secrets-69d2c645-22b3-4012-a4a7-afbb4d3a03ac to disappear
Dec 20 14:14:33.810: INFO: Pod pod-projected-secrets-69d2c645-22b3-4012-a4a7-afbb4d3a03ac no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:14:33.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5516" for this suite.
Dec 20 14:14:39.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:14:40.006: INFO: namespace projected-5516 deletion completed in 6.182975099s

• [SLOW TEST:16.632 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
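
The "with mappings" variant differs only in adding Items entries, so a chosen key is projected to a chosen relative path instead of to a file named after the key. A sketch of just the volume, names illustrative:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
						// Without Items every key becomes a file named after the
						// key; with Items only the listed keys are projected, at
						// the given relative paths.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				}},
			},
		},
	}
	json.NewEncoder(os.Stdout).Encode(vol)
}
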
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:14:40.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-b9431bcd-4683-41f1-9904-15470766121d in namespace container-probe-2378
Dec 20 14:14:50.246: INFO: Started pod test-webserver-b9431bcd-4683-41f1-9904-15470766121d in namespace container-probe-2378
STEP: checking the pod's current state and verifying that restartCount is present
Dec 20 14:14:50.250: INFO: Initial restart count of pod test-webserver-b9431bcd-4683-41f1-9904-15470766121d is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:18:52.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2378" for this suite.
Dec 20 14:18:58.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:18:58.469: INFO: namespace container-probe-2378 deletion completed in 6.29216514s

• [SLOW TEST:258.462 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
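
The probe under test keeps restartCount at 0 for the full observation window because /healthz keeps answering. The container shape, using the v1.15 core API (the embedded Handler field was renamed ProbeHandler in newer releases); port and timings are illustrative:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "test-webserver",
		Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0",
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt(80),
				},
			},
			InitialDelaySeconds: 15, // give the server time to start
			TimeoutSeconds:      1,
			PeriodSeconds:       10, // kubelet probes every 10s
			FailureThreshold:    3,  // 3 consecutive failures => restart
		},
	}
	json.NewEncoder(os.Stdout).Encode(container)
}
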
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:18:58.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2335/configmap-test-4c2204f5-fbdf-49b3-a0c3-7c3010eb1604
STEP: Creating a pod to test consume configMaps
Dec 20 14:18:58.692: INFO: Waiting up to 5m0s for pod "pod-configmaps-cdaa8bfe-95f3-4e55-aae9-aa6a6df69a96" in namespace "configmap-2335" to be "success or failure"
Dec 20 14:18:58.721: INFO: Pod "pod-configmaps-cdaa8bfe-95f3-4e55-aae9-aa6a6df69a96": Phase="Pending", Reason="", readiness=false. Elapsed: 29.186821ms
Dec 20 14:19:00.732: INFO: Pod "pod-configmaps-cdaa8bfe-95f3-4e55-aae9-aa6a6df69a96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040107763s
Dec 20 14:19:02.743: INFO: Pod "pod-configmaps-cdaa8bfe-95f3-4e55-aae9-aa6a6df69a96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05079008s
Dec 20 14:19:04.752: INFO: Pod "pod-configmaps-cdaa8bfe-95f3-4e55-aae9-aa6a6df69a96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059826589s
Dec 20 14:19:06.759: INFO: Pod "pod-configmaps-cdaa8bfe-95f3-4e55-aae9-aa6a6df69a96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067017209s
Dec 20 14:19:08.812: INFO: Pod "pod-configmaps-cdaa8bfe-95f3-4e55-aae9-aa6a6df69a96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11938331s
STEP: Saw pod success
Dec 20 14:19:08.812: INFO: Pod "pod-configmaps-cdaa8bfe-95f3-4e55-aae9-aa6a6df69a96" satisfied condition "success or failure"
Dec 20 14:19:08.816: INFO: Trying to get logs from node iruya-node pod pod-configmaps-cdaa8bfe-95f3-4e55-aae9-aa6a6df69a96 container env-test: 
STEP: delete the pod
Dec 20 14:19:08.913: INFO: Waiting for pod pod-configmaps-cdaa8bfe-95f3-4e55-aae9-aa6a6df69a96 to disappear
Dec 20 14:19:09.048: INFO: Pod pod-configmaps-cdaa8bfe-95f3-4e55-aae9-aa6a6df69a96 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:19:09.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2335" for this suite.
Dec 20 14:19:15.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:19:15.405: INFO: namespace configmap-2335 deletion completed in 6.345178784s

• [SLOW TEST:16.934 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
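
Consuming a configMap "via the environment" means an EnvVar whose value is resolved from a configMap key at container start. A sketch of the env-test container shape, names illustrative:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:    "env-test",
		Image:   "busybox:1.29",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "CONFIG_DATA_1",
			ValueFrom: &corev1.EnvVarSource{
				ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
					Key:                  "data-1",
				},
			},
		}},
	}
	json.NewEncoder(os.Stdout).Encode(container)
}
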
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:19:15.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-vr49
STEP: Creating a pod to test atomic-volume-subpath
Dec 20 14:19:15.576: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vr49" in namespace "subpath-7221" to be "success or failure"
Dec 20 14:19:15.709: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Pending", Reason="", readiness=false. Elapsed: 132.922305ms
Dec 20 14:19:17.722: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146415266s
Dec 20 14:19:19.733: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157167302s
Dec 20 14:19:21.741: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165308183s
Dec 20 14:19:23.748: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172476337s
Dec 20 14:19:25.756: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Running", Reason="", readiness=true. Elapsed: 10.180633241s
Dec 20 14:19:27.766: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Running", Reason="", readiness=true. Elapsed: 12.189746881s
Dec 20 14:19:29.775: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Running", Reason="", readiness=true. Elapsed: 14.199264937s
Dec 20 14:19:31.789: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Running", Reason="", readiness=true. Elapsed: 16.212852018s
Dec 20 14:19:33.800: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Running", Reason="", readiness=true. Elapsed: 18.224209345s
Dec 20 14:19:35.812: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Running", Reason="", readiness=true. Elapsed: 20.23581348s
Dec 20 14:19:37.826: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Running", Reason="", readiness=true. Elapsed: 22.250556938s
Dec 20 14:19:39.874: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Running", Reason="", readiness=true. Elapsed: 24.298244951s
Dec 20 14:19:41.902: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Running", Reason="", readiness=true. Elapsed: 26.326338243s
Dec 20 14:19:43.920: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Running", Reason="", readiness=true. Elapsed: 28.344705134s
Dec 20 14:19:45.934: INFO: Pod "pod-subpath-test-projected-vr49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.358626304s
STEP: Saw pod success
Dec 20 14:19:45.935: INFO: Pod "pod-subpath-test-projected-vr49" satisfied condition "success or failure"
Dec 20 14:19:45.943: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-vr49 container test-container-subpath-projected-vr49: 
STEP: delete the pod
Dec 20 14:19:46.044: INFO: Waiting for pod pod-subpath-test-projected-vr49 to disappear
Dec 20 14:19:46.055: INFO: Pod pod-subpath-test-projected-vr49 no longer exists
STEP: Deleting pod pod-subpath-test-projected-vr49
Dec 20 14:19:46.055: INFO: Deleting pod "pod-subpath-test-projected-vr49" in namespace "subpath-7221"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:19:46.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7221" for this suite.
Dec 20 14:19:52.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:19:52.494: INFO: namespace subpath-7221 deletion completed in 6.291631683s

• [SLOW TEST:37.089 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
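
Projected, secret, configMap, and downwardAPI volumes are the "atomic writer" volumes: the kubelet writes their contents atomically, and this test verifies that a subPath mount into such a volume still sees consistent data. The mount shape, names illustrative:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:  "test-container-subpath",
		Image: "busybox:1.29",
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "test-volume",
			MountPath: "/test-volume/file", // mount point inside the container
			SubPath:   "projected-file",    // single entry within the volume
		}},
	}
	json.NewEncoder(os.Stdout).Encode(container)
}
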
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:19:52.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-t75k
STEP: Creating a pod to test atomic-volume-subpath
Dec 20 14:19:52.734: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-t75k" in namespace "subpath-4442" to be "success or failure"
Dec 20 14:19:52.748: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Pending", Reason="", readiness=false. Elapsed: 14.377835ms
Dec 20 14:19:54.760: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025898248s
Dec 20 14:19:56.770: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036421254s
Dec 20 14:19:58.780: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045926478s
Dec 20 14:20:00.786: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052646248s
Dec 20 14:20:02.793: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Running", Reason="", readiness=true. Elapsed: 10.059655843s
Dec 20 14:20:04.801: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Running", Reason="", readiness=true. Elapsed: 12.067504586s
Dec 20 14:20:06.809: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Running", Reason="", readiness=true. Elapsed: 14.074934477s
Dec 20 14:20:08.815: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Running", Reason="", readiness=true. Elapsed: 16.081441019s
Dec 20 14:20:10.824: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Running", Reason="", readiness=true. Elapsed: 18.089804371s
Dec 20 14:20:12.838: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Running", Reason="", readiness=true. Elapsed: 20.103854398s
Dec 20 14:20:14.927: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Running", Reason="", readiness=true. Elapsed: 22.193713573s
Dec 20 14:20:16.941: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Running", Reason="", readiness=true. Elapsed: 24.207095701s
Dec 20 14:20:18.956: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Running", Reason="", readiness=true. Elapsed: 26.222008015s
Dec 20 14:20:20.969: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Running", Reason="", readiness=true. Elapsed: 28.235409456s
Dec 20 14:20:23.011: INFO: Pod "pod-subpath-test-downwardapi-t75k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.276896097s
STEP: Saw pod success
Dec 20 14:20:23.011: INFO: Pod "pod-subpath-test-downwardapi-t75k" satisfied condition "success or failure"
Dec 20 14:20:23.024: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-t75k container test-container-subpath-downwardapi-t75k: 
STEP: delete the pod
Dec 20 14:20:23.096: INFO: Waiting for pod pod-subpath-test-downwardapi-t75k to disappear
Dec 20 14:20:23.171: INFO: Pod pod-subpath-test-downwardapi-t75k no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-t75k
Dec 20 14:20:23.171: INFO: Deleting pod "pod-subpath-test-downwardapi-t75k" in namespace "subpath-4442"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:20:23.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4442" for this suite.
Dec 20 14:20:29.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:20:29.373: INFO: namespace subpath-4442 deletion completed in 6.193769837s

• [SLOW TEST:36.878 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
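
The downward-pod variant sources the volume from the downward API, i.e. pod metadata exposed as files. A sketch of that volume, using standard field paths and illustrative file names:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "downward-vol",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{
					{Path: "podname", FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}},
					{Path: "namespace", FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}},
				},
			},
		},
	}
	json.NewEncoder(os.Stdout).Encode(vol)
}
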
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:20:29.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 20 14:20:29.458: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 20 14:20:29.467: INFO: Waiting for terminating namespaces to be deleted...
Dec 20 14:20:29.470: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 20 14:20:29.479: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 20 14:20:29.479: INFO: 	Container weave ready: true, restart count 0
Dec 20 14:20:29.479: INFO: 	Container weave-npc ready: true, restart count 0
Dec 20 14:20:29.479: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 20 14:20:29.479: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 20 14:20:29.479: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 20 14:20:29.493: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 20 14:20:29.493: INFO: 	Container etcd ready: true, restart count 0
Dec 20 14:20:29.493: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 20 14:20:29.493: INFO: 	Container weave ready: true, restart count 0
Dec 20 14:20:29.493: INFO: 	Container weave-npc ready: true, restart count 0
Dec 20 14:20:29.493: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 20 14:20:29.493: INFO: 	Container coredns ready: true, restart count 0
Dec 20 14:20:29.493: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 20 14:20:29.493: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 20 14:20:29.493: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 20 14:20:29.493: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 20 14:20:29.493: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 20 14:20:29.493: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 20 14:20:29.493: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 20 14:20:29.493: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 20 14:20:29.493: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 20 14:20:29.493: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e21a33dbc22095], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:20:30.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4095" for this suite.
Dec 20 14:20:36.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:20:36.796: INFO: namespace sched-pred-4095 deletion completed in 6.215831124s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.422 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
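
The predicate test creates a pod whose nodeSelector matches no node label, then watches for exactly the FailedScheduling event quoted above. A sketch of such a pod, label key/value illustrative:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonempty"}, // matches no node
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	json.NewEncoder(os.Stdout).Encode(pod)
}
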
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:20:36.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-9w6j
STEP: Creating a pod to test atomic-volume-subpath
Dec 20 14:20:36.962: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9w6j" in namespace "subpath-7492" to be "success or failure"
Dec 20 14:20:36.978: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Pending", Reason="", readiness=false. Elapsed: 15.151368ms
Dec 20 14:20:38.986: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023539188s
Dec 20 14:20:40.992: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029252717s
Dec 20 14:20:42.997: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034676841s
Dec 20 14:20:45.008: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Running", Reason="", readiness=true. Elapsed: 8.045527347s
Dec 20 14:20:47.016: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Running", Reason="", readiness=true. Elapsed: 10.05383744s
Dec 20 14:20:49.024: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Running", Reason="", readiness=true. Elapsed: 12.06199807s
Dec 20 14:20:51.032: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Running", Reason="", readiness=true. Elapsed: 14.069224536s
Dec 20 14:20:53.040: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Running", Reason="", readiness=true. Elapsed: 16.077458093s
Dec 20 14:20:55.056: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Running", Reason="", readiness=true. Elapsed: 18.093704463s
Dec 20 14:20:57.067: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Running", Reason="", readiness=true. Elapsed: 20.104264227s
Dec 20 14:20:59.074: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Running", Reason="", readiness=true. Elapsed: 22.111716031s
Dec 20 14:21:01.083: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Running", Reason="", readiness=true. Elapsed: 24.12032433s
Dec 20 14:21:03.091: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Running", Reason="", readiness=true. Elapsed: 26.128910902s
Dec 20 14:21:05.098: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Running", Reason="", readiness=true. Elapsed: 28.135823595s
Dec 20 14:21:07.125: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Running", Reason="", readiness=true. Elapsed: 30.162985686s
Dec 20 14:21:09.147: INFO: Pod "pod-subpath-test-secret-9w6j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.184088007s
STEP: Saw pod success
Dec 20 14:21:09.147: INFO: Pod "pod-subpath-test-secret-9w6j" satisfied condition "success or failure"
Dec 20 14:21:09.154: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-9w6j container test-container-subpath-secret-9w6j: 
STEP: delete the pod
Dec 20 14:21:09.515: INFO: Waiting for pod pod-subpath-test-secret-9w6j to disappear
Dec 20 14:21:09.521: INFO: Pod pod-subpath-test-secret-9w6j no longer exists
STEP: Deleting pod pod-subpath-test-secret-9w6j
Dec 20 14:21:09.521: INFO: Deleting pod "pod-subpath-test-secret-9w6j" in namespace "subpath-7492"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:21:09.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7492" for this suite.
Dec 20 14:21:15.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:21:15.660: INFO: namespace subpath-7492 deletion completed in 6.129412691s

• [SLOW TEST:38.863 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
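
The Pending/Running/Succeeded ladders printed by these volume tests come from a plain poll loop over the pod phase. A sketch with wait.PollImmediate, mirroring the log's 2s interval and 5m0s timeout (recent client-go signatures; names illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("default").Get(
			context.TODO(), "pod-subpath-test", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", pod.Name, pod.Status.Phase)
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // condition "success or failure" met
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s failed", pod.Name)
		}
		return false, nil // still Pending/Running; keep polling
	})
	if err != nil {
		panic(err)
	}
}
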
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:21:15.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 20 14:21:25.042: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:21:25.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2677" for this suite.
Dec 20 14:21:31.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:21:31.330: INFO: namespace container-runtime-2677 deletion completed in 6.241439109s

• [SLOW TEST:15.668 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
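
The container under test writes "OK" to its termination-message file and exits cleanly; with FallbackToLogsOnError the kubelet only falls back to the log tail on a non-zero exit, so here the file contents win. A sketch of the container shape, image and command illustrative:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox:1.29",
		Command: []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
		TerminationMessagePath:   "/dev/termination-log", // the default path
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	json.NewEncoder(os.Stdout).Encode(container)
}
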
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:21:31.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-7ace5078-3851-47ef-945b-0cfea8406fdf
STEP: Creating a pod to test consume configMaps
Dec 20 14:21:31.515: INFO: Waiting up to 5m0s for pod "pod-configmaps-1c82600a-47fd-45de-9c0d-ba6a0ddbfa3b" in namespace "configmap-8345" to be "success or failure"
Dec 20 14:21:31.528: INFO: Pod "pod-configmaps-1c82600a-47fd-45de-9c0d-ba6a0ddbfa3b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.952982ms
Dec 20 14:21:33.539: INFO: Pod "pod-configmaps-1c82600a-47fd-45de-9c0d-ba6a0ddbfa3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024742019s
Dec 20 14:21:35.567: INFO: Pod "pod-configmaps-1c82600a-47fd-45de-9c0d-ba6a0ddbfa3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052634859s
Dec 20 14:21:37.578: INFO: Pod "pod-configmaps-1c82600a-47fd-45de-9c0d-ba6a0ddbfa3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063447722s
Dec 20 14:21:39.594: INFO: Pod "pod-configmaps-1c82600a-47fd-45de-9c0d-ba6a0ddbfa3b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079134677s
Dec 20 14:21:41.608: INFO: Pod "pod-configmaps-1c82600a-47fd-45de-9c0d-ba6a0ddbfa3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093478853s
STEP: Saw pod success
Dec 20 14:21:41.608: INFO: Pod "pod-configmaps-1c82600a-47fd-45de-9c0d-ba6a0ddbfa3b" satisfied condition "success or failure"
Dec 20 14:21:41.614: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1c82600a-47fd-45de-9c0d-ba6a0ddbfa3b container configmap-volume-test: 
STEP: delete the pod
Dec 20 14:21:41.686: INFO: Waiting for pod pod-configmaps-1c82600a-47fd-45de-9c0d-ba6a0ddbfa3b to disappear
Dec 20 14:21:41.690: INFO: Pod pod-configmaps-1c82600a-47fd-45de-9c0d-ba6a0ddbfa3b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:21:41.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8345" for this suite.
Dec 20 14:21:47.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:21:47.907: INFO: namespace configmap-8345 deletion completed in 6.210978619s

• [SLOW TEST:16.577 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
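
Same DefaultMode idea as the secret-volume tests earlier, with a configMap source instead. A sketch of just the volume (names illustrative; 0400 is one example mode):

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
				DefaultMode:          &mode, // applied to every projected file
			},
		},
	}
	json.NewEncoder(os.Stdout).Encode(vol)
}
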
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:21:47.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7772
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7772
STEP: Creating statefulset with conflicting port in namespace statefulset-7772
STEP: Waiting until pod test-pod will start running in namespace statefulset-7772
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7772
Dec 20 14:22:00.246: INFO: Observed stateful pod in namespace: statefulset-7772, name: ss-0, uid: 15070c20-df09-45c7-852f-f5577d8eb072, status phase: Pending. Waiting for statefulset controller to delete.
Dec 20 14:27:00.247: INFO: Pod ss-0 expected to be re-created at least once
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 20 14:27:00.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-7772'
Dec 20 14:27:02.419: INFO: stderr: ""
Dec 20 14:27:02.419: INFO: stdout: "Name:           ss-0\nNamespace:      statefulset-7772\nPriority:       0\nNode:           iruya-node/\nLabels:         baz=blah\n                controller-revision-hash=ss-6f98bdb9c4\n                foo=bar\n                statefulset.kubernetes.io/pod-name=ss-0\nAnnotations:    \nStatus:         Pending\nIP:             \nControlled By:  StatefulSet/ss\nContainers:\n  nginx:\n    Image:        docker.io/library/nginx:1.14-alpine\n    Port:         21017/TCP\n    Host Port:    21017/TCP\n    Environment:  \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9ttmq (ro)\nVolumes:\n  default-token-9ttmq:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-9ttmq\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason            Age   From                 Message\n  ----     ------            ----  ----                 -------\n  Warning  PodFitsHostPorts  5m5s  kubelet, iruya-node  Predicate PodFitsHostPorts failed\n"
Dec 20 14:27:02.419: INFO: 
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-7772
Priority:       0
Node:           iruya-node/
Labels:         baz=blah
                controller-revision-hash=ss-6f98bdb9c4
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    
Status:         Pending
IP:             
Controlled By:  StatefulSet/ss
Containers:
  nginx:
    Image:        docker.io/library/nginx:1.14-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9ttmq (ro)
Volumes:
  default-token-9ttmq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9ttmq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                 Message
  ----     ------            ----  ----                 -------
  Warning  PodFitsHostPorts  5m5s  kubelet, iruya-node  Predicate PodFitsHostPorts failed

Dec 20 14:27:02.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-7772 --tail=100'
Dec 20 14:27:02.615: INFO: rc: 1
Dec 20 14:27:02.616: INFO: 
Last 100 log lines of ss-0:

Dec 20 14:27:02.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-7772'
Dec 20 14:27:02.896: INFO: stderr: ""
Dec 20 14:27:02.896: INFO: stdout: "Name:         test-pod\nNamespace:    statefulset-7772\nPriority:     0\nNode:         iruya-node/10.96.3.65\nStart Time:   Fri, 20 Dec 2019 14:21:48 +0000\nLabels:       \nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nContainers:\n  nginx:\n    Container ID:   docker://fe982ed14b68495f298b2f0f377c0fec273a5d54494784424927ed738db98af5\n    Image:          docker.io/library/nginx:1.14-alpine\n    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n    Port:           21017/TCP\n    Host Port:      21017/TCP\n    State:          Running\n      Started:      Fri, 20 Dec 2019 14:21:58 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9ttmq (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-9ttmq:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-9ttmq\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason   Age   From                 Message\n  ----    ------   ----  ----                 -------\n  Normal  Pulled   5m8s  kubelet, iruya-node  Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n  Normal  Created  5m5s  kubelet, iruya-node  Created container nginx\n  Normal  Started  5m4s  kubelet, iruya-node  Started container nginx\n"
Dec 20 14:27:02.896: INFO: 
Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-7772
Priority:     0
Node:         iruya-node/10.96.3.65
Start Time:   Fri, 20 Dec 2019 14:21:48 +0000
Labels:       
Annotations:  
Status:       Running
IP:           10.44.0.1
Containers:
  nginx:
    Container ID:   docker://fe982ed14b68495f298b2f0f377c0fec273a5d54494784424927ed738db98af5
    Image:          docker.io/library/nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Fri, 20 Dec 2019 14:21:58 +0000
    Ready:          True
    Restart Count:  0
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9ttmq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-9ttmq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9ttmq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                 Message
  ----    ------   ----  ----                 -------
  Normal  Pulled   5m8s  kubelet, iruya-node  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
  Normal  Created  5m5s  kubelet, iruya-node  Created container nginx
  Normal  Started  5m4s  kubelet, iruya-node  Started container nginx

Dec 20 14:27:02.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-7772 --tail=100'
Dec 20 14:27:03.069: INFO: stderr: ""
Dec 20 14:27:03.069: INFO: stdout: ""
Dec 20 14:27:03.069: INFO: 
Last 100 log lines of test-pod:

Dec 20 14:27:03.069: INFO: Deleting all statefulset in ns statefulset-7772
Dec 20 14:27:03.076: INFO: Scaling statefulset ss to 0
Dec 20 14:27:13.105: INFO: Waiting for statefulset status.replicas updated to 0
Dec 20 14:27:13.108: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Collecting events from namespace "statefulset-7772".
STEP: Found 16 events.
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:48 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:48 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:49 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:49 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-7772/ss is recreating failed Pod ss-0
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:49 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:49 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:50 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:51 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:51 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:52 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:52 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:54 +0000 UTC - event for test-pod: {kubelet iruya-node} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:56 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:57 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:57 +0000 UTC - event for test-pod: {kubelet iruya-node} Created: Created container nginx
Dec 20 14:27:13.131: INFO: At 2019-12-20 14:21:58 +0000 UTC - event for test-pod: {kubelet iruya-node} Started: Started container nginx
Dec 20 14:27:13.135: INFO: POD       NODE        PHASE    GRACE  CONDITIONS
Dec 20 14:27:13.135: INFO: test-pod  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:21:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:21:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:21:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:21:48 +0000 UTC  }]
Dec 20 14:27:13.135: INFO: 
Dec 20 14:27:13.163: INFO: 
Logging node info for node iruya-node
Dec 20 14:27:13.167: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-node,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-node,UID:b2aa273d-23ea-4c86-9e2f-72569e3392bd,ResourceVersion:17399725,Generation:0,CreationTimestamp:2019-08-04 09:01:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-node,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-10-12 11:56:49 +0000 UTC 2019-10-12 11:56:49 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2019-12-20 14:26:38 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-12-20 14:26:38 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-12-20 14:26:38 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-12-20 14:26:38 +0000 UTC 2019-08-04 09:02:19 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.3.65} {Hostname iruya-node}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f573dcf04d6f4a87856a35d266a2fa7a,SystemUUID:F573DCF0-4D6F-4A87-856A-35D266A2FA7A,BootID:8baf4beb-8391-43e6-b17b-b1e184b5370a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 11443478} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} 
{[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest] 5496756} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e busybox:latest] 1219782} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Dec 20 14:27:13.168: INFO: 
Logging kubelet events for node iruya-node
Dec 20 14:27:13.173: INFO: 
Logging pods the kubelet thinks are on node iruya-node
Dec 20 14:27:13.185: INFO: kube-proxy-976zl started at 2019-08-04 09:01:39 +0000 UTC (0+1 container statuses recorded)
Dec 20 14:27:13.185: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 20 14:27:13.185: INFO: weave-net-rlp57 started at 2019-10-12 11:56:39 +0000 UTC (0+2 container statuses recorded)
Dec 20 14:27:13.185: INFO: 	Container weave ready: true, restart count 0
Dec 20 14:27:13.185: INFO: 	Container weave-npc ready: true, restart count 0
Dec 20 14:27:13.185: INFO: test-pod started at 2019-12-20 14:21:48 +0000 UTC (0+1 container statuses recorded)
Dec 20 14:27:13.185: INFO: 	Container nginx ready: true, restart count 0
W1220 14:27:13.190618       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 14:27:13.246: INFO: 
Latency metrics for node iruya-node
Dec 20 14:27:13.246: INFO: 
Logging node info for node iruya-server-sfge57q7djm7
Dec 20 14:27:13.251: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-server-sfge57q7djm7,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-server-sfge57q7djm7,UID:67f2a658-4743-4118-95e7-463a23bcd212,ResourceVersion:17399768,Generation:0,CreationTimestamp:2019-08-04 08:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-server-sfge57q7djm7,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:53:00 +0000 UTC 2019-08-04 08:53:00 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2019-12-20 14:27:05 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-12-20 14:27:05 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-12-20 14:27:05 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-12-20 14:27:05 +0000 UTC 2019-08-04 08:53:09 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.2.216} {Hostname iruya-server-sfge57q7djm7}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78bacef342604a51913cae58dd95802b,SystemUUID:78BACEF3-4260-4A51-913C-AE58DD95802B,BootID:db143d3a-01b3-4483-b23e-e72adff2b28d,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:304a1c38707834062ee87df62ef329d52a8b9a3e70459565d0a396479073f54c k8s.gcr.io/kube-apiserver:v1.15.1] 206827454} {[k8s.gcr.io/kube-controller-manager@sha256:9abae95e428e228fe8f6d1630d55e79e018037460f3731312805c0f37471e4bf k8s.gcr.io/kube-controller-manager:v1.15.1] 158722622} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[k8s.gcr.io/kube-scheduler@sha256:d0ee18a9593013fbc44b1920e4930f29b664b59a3958749763cb33b57e0e8956 k8s.gcr.io/kube-scheduler:v1.15.1] 81107582} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} 
{[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Dec 20 14:27:13.251: INFO: 
Logging kubelet events for node iruya-server-sfge57q7djm7
Dec 20 14:27:13.254: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7
Dec 20 14:27:13.264: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:42 +0000 UTC (0+1 container statuses recorded)
Dec 20 14:27:13.264: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 20 14:27:13.264: INFO: kube-proxy-58v95 started at 2019-08-04 08:52:37 +0000 UTC (0+1 container statuses recorded)
Dec 20 14:27:13.264: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 20 14:27:13.264: INFO: kube-apiserver-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:39 +0000 UTC (0+1 container statuses recorded)
Dec 20 14:27:13.264: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 20 14:27:13.264: INFO: kube-scheduler-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:43 +0000 UTC (0+1 container statuses recorded)
Dec 20 14:27:13.264: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 20 14:27:13.264: INFO: coredns-5c98db65d4-xx8w8 started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Dec 20 14:27:13.264: INFO: 	Container coredns ready: true, restart count 0
Dec 20 14:27:13.264: INFO: etcd-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:38 +0000 UTC (0+1 container statuses recorded)
Dec 20 14:27:13.264: INFO: 	Container etcd ready: true, restart count 0
Dec 20 14:27:13.264: INFO: weave-net-bzl4d started at 2019-08-04 08:52:37 +0000 UTC (0+2 container statuses recorded)
Dec 20 14:27:13.264: INFO: 	Container weave ready: true, restart count 0
Dec 20 14:27:13.264: INFO: 	Container weave-npc ready: true, restart count 0
Dec 20 14:27:13.264: INFO: coredns-5c98db65d4-bm4gs started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Dec 20 14:27:13.264: INFO: 	Container coredns ready: true, restart count 0
W1220 14:27:13.268356       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 14:27:13.297: INFO: 
Latency metrics for node iruya-server-sfge57q7djm7
Dec 20 14:27:13.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7772" for this suite.
Dec 20 14:27:35.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:27:35.456: INFO: namespace statefulset-7772 deletion completed in 22.155615906s

• Failure [347.549 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

    Dec 20 14:27:00.248: Pod ss-0 expected to be re-created at least once

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769
------------------------------
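This is the one failure in the run: ss-0 was evicted but the StatefulSet controller never produced a replacement. For anyone replaying the scenario by hand, a minimal client-go sketch of the StatefulSet shape under test follows (v1.15-era, pre-context signatures; name, labels, and namespace are illustrative, and the eviction itself is induced by harness-internal machinery not shown here):

package main

import (
    appsv1 "k8s.io/api/apps/v1"
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    // Build a client from the same kubeconfig the suite logs above.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    labels := map[string]string{"baz": "blah"}
    ss := &appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "default"},
        Spec: appsv1.StatefulSetSpec{
            Replicas:    int32Ptr(1),
            ServiceName: "test",
            Selector:    &metav1.LabelSelector{MatchLabels: labels},
            Template: v1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: v1.PodSpec{
                    Containers: []v1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
    if _, err := cs.AppsV1().StatefulSets(ss.Namespace).Create(ss); err != nil {
        panic(err)
    }
    // The assertion that failed above: once ss-0 is evicted, the
    // controller must create a replacement pod also named ss-0.
}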
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:27:35.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1220 14:27:45.750438       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 14:27:45.750: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:27:45.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2578" for this suite.
Dec 20 14:27:51.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:27:52.012: INFO: namespace gc-2578 deletion completed in 6.256517302s

• [SLOW TEST:16.555 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
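The substance of "not orphaning" is the propagation policy on the delete call. A hedged client-go sketch of the same sequence (v1.15-era signatures; names, image, and namespace are placeholders rather than the generated ones above):

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    labels := map[string]string{"app": "gc-test"}
    rc := &v1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "simpletest-rc", Namespace: "default"},
        Spec: v1.ReplicationControllerSpec{
            Replicas: int32Ptr(2),
            Selector: labels,
            Template: &v1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: v1.PodSpec{
                    Containers: []v1.Container{{Name: "nginx", Image: "nginx:1.14-alpine"}},
                },
            },
        },
    }
    if _, err := cs.CoreV1().ReplicationControllers(rc.Namespace).Create(rc); err != nil {
        panic(err)
    }

    // "Not orphaning": background propagation tells the garbage collector
    // to delete the pods the RC owns once the RC itself is gone.
    policy := metav1.DeletePropagationBackground
    err = cs.CoreV1().ReplicationControllers(rc.Namespace).Delete(
        rc.Name, &metav1.DeleteOptions{PropagationPolicy: &policy})
    if err != nil {
        panic(err)
    }
}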
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:27:52.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 20 14:27:52.190: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8afcce7c-b269-4b64-a483-1841f7eeccb3" in namespace "downward-api-9229" to be "success or failure"
Dec 20 14:27:52.198: INFO: Pod "downwardapi-volume-8afcce7c-b269-4b64-a483-1841f7eeccb3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51377ms
Dec 20 14:27:54.204: INFO: Pod "downwardapi-volume-8afcce7c-b269-4b64-a483-1841f7eeccb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014535889s
Dec 20 14:27:56.210: INFO: Pod "downwardapi-volume-8afcce7c-b269-4b64-a483-1841f7eeccb3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020496356s
Dec 20 14:27:58.217: INFO: Pod "downwardapi-volume-8afcce7c-b269-4b64-a483-1841f7eeccb3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026872418s
Dec 20 14:28:00.234: INFO: Pod "downwardapi-volume-8afcce7c-b269-4b64-a483-1841f7eeccb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044774246s
STEP: Saw pod success
Dec 20 14:28:00.235: INFO: Pod "downwardapi-volume-8afcce7c-b269-4b64-a483-1841f7eeccb3" satisfied condition "success or failure"
Dec 20 14:28:00.243: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8afcce7c-b269-4b64-a483-1841f7eeccb3 container client-container: 
STEP: delete the pod
Dec 20 14:28:00.416: INFO: Waiting for pod downwardapi-volume-8afcce7c-b269-4b64-a483-1841f7eeccb3 to disappear
Dec 20 14:28:00.432: INFO: Pod downwardapi-volume-8afcce7c-b269-4b64-a483-1841f7eeccb3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:28:00.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9229" for this suite.
Dec 20 14:28:06.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:28:06.631: INFO: namespace downward-api-9229 deletion completed in 6.19315324s

• [SLOW TEST:14.618 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
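The test mounts a downwardAPI volume exposing metadata.name and asserts on the file's contents. A minimal reproduction, assuming busybox in place of the suite's mounttest helper (names and namespace are illustrative):

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-test", Namespace: "default"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:         "client-container",
                Image:        "busybox:1.29",
                Command:      []string{"cat", "/etc/podinfo/podname"},
                VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []v1.Volume{{
                Name: "podinfo",
                VolumeSource: v1.VolumeSource{
                    DownwardAPI: &v1.DownwardAPIVolumeSource{
                        Items: []v1.DownwardAPIVolumeFile{{
                            // Project the pod's own name into a file.
                            Path:     "podname",
                            FieldRef: &v1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods(pod.Namespace).Create(pod); err != nil {
        panic(err)
    }
    // The container exits after printing its own pod name, so the pod
    // reaches Succeeded, matching the "success or failure" wait above.
}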
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:28:06.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 20 14:28:14.806: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:28:14.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5402" for this suite.
Dec 20 14:28:20.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:28:20.982: INFO: namespace container-runtime-5402 deletion completed in 6.139477683s

• [SLOW TEST:14.350 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
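The checked behavior is that the kubelet reads the termination message from the container-specified path even when that path is non-default and the process runs as non-root. A sketch under those assumptions (busybox stand-in, illustrative names, v1.15-era client-go):

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "termination-message-test", Namespace: "default"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:  "termination-message-container",
                Image: "busybox:1.29",
                // Write the message to a non-default path as a non-root user.
                Command:                []string{"/bin/sh", "-c", "printf DONE > /dev/termination-custom-log"},
                TerminationMessagePath: "/dev/termination-custom-log",
                SecurityContext:        &v1.SecurityContext{RunAsUser: int64Ptr(1000)},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods(pod.Namespace).Create(pod); err != nil {
        panic(err)
    }
    // Once terminated, the kubelet surfaces the file's contents in
    // status.containerStatuses[0].state.terminated.message ("DONE"),
    // which is exactly what the Expected/match line above verifies.
}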
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:28:20.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-7e203afc-2e0f-46a3-97d5-5f727f0113ae
STEP: Creating a pod to test consume configMaps
Dec 20 14:28:21.135: INFO: Waiting up to 5m0s for pod "pod-configmaps-e10f21fa-6048-4f07-8adf-90680be6daae" in namespace "configmap-1593" to be "success or failure"
Dec 20 14:28:21.144: INFO: Pod "pod-configmaps-e10f21fa-6048-4f07-8adf-90680be6daae": Phase="Pending", Reason="", readiness=false. Elapsed: 9.022586ms
Dec 20 14:28:23.163: INFO: Pod "pod-configmaps-e10f21fa-6048-4f07-8adf-90680be6daae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027464749s
Dec 20 14:28:25.371: INFO: Pod "pod-configmaps-e10f21fa-6048-4f07-8adf-90680be6daae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.235954368s
Dec 20 14:28:27.381: INFO: Pod "pod-configmaps-e10f21fa-6048-4f07-8adf-90680be6daae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.245514805s
Dec 20 14:28:29.390: INFO: Pod "pod-configmaps-e10f21fa-6048-4f07-8adf-90680be6daae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.254454409s
Dec 20 14:28:31.405: INFO: Pod "pod-configmaps-e10f21fa-6048-4f07-8adf-90680be6daae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.270036499s
STEP: Saw pod success
Dec 20 14:28:31.406: INFO: Pod "pod-configmaps-e10f21fa-6048-4f07-8adf-90680be6daae" satisfied condition "success or failure"
Dec 20 14:28:31.419: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e10f21fa-6048-4f07-8adf-90680be6daae container configmap-volume-test: 
STEP: delete the pod
Dec 20 14:28:31.814: INFO: Waiting for pod pod-configmaps-e10f21fa-6048-4f07-8adf-90680be6daae to disappear
Dec 20 14:28:31.825: INFO: Pod pod-configmaps-e10f21fa-6048-4f07-8adf-90680be6daae no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:28:31.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1593" for this suite.
Dec 20 14:28:39.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:28:40.065: INFO: namespace configmap-1593 deletion completed in 8.229000067s

• [SLOW TEST:19.082 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
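Here the volume maps a ConfigMap key onto a custom path and the pod runs as UID 1000. A hedged sketch of the equivalent objects (illustrative names and namespace; v1.15-era client-go):

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    cm := &v1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map", Namespace: "default"},
        Data:       map[string]string{"data-1": "value-1"},
    }
    if _, err := cs.CoreV1().ConfigMaps(cm.Namespace).Create(cm); err != nil {
        panic(err)
    }

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-test", Namespace: cm.Namespace},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            // Run the whole pod as non-root, as the [LinuxOnly] variant requires.
            SecurityContext: &v1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
            Containers: []v1.Container{{
                Name:         "configmap-volume-test",
                Image:        "busybox:1.29",
                Command:      []string{"cat", "/etc/configmap-volume/path/to/data-1"},
                VolumeMounts: []v1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
            }},
            Volumes: []v1.Volume{{
                Name: "configmap-volume",
                VolumeSource: v1.VolumeSource{
                    ConfigMap: &v1.ConfigMapVolumeSource{
                        LocalObjectReference: v1.LocalObjectReference{Name: cm.Name},
                        // The "mapping": key data-1 lands at a custom relative path.
                        Items: []v1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods(pod.Namespace).Create(pod); err != nil {
        panic(err)
    }
}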
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:28:40.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Dec 20 14:28:40.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2678 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 20 14:28:49.877: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 20 14:28:49.877: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:28:51.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2678" for this suite.
Dec 20 14:28:57.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:28:58.029: INFO: namespace kubectl-2678 deletion completed in 6.131086262s

• [SLOW TEST:17.964 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
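kubectl's --rm does the attach-and-delete dance client-side; API-wise the test boils down to creating a batch/v1 Job and deleting it afterwards. A sketch of just that API surface (illustrative names; the stdin attach itself is omitted):

package main

import (
    batchv1 "k8s.io/api/batch/v1"
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    job := &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job", Namespace: "default"},
        Spec: batchv1.JobSpec{
            Template: v1.PodTemplateSpec{
                Spec: v1.PodSpec{
                    RestartPolicy: v1.RestartPolicyOnFailure,
                    Containers: []v1.Container{{
                        Name:    "e2e-test-rm-busybox-job",
                        Image:   "docker.io/library/busybox:1.29",
                        Command: []string{"sh", "-c", "cat && echo 'stdin closed'"},
                        Stdin:   true,
                    }},
                },
            },
        },
    }
    if _, err := cs.BatchV1().Jobs(job.Namespace).Create(job); err != nil {
        panic(err)
    }

    // The --rm cleanup step: foreground propagation waits for the Job's
    // pods to go away before the Job object itself is removed.
    policy := metav1.DeletePropagationForeground
    if err := cs.BatchV1().Jobs(job.Namespace).Delete(
        job.Name, &metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
        panic(err)
    }
}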
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:28:58.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 20 14:29:24.139: INFO: Container started at 2019-12-20 14:29:04 +0000 UTC, pod became ready at 2019-12-20 14:29:22 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:29:24.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8008" for this suite.
Dec 20 14:29:46.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:29:46.389: INFO: namespace container-probe-8008 deletion completed in 22.243123427s

• [SLOW TEST:48.360 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
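What the assertion above checks is that readiness does not flip before initialDelaySeconds and that the restart count stays at zero. A sketch of a pod carrying such a probe (nginx as a stand-in server; note the Handler field name is the v1.15-era one, renamed ProbeHandler in much later releases):

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-webserver", Namespace: "default"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:  "test-webserver",
                Image: "nginx:1.14-alpine",
                ReadinessProbe: &v1.Probe{
                    Handler: v1.Handler{
                        HTTPGet: &v1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
                    },
                    // The kubelet must not report Ready before this delay,
                    // which is the ~20s gap logged above (started 14:29:04,
                    // ready 14:29:22).
                    InitialDelaySeconds: 30,
                    PeriodSeconds:       5,
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods(pod.Namespace).Create(pod); err != nil {
        panic(err)
    }
}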
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:29:46.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 20 14:29:46.516: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:30:03.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4872" for this suite.
Dec 20 14:30:09.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:30:09.203: INFO: namespace pods-4872 deletion completed in 6.168119486s

• [SLOW TEST:22.812 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
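The "setting up watch" step is the interesting part: the watch is opened before the pod is submitted so that creation, the termination notice, and deletion are all observed in order. A minimal watch loop following the same pattern (illustrative selector and namespace):

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Start watching before submitting the pod so no event is missed.
    w, err := cs.CoreV1().Pods("default").Watch(metav1.ListOptions{
        LabelSelector: "name=pod-submit-remove",
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()

    for ev := range w.ResultChan() {
        pod, ok := ev.Object.(*v1.Pod)
        if !ok {
            continue
        }
        fmt.Printf("%s %s phase=%s\n", ev.Type, pod.Name, pod.Status.Phase)
        if ev.Type == watch.Deleted {
            break // deletion observed, as the test asserts
        }
    }
}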
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:30:09.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8fcede56-ebf1-479e-865c-850811af0748
STEP: Creating a pod to test consume secrets
Dec 20 14:30:09.369: INFO: Waiting up to 5m0s for pod "pod-secrets-745f2cfb-faf1-481b-92b4-b0d702d7b99e" in namespace "secrets-549" to be "success or failure"
Dec 20 14:30:09.397: INFO: Pod "pod-secrets-745f2cfb-faf1-481b-92b4-b0d702d7b99e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.781597ms
Dec 20 14:30:11.413: INFO: Pod "pod-secrets-745f2cfb-faf1-481b-92b4-b0d702d7b99e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044354534s
Dec 20 14:30:13.433: INFO: Pod "pod-secrets-745f2cfb-faf1-481b-92b4-b0d702d7b99e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064017265s
Dec 20 14:30:15.449: INFO: Pod "pod-secrets-745f2cfb-faf1-481b-92b4-b0d702d7b99e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079745857s
Dec 20 14:30:17.463: INFO: Pod "pod-secrets-745f2cfb-faf1-481b-92b4-b0d702d7b99e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093866508s
Dec 20 14:30:19.472: INFO: Pod "pod-secrets-745f2cfb-faf1-481b-92b4-b0d702d7b99e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.10352804s
STEP: Saw pod success
Dec 20 14:30:19.472: INFO: Pod "pod-secrets-745f2cfb-faf1-481b-92b4-b0d702d7b99e" satisfied condition "success or failure"
Dec 20 14:30:19.479: INFO: Trying to get logs from node iruya-node pod pod-secrets-745f2cfb-faf1-481b-92b4-b0d702d7b99e container secret-env-test: 
STEP: delete the pod
Dec 20 14:30:19.812: INFO: Waiting for pod pod-secrets-745f2cfb-faf1-481b-92b4-b0d702d7b99e to disappear
Dec 20 14:30:19.838: INFO: Pod pod-secrets-745f2cfb-faf1-481b-92b4-b0d702d7b99e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:30:19.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-549" for this suite.
Dec 20 14:30:25.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:30:26.053: INFO: namespace secrets-549 deletion completed in 6.193590221s

• [SLOW TEST:16.849 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
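Consuming a secret through environment variables comes down to a SecretKeyRef on the container's Env. A small sketch (illustrative names; StringData spares the base64 step):

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    secret := &v1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "default"},
        StringData: map[string]string{"data-1": "value-1"},
    }
    if _, err := cs.CoreV1().Secrets(secret.Namespace).Create(secret); err != nil {
        panic(err)
    }

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env", Namespace: secret.Namespace},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:    "secret-env-test",
                Image:   "busybox:1.29",
                Command: []string{"sh", "-c", "echo $SECRET_DATA"},
                Env: []v1.EnvVar{{
                    Name: "SECRET_DATA",
                    ValueFrom: &v1.EnvVarSource{
                        SecretKeyRef: &v1.SecretKeySelector{
                            LocalObjectReference: v1.LocalObjectReference{Name: secret.Name},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods(pod.Namespace).Create(pod); err != nil {
        panic(err)
    }
}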
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:30:26.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4384
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 20 14:30:26.194: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 20 14:31:00.563: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4384 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 14:31:00.563: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 14:31:01.988: INFO: Found all expected endpoints: [netserver-0]
Dec 20 14:31:02.000: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4384 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 14:31:02.000: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 14:31:03.384: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:31:03.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4384" for this suite.
Dec 20 14:31:27.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:31:27.580: INFO: namespace pod-network-test-4384 deletion completed in 24.187980774s

• [SLOW TEST:61.527 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:31:27.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-66f49078-ccf5-4d9d-9c87-5b510f2ebe69
STEP: Creating a pod to test consume configMaps
Dec 20 14:31:27.737: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c8111992-0984-4afe-b04e-88db9a52074a" in namespace "projected-9603" to be "success or failure"
Dec 20 14:31:27.754: INFO: Pod "pod-projected-configmaps-c8111992-0984-4afe-b04e-88db9a52074a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.4371ms
Dec 20 14:31:29.764: INFO: Pod "pod-projected-configmaps-c8111992-0984-4afe-b04e-88db9a52074a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026814423s
Dec 20 14:31:31.780: INFO: Pod "pod-projected-configmaps-c8111992-0984-4afe-b04e-88db9a52074a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042387535s
Dec 20 14:31:33.791: INFO: Pod "pod-projected-configmaps-c8111992-0984-4afe-b04e-88db9a52074a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053556425s
Dec 20 14:31:35.837: INFO: Pod "pod-projected-configmaps-c8111992-0984-4afe-b04e-88db9a52074a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100051193s
Dec 20 14:31:37.847: INFO: Pod "pod-projected-configmaps-c8111992-0984-4afe-b04e-88db9a52074a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109902244s
STEP: Saw pod success
Dec 20 14:31:37.847: INFO: Pod "pod-projected-configmaps-c8111992-0984-4afe-b04e-88db9a52074a" satisfied condition "success or failure"
Dec 20 14:31:37.858: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-c8111992-0984-4afe-b04e-88db9a52074a container projected-configmap-volume-test: 
STEP: delete the pod
Dec 20 14:31:38.177: INFO: Waiting for pod pod-projected-configmaps-c8111992-0984-4afe-b04e-88db9a52074a to disappear
Dec 20 14:31:38.186: INFO: Pod pod-projected-configmaps-c8111992-0984-4afe-b04e-88db9a52074a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:31:38.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9603" for this suite.
Dec 20 14:31:46.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:31:46.447: INFO: namespace projected-9603 deletion completed in 8.255954745s

• [SLOW TEST:18.867 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:31:46.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-34c60ea4-5c37-48d7-b529-e9d53b7d6fd2
STEP: Creating a pod to test consume secrets
Dec 20 14:31:46.655: INFO: Waiting up to 5m0s for pod "pod-secrets-93e29814-44dc-4381-9b2b-0e9852d4be44" in namespace "secrets-4234" to be "success or failure"
Dec 20 14:31:46.666: INFO: Pod "pod-secrets-93e29814-44dc-4381-9b2b-0e9852d4be44": Phase="Pending", Reason="", readiness=false. Elapsed: 10.495805ms
Dec 20 14:31:48.675: INFO: Pod "pod-secrets-93e29814-44dc-4381-9b2b-0e9852d4be44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019512481s
Dec 20 14:31:50.688: INFO: Pod "pod-secrets-93e29814-44dc-4381-9b2b-0e9852d4be44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03314014s
Dec 20 14:31:52.696: INFO: Pod "pod-secrets-93e29814-44dc-4381-9b2b-0e9852d4be44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041299982s
Dec 20 14:31:54.714: INFO: Pod "pod-secrets-93e29814-44dc-4381-9b2b-0e9852d4be44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058679942s
STEP: Saw pod success
Dec 20 14:31:54.714: INFO: Pod "pod-secrets-93e29814-44dc-4381-9b2b-0e9852d4be44" satisfied condition "success or failure"
Dec 20 14:31:54.720: INFO: Trying to get logs from node iruya-node pod pod-secrets-93e29814-44dc-4381-9b2b-0e9852d4be44 container secret-volume-test: 
STEP: delete the pod
Dec 20 14:31:54.890: INFO: Waiting for pod pod-secrets-93e29814-44dc-4381-9b2b-0e9852d4be44 to disappear
Dec 20 14:31:54.938: INFO: Pod pod-secrets-93e29814-44dc-4381-9b2b-0e9852d4be44 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:31:54.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4234" for this suite.
Dec 20 14:32:00.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:32:01.173: INFO: namespace secrets-4234 deletion completed in 6.226305721s

• [SLOW TEST:14.726 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
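The volume variant of the same idea: the secret's keys are projected as files under the mount path. A sketch with an explicit defaultMode (illustrative names and namespace):

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    secret := &v1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "default"},
        StringData: map[string]string{"data-1": "value-1"},
    }
    if _, err := cs.CoreV1().Secrets(secret.Namespace).Create(secret); err != nil {
        panic(err)
    }

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-volume", Namespace: secret.Namespace},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:    "secret-volume-test",
                Image:   "busybox:1.29",
                Command: []string{"cat", "/etc/secret-volume/data-1"},
                VolumeMounts: []v1.VolumeMount{{
                    Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
                }},
            }},
            Volumes: []v1.Volume{{
                Name: "secret-volume",
                VolumeSource: v1.VolumeSource{
                    Secret: &v1.SecretVolumeSource{
                        SecretName:  secret.Name,
                        DefaultMode: int32Ptr(0644), // file permission bits for the projected keys
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods(pod.Namespace).Create(pod); err != nil {
        panic(err)
    }
}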
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:32:01.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-6456
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6456
STEP: Deleting pre-stop pod
Dec 20 14:32:24.431: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:32:24.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6456" for this suite.
Dec 20 14:33:03.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:33:03.380: INFO: namespace prestop-6456 deletion completed in 38.907647602s

• [SLOW TEST:62.206 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:33:03.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 20 14:33:21.616: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 14:33:21.628: INFO: Pod pod-with-prestop-http-hook still exists
Dec 20 14:33:23.629: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 14:33:23.644: INFO: Pod pod-with-prestop-http-hook still exists
Dec 20 14:33:25.628: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 14:33:25.678: INFO: Pod pod-with-prestop-http-hook still exists
Dec 20 14:33:27.629: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 14:33:27.642: INFO: Pod pod-with-prestop-http-hook still exists
Dec 20 14:33:29.628: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 14:33:29.711: INFO: Pod pod-with-prestop-http-hook still exists
Dec 20 14:33:31.628: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 14:33:31.644: INFO: Pod pod-with-prestop-http-hook still exists
Dec 20 14:33:33.628: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 14:33:33.689: INFO: Pod pod-with-prestop-http-hook still exists
Dec 20 14:33:35.629: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 14:33:35.646: INFO: Pod pod-with-prestop-http-hook still exists
Dec 20 14:33:37.629: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 14:33:37.678: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:33:37.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2324" for this suite.
Dec 20 14:33:59.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:33:59.997: INFO: namespace container-lifecycle-hook-2324 deletion completed in 22.250898829s

• [SLOW TEST:56.617 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
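A preStop httpGet hook fires during the termination grace period, which is why the pod lingers through the polling above before disappearing. A sketch of such a pod and its graceful delete (in the suite the hook targets the separate handler pod created earlier; pointing it at the container's own nginx keeps this self-contained; Handler is the v1.15-era type name):

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook", Namespace: "default"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:  "pod-with-prestop-http-hook",
                Image: "nginx:1.14-alpine",
                Lifecycle: &v1.Lifecycle{
                    PreStop: &v1.Handler{
                        // The kubelet issues this GET before sending SIGTERM.
                        HTTPGet: &v1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods(pod.Namespace).Create(pod); err != nil {
        panic(err)
    }

    // The graceful delete that triggers the hook; the pod remains visible
    // until the grace period and hook complete, as the log above shows.
    grace := int64(30)
    if err := cs.CoreV1().Pods(pod.Namespace).Delete(
        pod.Name, &metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
        panic(err)
    }
}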
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:33:59.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Dec 20 14:34:00.451: INFO: Waiting up to 5m0s for pod "client-containers-4d060f3e-988b-4989-9ce7-5a8c07d9230f" in namespace "containers-8938" to be "success or failure"
Dec 20 14:34:00.458: INFO: Pod "client-containers-4d060f3e-988b-4989-9ce7-5a8c07d9230f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.531185ms
Dec 20 14:34:02.472: INFO: Pod "client-containers-4d060f3e-988b-4989-9ce7-5a8c07d9230f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020947724s
Dec 20 14:34:04.488: INFO: Pod "client-containers-4d060f3e-988b-4989-9ce7-5a8c07d9230f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03672537s
Dec 20 14:34:06.570: INFO: Pod "client-containers-4d060f3e-988b-4989-9ce7-5a8c07d9230f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118815451s
Dec 20 14:34:08.586: INFO: Pod "client-containers-4d060f3e-988b-4989-9ce7-5a8c07d9230f": Phase="Running", Reason="", readiness=true. Elapsed: 8.134041628s
Dec 20 14:34:10.614: INFO: Pod "client-containers-4d060f3e-988b-4989-9ce7-5a8c07d9230f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.162534344s
STEP: Saw pod success
Dec 20 14:34:10.615: INFO: Pod "client-containers-4d060f3e-988b-4989-9ce7-5a8c07d9230f" satisfied condition "success or failure"
Dec 20 14:34:10.622: INFO: Trying to get logs from node iruya-node pod client-containers-4d060f3e-988b-4989-9ce7-5a8c07d9230f container test-container: 
STEP: delete the pod
Dec 20 14:34:10.784: INFO: Waiting for pod client-containers-4d060f3e-988b-4989-9ce7-5a8c07d9230f to disappear
Dec 20 14:34:10.794: INFO: Pod client-containers-4d060f3e-988b-4989-9ce7-5a8c07d9230f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:34:10.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8938" for this suite.
Dec 20 14:34:16.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:34:16.953: INFO: namespace containers-8938 deletion completed in 6.1521863s

• [SLOW TEST:16.955 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:34:16.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-741ea58e-6398-46cb-8ddf-5ebfc5444d5c
STEP: Creating a pod to test consume configMaps
Dec 20 14:34:17.116: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7b2d02d2-9f7c-4a24-9062-1fbcfa970526" in namespace "projected-7660" to be "success or failure"
Dec 20 14:34:17.120: INFO: Pod "pod-projected-configmaps-7b2d02d2-9f7c-4a24-9062-1fbcfa970526": Phase="Pending", Reason="", readiness=false. Elapsed: 3.932542ms
Dec 20 14:34:19.132: INFO: Pod "pod-projected-configmaps-7b2d02d2-9f7c-4a24-9062-1fbcfa970526": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015201961s
Dec 20 14:34:21.140: INFO: Pod "pod-projected-configmaps-7b2d02d2-9f7c-4a24-9062-1fbcfa970526": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023141714s
Dec 20 14:34:24.565: INFO: Pod "pod-projected-configmaps-7b2d02d2-9f7c-4a24-9062-1fbcfa970526": Phase="Pending", Reason="", readiness=false. Elapsed: 7.448681711s
Dec 20 14:34:26.583: INFO: Pod "pod-projected-configmaps-7b2d02d2-9f7c-4a24-9062-1fbcfa970526": Phase="Pending", Reason="", readiness=false. Elapsed: 9.466480468s
Dec 20 14:34:28.607: INFO: Pod "pod-projected-configmaps-7b2d02d2-9f7c-4a24-9062-1fbcfa970526": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.490424286s
STEP: Saw pod success
Dec 20 14:34:28.607: INFO: Pod "pod-projected-configmaps-7b2d02d2-9f7c-4a24-9062-1fbcfa970526" satisfied condition "success or failure"
Dec 20 14:34:28.626: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7b2d02d2-9f7c-4a24-9062-1fbcfa970526 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 20 14:34:29.127: INFO: Waiting for pod pod-projected-configmaps-7b2d02d2-9f7c-4a24-9062-1fbcfa970526 to disappear
Dec 20 14:34:29.135: INFO: Pod pod-projected-configmaps-7b2d02d2-9f7c-4a24-9062-1fbcfa970526 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:34:29.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7660" for this suite.
Dec 20 14:34:35.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:34:35.281: INFO: namespace projected-7660 deletion completed in 6.140893103s

• [SLOW TEST:18.328 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
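
The "with mappings as non-root" variant maps a ConfigMap key to a chosen file path inside a projected volume and reads it under a non-root UID. A sketch under assumed names (demo-cm, uid 1000):

    kubectl create configmap demo-cm --from-literal=data-1=value-1   # illustrative
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-demo            # illustrative
    spec:
      securityContext:
        runAsUser: 1000                  # the non-root part
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: docker.io/library/busybox:1.29
        command: ["cat", "/etc/projected-configmap-volume/path/to/data-1"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected-configmap-volume
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: demo-cm
              items:                     # the mapping: key -> chosen path
              - key: data-1
                path: path/to/data-1
    EOF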
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:34:35.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 14:34:35.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-1496'
Dec 20 14:34:35.568: INFO: stderr: ""
Dec 20 14:34:35.568: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 20 14:34:45.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-1496 -o json'
Dec 20 14:34:45.816: INFO: stderr: ""
Dec 20 14:34:45.816: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-20T14:34:35Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-1496\",\n        \"resourceVersion\": \"17400876\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-1496/pods/e2e-test-nginx-pod\",\n        \"uid\": \"7409b797-a1a7-426f-bf1a-8a89de6cd735\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-nl5xm\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-nl5xm\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-nl5xm\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-20T14:34:35Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-20T14:34:44Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-20T14:34:44Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-20T14:34:35Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://6bced6fd96ce2e428ec4a04743ecc85bbbd05330170eaa71ae644e79968de24b\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-20T14:34:43Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-20T14:34:35Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 20 14:34:45.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1496'
Dec 20 14:34:46.181: INFO: stderr: ""
Dec 20 14:34:46.181: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Dec 20 14:34:46.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1496'
Dec 20 14:34:53.325: INFO: stderr: ""
Dec 20 14:34:53.325: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:34:53.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1496" for this suite.
Dec 20 14:34:59.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:34:59.595: INFO: namespace kubectl-1496 deletion completed in 6.2337651s

• [SLOW TEST:24.312 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
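
The replace sequence above can be reproduced by hand: create the pod, dump it as JSON, rewrite the image field, and feed the complete object back through kubectl replace (which, unlike patch, wants the whole resource). The sed rewrite is an illustrative shortcut:

    kubectl run e2e-test-nginx-pod --generator=run-pod/v1 \
      --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod
    kubectl get pod e2e-test-nginx-pod -o json \
      | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
      | kubectl replace -f -
    kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'
    # expected: docker.io/library/busybox:1.29

Container image is one of the few pod-spec fields the API server accepts on an in-place update, which is why replace succeeds here without recreating the pod.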
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:34:59.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 20 14:34:59.703: INFO: Creating ReplicaSet my-hostname-basic-bc2f8c7f-929d-4aeb-b83c-a62bf58b4ce0
Dec 20 14:34:59.736: INFO: Pod name my-hostname-basic-bc2f8c7f-929d-4aeb-b83c-a62bf58b4ce0: Found 0 pods out of 1
Dec 20 14:35:04.751: INFO: Pod name my-hostname-basic-bc2f8c7f-929d-4aeb-b83c-a62bf58b4ce0: Found 1 pods out of 1
Dec 20 14:35:04.751: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-bc2f8c7f-929d-4aeb-b83c-a62bf58b4ce0" is running
Dec 20 14:35:08.768: INFO: Pod "my-hostname-basic-bc2f8c7f-929d-4aeb-b83c-a62bf58b4ce0-mjfjr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 14:34:59 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 14:34:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bc2f8c7f-929d-4aeb-b83c-a62bf58b4ce0]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 14:34:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bc2f8c7f-929d-4aeb-b83c-a62bf58b4ce0]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 14:34:59 +0000 UTC Reason: Message:}])
Dec 20 14:35:08.768: INFO: Trying to dial the pod
Dec 20 14:35:13.897: INFO: Controller my-hostname-basic-bc2f8c7f-929d-4aeb-b83c-a62bf58b4ce0: Got expected result from replica 1 [my-hostname-basic-bc2f8c7f-929d-4aeb-b83c-a62bf58b4ce0-mjfjr]: "my-hostname-basic-bc2f8c7f-929d-4aeb-b83c-a62bf58b4ce0-mjfjr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:35:13.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2126" for this suite.
Dec 20 14:35:19.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:35:20.055: INFO: namespace replicaset-2126 deletion completed in 6.150071834s

• [SLOW TEST:20.460 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
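
Shape of the object under test, with illustrative names; the serve-hostname image is an assumption (any server that answers with its own pod name would do):

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: my-hostname-basic            # illustrative
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: my-hostname-basic
      template:
        metadata:
          labels:
            name: my-hostname-basic
        spec:
          containers:
          - name: my-hostname-basic
            image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # assumed image
            ports:
            - containerPort: 9376
    EOF
    # the test then dials each replica and expects its own pod name back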
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:35:20.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 20 14:35:20.185: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 15.893029ms)
Dec 20 14:35:20.189: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.406249ms)
Dec 20 14:35:20.194: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.089092ms)
Dec 20 14:35:20.198: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.980146ms)
Dec 20 14:35:20.202: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.550578ms)
Dec 20 14:35:20.207: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.307237ms)
Dec 20 14:35:20.214: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.187956ms)
Dec 20 14:35:20.220: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.480568ms)
Dec 20 14:35:20.227: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.086718ms)
Dec 20 14:35:20.236: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.587949ms)
Dec 20 14:35:20.242: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.424912ms)
Dec 20 14:35:20.250: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.188009ms)
Dec 20 14:35:20.255: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.218744ms)
Dec 20 14:35:20.261: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.65376ms)
Dec 20 14:35:20.288: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 27.151593ms)
Dec 20 14:35:20.295: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.72721ms)
Dec 20 14:35:20.300: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.737951ms)
Dec 20 14:35:20.307: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.860101ms)
Dec 20 14:35:20.313: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.004345ms)
Dec 20 14:35:20.319: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.275351ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:35:20.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-994" for this suite.
Dec 20 14:35:26.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:35:26.459: INFO: namespace proxy-994 deletion completed in 6.136799491s

• [SLOW TEST:6.404 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
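
Each numbered probe above is a GET through the apiserver's node proxy subresource, with the kubelet port (10250) spelled out in the node name. The same call, issued directly:

    # one probe; the test repeats this 20 times and records the latency
    kubectl get --raw "/api/v1/nodes/iruya-node:10250/proxy/logs/"

The response is the kubelet's log-directory listing, which is why the truncated bodies above all start with alternatives.log.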
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:35:26.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-f60c4e97-59aa-45c9-a5a3-98864777c67e in namespace container-probe-986
Dec 20 14:35:38.692: INFO: Started pod liveness-f60c4e97-59aa-45c9-a5a3-98864777c67e in namespace container-probe-986
STEP: checking the pod's current state and verifying that restartCount is present
Dec 20 14:35:38.697: INFO: Initial restart count of pod liveness-f60c4e97-59aa-45c9-a5a3-98864777c67e is 0
Dec 20 14:35:56.807: INFO: Restart count of pod container-probe-986/liveness-f60c4e97-59aa-45c9-a5a3-98864777c67e is now 1 (18.109779837s elapsed)
Dec 20 14:36:14.924: INFO: Restart count of pod container-probe-986/liveness-f60c4e97-59aa-45c9-a5a3-98864777c67e is now 2 (36.226106124s elapsed)
Dec 20 14:36:35.030: INFO: Restart count of pod container-probe-986/liveness-f60c4e97-59aa-45c9-a5a3-98864777c67e is now 3 (56.332755034s elapsed)
Dec 20 14:36:57.191: INFO: Restart count of pod container-probe-986/liveness-f60c4e97-59aa-45c9-a5a3-98864777c67e is now 4 (1m18.49347966s elapsed)
Dec 20 14:38:05.936: INFO: Restart count of pod container-probe-986/liveness-f60c4e97-59aa-45c9-a5a3-98864777c67e is now 5 (2m27.238153742s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:38:05.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-986" for this suite.
Dec 20 14:38:12.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:38:12.208: INFO: namespace container-probe-986 deletion completed in 6.200568359s

• [SLOW TEST:165.747 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
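
The restart counter climbs because the liveness probe starts failing once the container deletes its own health file; restartCount never decreases, which is the invariant under test. A minimal sketch of such a pod (the health-file trick is an assumed shape, not copied from the run):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-demo                # illustrative
    spec:
      containers:
      - name: liveness
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
          failureThreshold: 1
    EOF
    # restartPolicy defaults to Always, so each probe failure adds one restart
    kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'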
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:38:12.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 20 14:38:12.312: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d3cfb29-e4f5-432f-b759-5cb5c4c0af56" in namespace "downward-api-448" to be "success or failure"
Dec 20 14:38:12.325: INFO: Pod "downwardapi-volume-8d3cfb29-e4f5-432f-b759-5cb5c4c0af56": Phase="Pending", Reason="", readiness=false. Elapsed: 12.527237ms
Dec 20 14:38:14.337: INFO: Pod "downwardapi-volume-8d3cfb29-e4f5-432f-b759-5cb5c4c0af56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024238423s
Dec 20 14:38:16.542: INFO: Pod "downwardapi-volume-8d3cfb29-e4f5-432f-b759-5cb5c4c0af56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.228980239s
Dec 20 14:38:18.557: INFO: Pod "downwardapi-volume-8d3cfb29-e4f5-432f-b759-5cb5c4c0af56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.244665891s
Dec 20 14:38:20.583: INFO: Pod "downwardapi-volume-8d3cfb29-e4f5-432f-b759-5cb5c4c0af56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.270809914s
Dec 20 14:38:22.611: INFO: Pod "downwardapi-volume-8d3cfb29-e4f5-432f-b759-5cb5c4c0af56": Phase="Pending", Reason="", readiness=false. Elapsed: 10.298526294s
Dec 20 14:38:24.621: INFO: Pod "downwardapi-volume-8d3cfb29-e4f5-432f-b759-5cb5c4c0af56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.30868584s
STEP: Saw pod success
Dec 20 14:38:24.621: INFO: Pod "downwardapi-volume-8d3cfb29-e4f5-432f-b759-5cb5c4c0af56" satisfied condition "success or failure"
Dec 20 14:38:24.626: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8d3cfb29-e4f5-432f-b759-5cb5c4c0af56 container client-container: 
STEP: delete the pod
Dec 20 14:38:24.675: INFO: Waiting for pod downwardapi-volume-8d3cfb29-e4f5-432f-b759-5cb5c4c0af56 to disappear
Dec 20 14:38:24.691: INFO: Pod downwardapi-volume-8d3cfb29-e4f5-432f-b759-5cb5c4c0af56 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:38:24.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-448" for this suite.
Dec 20 14:38:30.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:38:30.915: INFO: namespace downward-api-448 deletion completed in 6.125476176s

• [SLOW TEST:18.707 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
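
A downwardAPI volume can publish a container's own CPU request as a file, which is what the client-container above reads back. Sketch with illustrative names and divisor:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-cpu-demo            # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: docker.io/library/busybox:1.29
        command: ["cat", "/etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m                # file contents: "250"
    EOF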
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:38:30.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 20 14:38:30.996: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be8cf151-e99f-426d-9139-3751596d56f2" in namespace "projected-146" to be "success or failure"
Dec 20 14:38:31.054: INFO: Pod "downwardapi-volume-be8cf151-e99f-426d-9139-3751596d56f2": Phase="Pending", Reason="", readiness=false. Elapsed: 58.528352ms
Dec 20 14:38:33.064: INFO: Pod "downwardapi-volume-be8cf151-e99f-426d-9139-3751596d56f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06858233s
Dec 20 14:38:35.073: INFO: Pod "downwardapi-volume-be8cf151-e99f-426d-9139-3751596d56f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077551658s
Dec 20 14:38:37.081: INFO: Pod "downwardapi-volume-be8cf151-e99f-426d-9139-3751596d56f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085660509s
Dec 20 14:38:39.094: INFO: Pod "downwardapi-volume-be8cf151-e99f-426d-9139-3751596d56f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.09771955s
STEP: Saw pod success
Dec 20 14:38:39.094: INFO: Pod "downwardapi-volume-be8cf151-e99f-426d-9139-3751596d56f2" satisfied condition "success or failure"
Dec 20 14:38:39.098: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-be8cf151-e99f-426d-9139-3751596d56f2 container client-container: 
STEP: delete the pod
Dec 20 14:38:39.203: INFO: Waiting for pod downwardapi-volume-be8cf151-e99f-426d-9139-3751596d56f2 to disappear
Dec 20 14:38:39.208: INFO: Pod downwardapi-volume-be8cf151-e99f-426d-9139-3751596d56f2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:38:39.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-146" for this suite.
Dec 20 14:38:45.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:38:45.387: INFO: namespace projected-146 deletion completed in 6.173548479s

• [SLOW TEST:14.471 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
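
The projected flavor wraps the same downwardAPI items in a projected volume, here exposing the memory request rather than the CPU request (illustrative sketch):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-mem-demo           # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: docker.io/library/busybox:1.29
        command: ["cat", "/etc/podinfo/mem_request"]
        resources:
          requests:
            memory: 32Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: mem_request
                resourceFieldRef:
                  containerName: client-container
                  resource: requests.memory
                  divisor: 1Mi           # file contents: "32"
    EOF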
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:38:45.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-67ab4917-0c19-4102-93e3-0593464e9b0e
STEP: Creating a pod to test consume configMaps
Dec 20 14:38:45.507: INFO: Waiting up to 5m0s for pod "pod-configmaps-796a54ca-a857-49a8-af8e-0e9b512516d8" in namespace "configmap-4856" to be "success or failure"
Dec 20 14:38:45.511: INFO: Pod "pod-configmaps-796a54ca-a857-49a8-af8e-0e9b512516d8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.907179ms
Dec 20 14:38:47.522: INFO: Pod "pod-configmaps-796a54ca-a857-49a8-af8e-0e9b512516d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014984252s
Dec 20 14:38:49.530: INFO: Pod "pod-configmaps-796a54ca-a857-49a8-af8e-0e9b512516d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022659212s
Dec 20 14:38:51.538: INFO: Pod "pod-configmaps-796a54ca-a857-49a8-af8e-0e9b512516d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030717659s
Dec 20 14:38:53.550: INFO: Pod "pod-configmaps-796a54ca-a857-49a8-af8e-0e9b512516d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042973959s
Dec 20 14:38:55.559: INFO: Pod "pod-configmaps-796a54ca-a857-49a8-af8e-0e9b512516d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051972127s
STEP: Saw pod success
Dec 20 14:38:55.559: INFO: Pod "pod-configmaps-796a54ca-a857-49a8-af8e-0e9b512516d8" satisfied condition "success or failure"
Dec 20 14:38:55.564: INFO: Trying to get logs from node iruya-node pod pod-configmaps-796a54ca-a857-49a8-af8e-0e9b512516d8 container configmap-volume-test: 
STEP: delete the pod
Dec 20 14:38:55.840: INFO: Waiting for pod pod-configmaps-796a54ca-a857-49a8-af8e-0e9b512516d8 to disappear
Dec 20 14:38:55.849: INFO: Pod pod-configmaps-796a54ca-a857-49a8-af8e-0e9b512516d8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:38:55.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4856" for this suite.
Dec 20 14:39:01.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:39:02.178: INFO: namespace configmap-4856 deletion completed in 6.293803263s

• [SLOW TEST:16.791 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
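
The plain ConfigMap-volume case: every key in the ConfigMap shows up as a file under the mount path. Minimal sketch (names illustrative); the "with mappings" variant later in the run only adds an items: list like the projected example earlier:

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-volume-demo        # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: docker.io/library/busybox:1.29
        command: ["cat", "/etc/configmap-volume/data-1"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/configmap-volume
      volumes:
      - name: cfg
        configMap:
          name: demo-cm
    EOF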
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:39:02.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 20 14:39:02.334: INFO: Waiting up to 5m0s for pod "pod-4c1c3d96-d826-4a13-b943-b12512ca762d" in namespace "emptydir-1881" to be "success or failure"
Dec 20 14:39:02.344: INFO: Pod "pod-4c1c3d96-d826-4a13-b943-b12512ca762d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.696999ms
Dec 20 14:39:04.355: INFO: Pod "pod-4c1c3d96-d826-4a13-b943-b12512ca762d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020144094s
Dec 20 14:39:06.469: INFO: Pod "pod-4c1c3d96-d826-4a13-b943-b12512ca762d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134432794s
Dec 20 14:39:08.486: INFO: Pod "pod-4c1c3d96-d826-4a13-b943-b12512ca762d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151539969s
Dec 20 14:39:11.377: INFO: Pod "pod-4c1c3d96-d826-4a13-b943-b12512ca762d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.042882276s
Dec 20 14:39:13.390: INFO: Pod "pod-4c1c3d96-d826-4a13-b943-b12512ca762d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.055304618s
STEP: Saw pod success
Dec 20 14:39:13.390: INFO: Pod "pod-4c1c3d96-d826-4a13-b943-b12512ca762d" satisfied condition "success or failure"
Dec 20 14:39:13.394: INFO: Trying to get logs from node iruya-node pod pod-4c1c3d96-d826-4a13-b943-b12512ca762d container test-container: 
STEP: delete the pod
Dec 20 14:39:13.609: INFO: Waiting for pod pod-4c1c3d96-d826-4a13-b943-b12512ca762d to disappear
Dec 20 14:39:13.619: INFO: Pod pod-4c1c3d96-d826-4a13-b943-b12512ca762d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:39:13.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1881" for this suite.
Dec 20 14:39:19.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:39:19.765: INFO: namespace emptydir-1881 deletion completed in 6.139427486s

• [SLOW TEST:17.587 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
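
The tuple in the test name (non-root,0777,default) means: run as a non-root UID, create a file with mode 0777, in an emptyDir on the default medium (node disk rather than tmpfs). A sketch:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo                # illustrative
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000                  # the non-root part
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                     # default medium; kubelet creates it world-writable
    EOF

The (root,0777,default) case a few tests below differs only in dropping the runAsUser.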
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:39:19.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 20 14:39:28.792: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7ece051f-c390-4408-8d8e-65477bcfafbe"
Dec 20 14:39:28.792: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7ece051f-c390-4408-8d8e-65477bcfafbe" in namespace "pods-4057" to be "terminated due to deadline exceeded"
Dec 20 14:39:28.797: INFO: Pod "pod-update-activedeadlineseconds-7ece051f-c390-4408-8d8e-65477bcfafbe": Phase="Running", Reason="", readiness=true. Elapsed: 4.435738ms
Dec 20 14:39:30.807: INFO: Pod "pod-update-activedeadlineseconds-7ece051f-c390-4408-8d8e-65477bcfafbe": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.014614041s
Dec 20 14:39:30.807: INFO: Pod "pod-update-activedeadlineseconds-7ece051f-c390-4408-8d8e-65477bcfafbe" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:39:30.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4057" for this suite.
Dec 20 14:39:36.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:39:37.006: INFO: namespace pods-4057 deletion completed in 6.19033819s

• [SLOW TEST:17.240 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
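
activeDeadlineSeconds is one of the few spec fields that can be changed on a live pod (it may only be introduced or shortened). Once the deadline elapses, the kubelet kills the pod and it lands in Failed with reason DeadlineExceeded, exactly the condition polled above. Illustrative commands:

    kubectl run deadline-demo --generator=run-pod/v1 \
      --image=docker.io/library/nginx:1.14-alpine      # illustrative pod
    kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
    # a few seconds later:
    kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}'
    # expected: Failed/DeadlineExceeded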
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:39:37.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 20 14:39:37.140: INFO: Waiting up to 5m0s for pod "pod-777cdbfa-81dd-4c1d-a305-24d0dedca0f9" in namespace "emptydir-7473" to be "success or failure"
Dec 20 14:39:37.153: INFO: Pod "pod-777cdbfa-81dd-4c1d-a305-24d0dedca0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.468321ms
Dec 20 14:39:39.163: INFO: Pod "pod-777cdbfa-81dd-4c1d-a305-24d0dedca0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022951028s
Dec 20 14:39:41.171: INFO: Pod "pod-777cdbfa-81dd-4c1d-a305-24d0dedca0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030561227s
Dec 20 14:39:43.180: INFO: Pod "pod-777cdbfa-81dd-4c1d-a305-24d0dedca0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039321393s
Dec 20 14:39:45.188: INFO: Pod "pod-777cdbfa-81dd-4c1d-a305-24d0dedca0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048200856s
Dec 20 14:39:47.197: INFO: Pod "pod-777cdbfa-81dd-4c1d-a305-24d0dedca0f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05663039s
STEP: Saw pod success
Dec 20 14:39:47.197: INFO: Pod "pod-777cdbfa-81dd-4c1d-a305-24d0dedca0f9" satisfied condition "success or failure"
Dec 20 14:39:47.201: INFO: Trying to get logs from node iruya-node pod pod-777cdbfa-81dd-4c1d-a305-24d0dedca0f9 container test-container: 
STEP: delete the pod
Dec 20 14:39:47.253: INFO: Waiting for pod pod-777cdbfa-81dd-4c1d-a305-24d0dedca0f9 to disappear
Dec 20 14:39:47.263: INFO: Pod pod-777cdbfa-81dd-4c1d-a305-24d0dedca0f9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:39:47.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7473" for this suite.
Dec 20 14:39:53.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:39:53.508: INFO: namespace emptydir-7473 deletion completed in 6.240500437s

• [SLOW TEST:16.502 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:39:53.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Dec 20 14:40:02.136: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3450 pod-service-account-2e3f2616-8826-4fe6-ab40-180fcbb8c137 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Dec 20 14:40:04.896: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3450 pod-service-account-2e3f2616-8826-4fe6-ab40-180fcbb8c137 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Dec 20 14:40:05.354: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3450 pod-service-account-2e3f2616-8826-4fe6-ab40-180fcbb8c137 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:40:05.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3450" for this suite.
Dec 20 14:40:11.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:40:12.028: INFO: namespace svcaccounts-3450 deletion completed in 6.219876s

• [SLOW TEST:18.519 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
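
The three exec calls above read the three files the kubelet projects from the service account's token secret at a fixed, well-known path. By hand, against an illustrative pod:

    for f in token ca.crt namespace; do
      kubectl exec pod-service-account-demo -c test -- \
        cat /var/run/secrets/kubernetes.io/serviceaccount/$f
    done

The mount itself comes from the auto-generated default-token-* secret, visible as a volumeMount in any pod that has not set automountServiceAccountToken: false.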
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:40:12.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-7ef4f7c5-2c68-44b3-8371-f0442bf49b46
STEP: Creating a pod to test consume secrets
Dec 20 14:40:12.188: INFO: Waiting up to 5m0s for pod "pod-secrets-a26c520a-1b8b-415e-a62a-691a74cdf784" in namespace "secrets-6634" to be "success or failure"
Dec 20 14:40:12.223: INFO: Pod "pod-secrets-a26c520a-1b8b-415e-a62a-691a74cdf784": Phase="Pending", Reason="", readiness=false. Elapsed: 34.946208ms
Dec 20 14:40:14.229: INFO: Pod "pod-secrets-a26c520a-1b8b-415e-a62a-691a74cdf784": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040774749s
Dec 20 14:40:16.344: INFO: Pod "pod-secrets-a26c520a-1b8b-415e-a62a-691a74cdf784": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156707207s
Dec 20 14:40:18.353: INFO: Pod "pod-secrets-a26c520a-1b8b-415e-a62a-691a74cdf784": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165652231s
Dec 20 14:40:20.364: INFO: Pod "pod-secrets-a26c520a-1b8b-415e-a62a-691a74cdf784": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176277884s
Dec 20 14:40:22.373: INFO: Pod "pod-secrets-a26c520a-1b8b-415e-a62a-691a74cdf784": Phase="Pending", Reason="", readiness=false. Elapsed: 10.184939874s
Dec 20 14:40:24.381: INFO: Pod "pod-secrets-a26c520a-1b8b-415e-a62a-691a74cdf784": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.193197157s
STEP: Saw pod success
Dec 20 14:40:24.381: INFO: Pod "pod-secrets-a26c520a-1b8b-415e-a62a-691a74cdf784" satisfied condition "success or failure"
Dec 20 14:40:24.386: INFO: Trying to get logs from node iruya-node pod pod-secrets-a26c520a-1b8b-415e-a62a-691a74cdf784 container secret-volume-test: 
STEP: delete the pod
Dec 20 14:40:25.331: INFO: Waiting for pod pod-secrets-a26c520a-1b8b-415e-a62a-691a74cdf784 to disappear
Dec 20 14:40:25.340: INFO: Pod pod-secrets-a26c520a-1b8b-415e-a62a-691a74cdf784 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:40:25.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6634" for this suite.
Dec 20 14:40:31.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:40:31.557: INFO: namespace secrets-6634 deletion completed in 6.210837656s

• [SLOW TEST:19.527 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
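
One secret may back any number of volumes in the same pod; each mount materializes the keys as files independently. Sketch with illustrative names:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-multi-volume-demo     # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
        volumeMounts:
        - name: secret-volume-1
          mountPath: /etc/secret-volume-1
        - name: secret-volume-2
          mountPath: /etc/secret-volume-2
      volumes:
      - name: secret-volume-1
        secret:
          secretName: demo-secret
      - name: secret-volume-2
        secret:
          secretName: demo-secret
    EOF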
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:40:31.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-e5e613bb-eca1-4f2b-a8cd-8851df56deef
STEP: Creating a pod to test consume configMaps
Dec 20 14:40:31.730: INFO: Waiting up to 5m0s for pod "pod-configmaps-7039917f-10aa-4aa7-8e2c-2abc40f6be78" in namespace "configmap-7053" to be "success or failure"
Dec 20 14:40:31.761: INFO: Pod "pod-configmaps-7039917f-10aa-4aa7-8e2c-2abc40f6be78": Phase="Pending", Reason="", readiness=false. Elapsed: 29.987506ms
Dec 20 14:40:33.776: INFO: Pod "pod-configmaps-7039917f-10aa-4aa7-8e2c-2abc40f6be78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045030791s
Dec 20 14:40:36.163: INFO: Pod "pod-configmaps-7039917f-10aa-4aa7-8e2c-2abc40f6be78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432585656s
Dec 20 14:40:38.175: INFO: Pod "pod-configmaps-7039917f-10aa-4aa7-8e2c-2abc40f6be78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444347069s
Dec 20 14:40:40.189: INFO: Pod "pod-configmaps-7039917f-10aa-4aa7-8e2c-2abc40f6be78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.45871253s
STEP: Saw pod success
Dec 20 14:40:40.189: INFO: Pod "pod-configmaps-7039917f-10aa-4aa7-8e2c-2abc40f6be78" satisfied condition "success or failure"
Dec 20 14:40:40.197: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7039917f-10aa-4aa7-8e2c-2abc40f6be78 container configmap-volume-test: 
STEP: delete the pod
Dec 20 14:40:40.301: INFO: Waiting for pod pod-configmaps-7039917f-10aa-4aa7-8e2c-2abc40f6be78 to disappear
Dec 20 14:40:40.317: INFO: Pod pod-configmaps-7039917f-10aa-4aa7-8e2c-2abc40f6be78 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:40:40.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7053" for this suite.
Dec 20 14:40:46.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:40:46.569: INFO: namespace configmap-7053 deletion completed in 6.24152505s

• [SLOW TEST:15.011 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:40:46.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 20 14:40:46.731: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 20 14:40:46.759: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 20 14:40:51.770: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 20 14:40:53.794: INFO: Creating deployment "test-rolling-update-deployment"
Dec 20 14:40:53.813: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 20 14:40:53.840: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 20 14:40:55.852: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 20 14:40:55.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449653, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449653, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449654, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449653, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 14:40:57.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449653, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449653, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449654, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449653, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 14:40:59.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449653, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449653, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449654, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449653, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 14:41:01.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449653, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449653, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449654, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449653, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 14:41:03.870: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 20 14:41:03.884: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6450,SelfLink:/apis/apps/v1/namespaces/deployment-6450/deployments/test-rolling-update-deployment,UID:de6f21f2-f5fd-471d-9493-f3728dea78b2,ResourceVersion:17401749,Generation:1,CreationTimestamp:2019-12-20 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-20 14:40:53 +0000 UTC 2019-12-20 14:40:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-20 14:41:02 +0000 UTC 2019-12-20 14:40:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 20 14:41:03.887: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6450,SelfLink:/apis/apps/v1/namespaces/deployment-6450/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:d61b8a39-d72b-4027-b965-bafd135ef3c8,ResourceVersion:17401739,Generation:1,CreationTimestamp:2019-12-20 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment de6f21f2-f5fd-471d-9493-f3728dea78b2 0xc00307bc37 0xc00307bc38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 20 14:41:03.887: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 20 14:41:03.888: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6450,SelfLink:/apis/apps/v1/namespaces/deployment-6450/replicasets/test-rolling-update-controller,UID:658f9e85-b0a8-408f-9fde-211490404db4,ResourceVersion:17401748,Generation:2,CreationTimestamp:2019-12-20 14:40:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment de6f21f2-f5fd-471d-9493-f3728dea78b2 0xc00307bb67 0xc00307bb68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 20 14:41:03.891: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-6ps55" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-6ps55,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6450,SelfLink:/api/v1/namespaces/deployment-6450/pods/test-rolling-update-deployment-79f6b9d75c-6ps55,UID:5bbaa510-f9c1-4c3d-846f-c15964554d65,ResourceVersion:17401738,Generation:0,CreationTimestamp:2019-12-20 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c d61b8a39-d72b-4027-b965-bafd135ef3c8 0xc0027f2577 0xc0027f2578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-n42gj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n42gj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-n42gj true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027f2710} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027f2840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:40:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:41:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:41:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-20 14:40:54 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-20 14:41:01 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://275cc87432a809c718c56416c714279f2f8c3400d1f56b64cc1406a918bca5a0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:41:03.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6450" for this suite.
Dec 20 14:41:09.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:41:10.076: INFO: namespace deployment-6450 deletion completed in 6.181758744s

• [SLOW TEST:23.507 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
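
The struct dumps above show what the rolling-update test drives: a Deployment with Strategy Type RollingUpdate and the default 25% maxUnavailable / 25% maxSurge budget, one replica of the redis image, and the adopted old ReplicaSet scaled to zero. The following is a minimal client-go sketch that builds an equivalent object; it is illustrative only: the kubeconfig path and namespace are copied from the log, error handling is reduced to panics, and the Create signature assumes a recent client-go (the v1.15-era client used here takes no context argument).

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The defaults printed in the Deployment dump above.
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}

	// Namespace copied from the log; Create takes a context in recent client-go.
	if _, err := cs.AppsV1().Deployments("deployment-6450").Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

With a RollingUpdate strategy the controller keeps both ReplicaSets alive during the transition, which is why the status dump at the top of this test briefly reports Replicas:2 with one updated and one unavailable.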
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:41:10.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 20 14:41:10.249: INFO: Waiting up to 5m0s for pod "pod-d2df3a0b-2f33-4f47-ac8b-abcc0e7d16a5" in namespace "emptydir-9924" to be "success or failure"
Dec 20 14:41:10.254: INFO: Pod "pod-d2df3a0b-2f33-4f47-ac8b-abcc0e7d16a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.552925ms
Dec 20 14:41:12.259: INFO: Pod "pod-d2df3a0b-2f33-4f47-ac8b-abcc0e7d16a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009854548s
Dec 20 14:41:14.267: INFO: Pod "pod-d2df3a0b-2f33-4f47-ac8b-abcc0e7d16a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017825266s
Dec 20 14:41:16.277: INFO: Pod "pod-d2df3a0b-2f33-4f47-ac8b-abcc0e7d16a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027780014s
Dec 20 14:41:18.292: INFO: Pod "pod-d2df3a0b-2f33-4f47-ac8b-abcc0e7d16a5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042347679s
Dec 20 14:41:20.304: INFO: Pod "pod-d2df3a0b-2f33-4f47-ac8b-abcc0e7d16a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054356199s
STEP: Saw pod success
Dec 20 14:41:20.304: INFO: Pod "pod-d2df3a0b-2f33-4f47-ac8b-abcc0e7d16a5" satisfied condition "success or failure"
Dec 20 14:41:20.310: INFO: Trying to get logs from node iruya-node pod pod-d2df3a0b-2f33-4f47-ac8b-abcc0e7d16a5 container test-container: 
STEP: delete the pod
Dec 20 14:41:20.395: INFO: Waiting for pod pod-d2df3a0b-2f33-4f47-ac8b-abcc0e7d16a5 to disappear
Dec 20 14:41:20.403: INFO: Pod pod-d2df3a0b-2f33-4f47-ac8b-abcc0e7d16a5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:41:20.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9924" for this suite.
Dec 20 14:41:26.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:41:26.672: INFO: namespace emptydir-9924 deletion completed in 6.262066081s

• [SLOW TEST:16.595 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
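
The (non-root,0666,tmpfs) case above mounts an emptyDir backed by memory rather than node disk, then has the test container create a 0666-mode file in it and read it back as a non-root user. A minimal sketch of that volume definition with k8s.io/api types; the volume name is illustrative:

package sketch

import corev1 "k8s.io/api/core/v1"

// tmpfsEmptyDir returns an emptyDir volume kept in memory (tmpfs), as used by
// the EmptyDir conformance tests; writes to it never touch node disk and the
// contents vanish with the pod.
func tmpfsEmptyDir(name string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory,
			},
		},
	}
}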
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:41:26.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5175.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5175.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5175.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5175.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 20 14:41:40.921: INFO: File wheezy_udp@dns-test-service-3.dns-5175.svc.cluster.local from pod  dns-5175/dns-test-57a3666f-f808-456c-a48b-3ee6a7d5c456 contains '' instead of 'foo.example.com.'
Dec 20 14:41:41.001: INFO: File jessie_udp@dns-test-service-3.dns-5175.svc.cluster.local from pod  dns-5175/dns-test-57a3666f-f808-456c-a48b-3ee6a7d5c456 contains '' instead of 'foo.example.com.'
Dec 20 14:41:41.001: INFO: Lookups using dns-5175/dns-test-57a3666f-f808-456c-a48b-3ee6a7d5c456 failed for: [wheezy_udp@dns-test-service-3.dns-5175.svc.cluster.local jessie_udp@dns-test-service-3.dns-5175.svc.cluster.local]

Dec 20 14:41:46.014: INFO: DNS probes using dns-test-57a3666f-f808-456c-a48b-3ee6a7d5c456 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5175.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5175.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5175.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5175.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 20 14:42:02.279: INFO: File wheezy_udp@dns-test-service-3.dns-5175.svc.cluster.local from pod  dns-5175/dns-test-9300b45d-0391-4f65-b06c-9d74a48fe0d5 contains '' instead of 'bar.example.com.'
Dec 20 14:42:02.289: INFO: File jessie_udp@dns-test-service-3.dns-5175.svc.cluster.local from pod  dns-5175/dns-test-9300b45d-0391-4f65-b06c-9d74a48fe0d5 contains '' instead of 'bar.example.com.'
Dec 20 14:42:02.289: INFO: Lookups using dns-5175/dns-test-9300b45d-0391-4f65-b06c-9d74a48fe0d5 failed for: [wheezy_udp@dns-test-service-3.dns-5175.svc.cluster.local jessie_udp@dns-test-service-3.dns-5175.svc.cluster.local]

Dec 20 14:42:07.308: INFO: File wheezy_udp@dns-test-service-3.dns-5175.svc.cluster.local from pod  dns-5175/dns-test-9300b45d-0391-4f65-b06c-9d74a48fe0d5 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 20 14:42:07.319: INFO: File jessie_udp@dns-test-service-3.dns-5175.svc.cluster.local from pod  dns-5175/dns-test-9300b45d-0391-4f65-b06c-9d74a48fe0d5 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 20 14:42:07.319: INFO: Lookups using dns-5175/dns-test-9300b45d-0391-4f65-b06c-9d74a48fe0d5 failed for: [wheezy_udp@dns-test-service-3.dns-5175.svc.cluster.local jessie_udp@dns-test-service-3.dns-5175.svc.cluster.local]

Dec 20 14:42:12.309: INFO: File wheezy_udp@dns-test-service-3.dns-5175.svc.cluster.local from pod  dns-5175/dns-test-9300b45d-0391-4f65-b06c-9d74a48fe0d5 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 20 14:42:12.319: INFO: File jessie_udp@dns-test-service-3.dns-5175.svc.cluster.local from pod  dns-5175/dns-test-9300b45d-0391-4f65-b06c-9d74a48fe0d5 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 20 14:42:12.319: INFO: Lookups using dns-5175/dns-test-9300b45d-0391-4f65-b06c-9d74a48fe0d5 failed for: [wheezy_udp@dns-test-service-3.dns-5175.svc.cluster.local jessie_udp@dns-test-service-3.dns-5175.svc.cluster.local]

Dec 20 14:42:17.321: INFO: DNS probes using dns-test-9300b45d-0391-4f65-b06c-9d74a48fe0d5 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5175.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5175.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5175.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5175.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 20 14:42:33.807: INFO: File wheezy_udp@dns-test-service-3.dns-5175.svc.cluster.local from pod  dns-5175/dns-test-e9439b8e-9f87-469e-b067-5d071286f3b0 contains '' instead of '10.103.8.143'
Dec 20 14:42:33.817: INFO: File jessie_udp@dns-test-service-3.dns-5175.svc.cluster.local from pod  dns-5175/dns-test-e9439b8e-9f87-469e-b067-5d071286f3b0 contains '' instead of '10.103.8.143'
Dec 20 14:42:33.817: INFO: Lookups using dns-5175/dns-test-e9439b8e-9f87-469e-b067-5d071286f3b0 failed for: [wheezy_udp@dns-test-service-3.dns-5175.svc.cluster.local jessie_udp@dns-test-service-3.dns-5175.svc.cluster.local]

Dec 20 14:42:38.927: INFO: DNS probes using dns-test-e9439b8e-9f87-469e-b067-5d071286f3b0 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:42:39.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5175" for this suite.
Dec 20 14:42:47.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:42:47.375: INFO: namespace dns-5175 deletion completed in 8.238680363s

• [SLOW TEST:80.702 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
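
The DNS test above creates an ExternalName service, so the in-cluster name dns-test-service-3.dns-5175.svc.cluster.local resolves as a CNAME to foo.example.com; it then repoints the service to bar.example.com and finally converts it to type ClusterIP, at which point the probes expect an A record instead (10.103.8.143 in this run). A sketch of the initial service object, with the name and namespace copied from the log:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// externalNameService returns a Service with no selector and no ClusterIP;
// cluster DNS answers lookups for it with a CNAME to the external name.
func externalNameService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "dns-test-service-3",
			Namespace: "dns-5175",
		},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}
}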
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:42:47.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 20 14:42:55.712: INFO: Waiting up to 5m0s for pod "client-envvars-bbebc067-b9d1-4f2c-98ac-3595c53e6b86" in namespace "pods-5531" to be "success or failure"
Dec 20 14:42:55.758: INFO: Pod "client-envvars-bbebc067-b9d1-4f2c-98ac-3595c53e6b86": Phase="Pending", Reason="", readiness=false. Elapsed: 45.753859ms
Dec 20 14:42:57.766: INFO: Pod "client-envvars-bbebc067-b9d1-4f2c-98ac-3595c53e6b86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054236949s
Dec 20 14:42:59.785: INFO: Pod "client-envvars-bbebc067-b9d1-4f2c-98ac-3595c53e6b86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072705856s
Dec 20 14:43:01.805: INFO: Pod "client-envvars-bbebc067-b9d1-4f2c-98ac-3595c53e6b86": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093172393s
Dec 20 14:43:03.817: INFO: Pod "client-envvars-bbebc067-b9d1-4f2c-98ac-3595c53e6b86": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105104233s
Dec 20 14:43:05.834: INFO: Pod "client-envvars-bbebc067-b9d1-4f2c-98ac-3595c53e6b86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122320938s
STEP: Saw pod success
Dec 20 14:43:05.834: INFO: Pod "client-envvars-bbebc067-b9d1-4f2c-98ac-3595c53e6b86" satisfied condition "success or failure"
Dec 20 14:43:05.847: INFO: Trying to get logs from node iruya-node pod client-envvars-bbebc067-b9d1-4f2c-98ac-3595c53e6b86 container env3cont: 
STEP: delete the pod
Dec 20 14:43:06.348: INFO: Waiting for pod client-envvars-bbebc067-b9d1-4f2c-98ac-3595c53e6b86 to disappear
Dec 20 14:43:06.359: INFO: Pod client-envvars-bbebc067-b9d1-4f2c-98ac-3595c53e6b86 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:43:06.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5531" for this suite.
Dec 20 14:43:48.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:43:48.536: INFO: namespace pods-5531 deletion completed in 42.165176225s

• [SLOW TEST:61.161 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
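
The pod env-vars test relies on the kubelet injecting, for every service that exists when a pod starts, variables of the form <SERVICE_NAME>_SERVICE_HOST and <SERVICE_NAME>_SERVICE_PORT (name upper-cased, dashes turned into underscores). The log does not show the service the test creates, so the name below is hypothetical; this is only a sketch of reading the injected variables from inside a container:

package main

import (
	"fmt"
	"os"
)

// Reads the service-discovery environment variables the kubelet injects for a
// service named "fooservice" (hypothetical name). Pods only see variables for
// services that already existed when the pod started.
func main() {
	host := os.Getenv("FOOSERVICE_SERVICE_HOST") // the service's ClusterIP
	port := os.Getenv("FOOSERVICE_SERVICE_PORT") // the service's first port
	fmt.Printf("fooservice is at %s:%s\n", host, port)
}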
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:43:48.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 20 14:43:48.611: INFO: Creating deployment "test-recreate-deployment"
Dec 20 14:43:48.683: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 20 14:43:48.735: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Dec 20 14:43:50.763: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 20 14:43:50.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 14:43:52.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 14:43:54.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 14:43:56.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712449828, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 14:43:58.776: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 20 14:43:58.788: INFO: Updating deployment test-recreate-deployment
Dec 20 14:43:58.788: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 20 14:43:59.335: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7623,SelfLink:/apis/apps/v1/namespaces/deployment-7623/deployments/test-recreate-deployment,UID:20084e07-b59e-4b1a-80a3-c8b014f256a3,ResourceVersion:17402242,Generation:2,CreationTimestamp:2019-12-20 14:43:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-20 14:43:59 +0000 UTC 2019-12-20 14:43:59 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-20 14:43:59 +0000 UTC 2019-12-20 14:43:48 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 20 14:43:59.348: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-7623,SelfLink:/apis/apps/v1/namespaces/deployment-7623/replicasets/test-recreate-deployment-5c8c9cc69d,UID:b454c182-dd79-46ab-a2ba-1f3edd1d9ec5,ResourceVersion:17402241,Generation:1,CreationTimestamp:2019-12-20 14:43:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 20084e07-b59e-4b1a-80a3-c8b014f256a3 0xc00238d237 0xc00238d238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 20 14:43:59.348: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 20 14:43:59.349: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-7623,SelfLink:/apis/apps/v1/namespaces/deployment-7623/replicasets/test-recreate-deployment-6df85df6b9,UID:36acfacf-1583-40e7-9ebc-dfdc2536dc9d,ResourceVersion:17402231,Generation:2,CreationTimestamp:2019-12-20 14:43:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 20084e07-b59e-4b1a-80a3-c8b014f256a3 0xc00238d307 0xc00238d308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 20 14:43:59.355: INFO: Pod "test-recreate-deployment-5c8c9cc69d-x2kbq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-x2kbq,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-7623,SelfLink:/api/v1/namespaces/deployment-7623/pods/test-recreate-deployment-5c8c9cc69d-x2kbq,UID:d246dc5f-557e-472d-ba3d-358384c13358,ResourceVersion:17402243,Generation:0,CreationTimestamp:2019-12-20 14:43:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d b454c182-dd79-46ab-a2ba-1f3edd1d9ec5 0xc00238dbd7 0xc00238dbd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-brjmx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-brjmx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-brjmx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00238dc50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00238dc70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:43:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:43:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:43:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:43:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-20 14:43:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:43:59.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7623" for this suite.
Dec 20 14:44:07.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:44:07.809: INFO: namespace deployment-7623 deletion completed in 8.444082742s

• [SLOW TEST:19.272 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
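
Unlike the rolling-update case earlier, the Recreate strategy exercised above scales the old ReplicaSet to zero before the new one starts, which is why the final pod dump shows the new pod still Pending while no old pods remain. The strategy differs from RollingUpdate in a single field; a sketch:

package sketch

import appsv1 "k8s.io/api/apps/v1"

// recreateStrategy kills every old pod before any new pod is created; there
// is no RollingUpdate sub-struct and no surge/unavailability budget to tune,
// so the workload is briefly down during each rollout.
func recreateStrategy() appsv1.DeploymentStrategy {
	return appsv1.DeploymentStrategy{
		Type: appsv1.RecreateDeploymentStrategyType,
	}
}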
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:44:07.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 20 14:44:08.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9721'
Dec 20 14:44:08.425: INFO: stderr: ""
Dec 20 14:44:08.425: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 20 14:44:08.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9721'
Dec 20 14:44:08.658: INFO: stderr: ""
Dec 20 14:44:08.659: INFO: stdout: "update-demo-nautilus-2kw6w update-demo-nautilus-svmxm "
Dec 20 14:44:08.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kw6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9721'
Dec 20 14:44:08.844: INFO: stderr: ""
Dec 20 14:44:08.844: INFO: stdout: ""
Dec 20 14:44:08.844: INFO: update-demo-nautilus-2kw6w is created but not running
Dec 20 14:44:13.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9721'
Dec 20 14:44:13.973: INFO: stderr: ""
Dec 20 14:44:13.973: INFO: stdout: "update-demo-nautilus-2kw6w update-demo-nautilus-svmxm "
Dec 20 14:44:13.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kw6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9721'
Dec 20 14:44:14.072: INFO: stderr: ""
Dec 20 14:44:14.073: INFO: stdout: ""
Dec 20 14:44:14.073: INFO: update-demo-nautilus-2kw6w is created but not running
Dec 20 14:44:19.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9721'
Dec 20 14:44:19.248: INFO: stderr: ""
Dec 20 14:44:19.248: INFO: stdout: "update-demo-nautilus-2kw6w update-demo-nautilus-svmxm "
Dec 20 14:44:19.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kw6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9721'
Dec 20 14:44:19.330: INFO: stderr: ""
Dec 20 14:44:19.330: INFO: stdout: ""
Dec 20 14:44:19.330: INFO: update-demo-nautilus-2kw6w is created but not running
Dec 20 14:44:24.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9721'
Dec 20 14:44:24.460: INFO: stderr: ""
Dec 20 14:44:24.460: INFO: stdout: "update-demo-nautilus-2kw6w update-demo-nautilus-svmxm "
Dec 20 14:44:24.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kw6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9721'
Dec 20 14:44:24.594: INFO: stderr: ""
Dec 20 14:44:24.594: INFO: stdout: "true"
Dec 20 14:44:24.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kw6w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9721'
Dec 20 14:44:24.683: INFO: stderr: ""
Dec 20 14:44:24.683: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 14:44:24.684: INFO: validating pod update-demo-nautilus-2kw6w
Dec 20 14:44:24.702: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 14:44:24.702: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 20 14:44:24.702: INFO: update-demo-nautilus-2kw6w is verified up and running
Dec 20 14:44:24.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-svmxm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9721'
Dec 20 14:44:24.822: INFO: stderr: ""
Dec 20 14:44:24.822: INFO: stdout: "true"
Dec 20 14:44:24.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-svmxm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9721'
Dec 20 14:44:24.994: INFO: stderr: ""
Dec 20 14:44:24.994: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 14:44:24.994: INFO: validating pod update-demo-nautilus-svmxm
Dec 20 14:44:25.047: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 14:44:25.047: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 20 14:44:25.047: INFO: update-demo-nautilus-svmxm is verified up and running
STEP: using delete to clean up resources
Dec 20 14:44:25.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9721'
Dec 20 14:44:25.202: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 20 14:44:25.202: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 20 14:44:25.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9721'
Dec 20 14:44:25.397: INFO: stderr: "No resources found.\n"
Dec 20 14:44:25.397: INFO: stdout: ""
Dec 20 14:44:25.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9721 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 20 14:44:25.495: INFO: stderr: ""
Dec 20 14:44:25.495: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:44:25.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9721" for this suite.
Dec 20 14:44:47.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:44:47.652: INFO: namespace kubectl-9721 deletion completed in 22.150868645s

• [SLOW TEST:39.843 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
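
The Update Demo test shells out to kubectl and parses stdout with go-templates, polling every five seconds until each pod reports a running update-demo container. A reduced sketch of that pattern using os/exec; the template, flags, kubeconfig path, pod name, and namespace are copied from the log, and error handling is minimal:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Polls kubectl the way the test above does: fetch one pod and evaluate a
// go-template that prints "true" only when the named container is running.
func main() {
	const ns = "kubectl-9721"
	tmpl := `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}` +
		`{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`
	for {
		out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
			"get", "pods", "update-demo-nautilus-2kw6w",
			"-o", "template", "--template", tmpl, "--namespace", ns).CombinedOutput()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			fmt.Println("pod is running")
			return
		}
		time.Sleep(5 * time.Second)
	}
}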
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:44:47.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-9066827d-e3d6-481d-914c-2605e75b1255 in namespace container-probe-3618
Dec 20 14:44:57.872: INFO: Started pod busybox-9066827d-e3d6-481d-914c-2605e75b1255 in namespace container-probe-3618
STEP: checking the pod's current state and verifying that restartCount is present
Dec 20 14:44:57.878: INFO: Initial restart count of pod busybox-9066827d-e3d6-481d-914c-2605e75b1255 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:48:59.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3618" for this suite.
Dec 20 14:49:05.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:49:05.704: INFO: namespace container-probe-3618 deletion completed in 6.334669617s

• [SLOW TEST:258.052 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
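
The probe test above runs a busybox pod whose liveness probe execs `cat /tmp/health` and then watches restartCount stay at 0 for roughly four minutes. A sketch of such a probe with v1.15-era k8s.io/api types (newer releases renamed the embedded Handler field to ProbeHandler); the delay and threshold values are illustrative, not taken from the log:

package sketch

import corev1 "k8s.io/api/core/v1"

// execLivenessProbe runs `cat /tmp/health` inside the container; as long as
// the file exists the command exits 0, the probe passes, and the kubelet
// never restarts the container.
func execLivenessProbe() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api releases
			Exec: &corev1.ExecAction{
				Command: []string{"cat", "/tmp/health"},
			},
		},
		InitialDelaySeconds: 15, // illustrative values
		FailureThreshold:    3,
	}
}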
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:49:05.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Dec 20 14:49:05.869: INFO: Waiting up to 5m0s for pod "client-containers-268b6968-d7de-419b-a071-fc93d8bdd73c" in namespace "containers-9631" to be "success or failure"
Dec 20 14:49:05.878: INFO: Pod "client-containers-268b6968-d7de-419b-a071-fc93d8bdd73c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.948914ms
Dec 20 14:49:07.892: INFO: Pod "client-containers-268b6968-d7de-419b-a071-fc93d8bdd73c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02221505s
Dec 20 14:49:09.907: INFO: Pod "client-containers-268b6968-d7de-419b-a071-fc93d8bdd73c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037372201s
Dec 20 14:49:11.917: INFO: Pod "client-containers-268b6968-d7de-419b-a071-fc93d8bdd73c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047122423s
Dec 20 14:49:13.929: INFO: Pod "client-containers-268b6968-d7de-419b-a071-fc93d8bdd73c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059755435s
Dec 20 14:49:15.936: INFO: Pod "client-containers-268b6968-d7de-419b-a071-fc93d8bdd73c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066583823s
STEP: Saw pod success
Dec 20 14:49:15.936: INFO: Pod "client-containers-268b6968-d7de-419b-a071-fc93d8bdd73c" satisfied condition "success or failure"
Dec 20 14:49:15.939: INFO: Trying to get logs from node iruya-node pod client-containers-268b6968-d7de-419b-a071-fc93d8bdd73c container test-container: 
STEP: delete the pod
Dec 20 14:49:16.111: INFO: Waiting for pod client-containers-268b6968-d7de-419b-a071-fc93d8bdd73c to disappear
Dec 20 14:49:16.171: INFO: Pod client-containers-268b6968-d7de-419b-a071-fc93d8bdd73c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:49:16.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9631" for this suite.
Dec 20 14:49:22.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:49:22.508: INFO: namespace containers-9631 deletion completed in 6.327579801s

• [SLOW TEST:16.804 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
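
Overriding an image's default command (its Docker ENTRYPOINT) is done through the container's Command field, while Args overrides only the image's CMD; that is the behavior the test above verifies through the pod's output. A sketch with illustrative image and command values:

package sketch

import corev1 "k8s.io/api/core/v1"

// entrypointOverride replaces the image's ENTRYPOINT with its own command.
// Setting Args instead would keep the ENTRYPOINT and replace only the CMD.
func entrypointOverride() corev1.Container {
	return corev1.Container{
		Name:    "test-container",
		Image:   "docker.io/library/busybox:1.29", // illustrative image
		Command: []string{"/bin/sh", "-c", "echo overridden"},
	}
}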
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:49:22.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:49:30.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4310" for this suite.
Dec 20 14:50:16.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:50:16.915: INFO: namespace kubelet-test-4310 deletion completed in 46.197811277s

• [SLOW TEST:54.406 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
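The failed write is the point of the test above: with ReadOnlyRootFilesystem set, the runtime rejects any write to the container's root filesystem. A sketch of the relevant securityContext follows, under the same assumptions as the previous snippet (illustrative names, k8s.io/api types).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	readOnly := true
	c := corev1.Container{
		Name:  "busybox-readonly",
		Image: "busybox",
		// The write to /file is expected to fail at runtime; only mounted
		// volumes (e.g. an emptyDir) would remain writable.
		Command: []string{"/bin/sh", "-c", "echo test > /file; sleep 240"},
		SecurityContext: &corev1.SecurityContext{
			// ReadOnlyRootFilesystem is a *bool, hence the local variable.
			ReadOnlyRootFilesystem: &readOnly,
		},
	}
	fmt.Println(c.Name)
}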
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:50:16.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-1104
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1104 to expose endpoints map[]
Dec 20 14:50:17.074: INFO: Get endpoints failed (12.921374ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 20 14:50:18.083: INFO: successfully validated that service multi-endpoint-test in namespace services-1104 exposes endpoints map[] (1.02191272s elapsed)
STEP: Creating pod pod1 in namespace services-1104
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1104 to expose endpoints map[pod1:[100]]
Dec 20 14:50:22.822: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.727649955s elapsed, will retry)
Dec 20 14:50:28.020: INFO: successfully validated that service multi-endpoint-test in namespace services-1104 exposes endpoints map[pod1:[100]] (9.92530309s elapsed)
STEP: Creating pod pod2 in namespace services-1104
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1104 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 20 14:50:33.079: INFO: Unexpected endpoints: found map[45a37b16-7285-4f09-87fe-ea755c5f280d:[100]], expected map[pod1:[100] pod2:[101]] (5.054088695s elapsed, will retry)
Dec 20 14:50:36.155: INFO: successfully validated that service multi-endpoint-test in namespace services-1104 exposes endpoints map[pod1:[100] pod2:[101]] (8.130362646s elapsed)
STEP: Deleting pod pod1 in namespace services-1104
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1104 to expose endpoints map[pod2:[101]]
Dec 20 14:50:36.258: INFO: successfully validated that service multi-endpoint-test in namespace services-1104 exposes endpoints map[pod2:[101]] (90.835493ms elapsed)
STEP: Deleting pod pod2 in namespace services-1104
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1104 to expose endpoints map[]
Dec 20 14:50:37.296: INFO: successfully validated that service multi-endpoint-test in namespace services-1104 exposes endpoints map[] (1.020217876s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:50:37.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1104" for this suite.
Dec 20 14:50:59.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:50:59.630: INFO: namespace services-1104 deletion completed in 22.198596369s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:42.714 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
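In the test above, the endpoint maps are keyed by pod and list the container port each pod contributes, so map[pod1:[100] pod2:[101]] means pod1 backs one service port on container port 100 and pod2 backs the other on 101. A hedged Go sketch of such a multiport Service follows; the target port numbers 100/101 come from the log, while the selector label, port names, and service ports 80/81 are assumptions for illustration.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			// Pods carrying this label become endpoints of the service.
			Selector: map[string]string{"name": "multi-endpoint-test"},
			Ports: []corev1.ServicePort{
				// Each service port targets a named container port; a pod
				// exposing portname1 on 100 backs the first, a pod exposing
				// portname2 on 101 backs the second.
				{Name: "portname1", Port: 80, TargetPort: intstr.FromString("portname1")},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromString("portname2")},
			},
		},
	}
	fmt.Println(svc.Name, len(svc.Spec.Ports))
}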
SSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:50:59.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with the 'name' label set to pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Dec 20 14:51:08.916: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:51:09.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3925" for this suite.
Dec 20 14:51:31.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:51:32.092: INFO: namespace replicaset-3925 deletion completed in 22.134567146s

• [SLOW TEST:32.462 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
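The adopt/release behavior above is driven entirely by the label selector: a bare pod whose labels match the selector (and that has no other controller owner) is adopted and gains an ownerReference; relabeling a pod out of the selector releases it, and the ReplicaSet creates a replacement. A minimal sketch of the objects involved, with names taken from the log and everything else assumed:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption-release"}
	replicas := int32(1)
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			// Any live pod matching this selector and not owned by another
			// controller is adopted; changing a pod's "name" label to a
			// non-matching value releases it again.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.14-alpine"}},
				},
			},
		},
	}
	fmt.Println(rs.Spec.Selector.MatchLabels)
}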
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:51:32.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 20 14:51:32.161: INFO: Creating deployment "nginx-deployment"
Dec 20 14:51:32.227: INFO: Waiting for observed generation 1
Dec 20 14:51:35.359: INFO: Waiting for all required pods to come up
Dec 20 14:51:35.386: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 20 14:52:04.708: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 20 14:52:04.718: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 20 14:52:04.732: INFO: Updating deployment nginx-deployment
Dec 20 14:52:04.732: INFO: Waiting for observed generation 2
Dec 20 14:52:07.026: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 20 14:52:07.893: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 20 14:52:07.956: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 20 14:52:07.968: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 20 14:52:07.968: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 20 14:52:07.970: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 20 14:52:07.976: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 20 14:52:07.976: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 20 14:52:08.184: INFO: Updating deployment nginx-deployment
Dec 20 14:52:08.184: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 20 14:52:08.600: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 20 14:52:16.523: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
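Worked arithmetic behind the 20 and 13 being verified (derived from the values logged above and the deployment's RollingUpdate settings, not additional framework output): maxSurge=3 allows 30 + 3 = 33 total replicas during the rollout. At scale-up time the first rollout's replicaset held 8 replicas and the second's held 5, i.e. 13 in total, leaving 33 - 13 = 20 replicas to add. Proportional scaling splits those by current size: the first replicaset gets 20 * 8/13 ≈ 12 (8 + 12 = 20) and the second gets the remaining 8 (5 + 8 = 13), so the two .spec.replicas values sum back to 20 + 13 = 33.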
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 20 14:52:21.387: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-5744,SelfLink:/apis/apps/v1/namespaces/deployment-5744/deployments/nginx-deployment,UID:969f37d1-2394-4153-ad3c-e0b8819df678,ResourceVersion:17403367,Generation:3,CreationTimestamp:2019-12-20 14:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2019-12-20 14:52:08 +0000 UTC 2019-12-20 14:52:08 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-20 14:52:15 +0000 UTC 2019-12-20 14:51:32 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 20 14:52:22.918: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-5744,SelfLink:/apis/apps/v1/namespaces/deployment-5744/replicasets/nginx-deployment-55fb7cb77f,UID:b0be310a-d615-48c1-9d49-fbdf9a600883,ResourceVersion:17403357,Generation:3,CreationTimestamp:2019-12-20 14:52:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 969f37d1-2394-4153-ad3c-e0b8819df678 0xc002621bb7 0xc002621bb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 20 14:52:22.919: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 20 14:52:22.919: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-5744,SelfLink:/apis/apps/v1/namespaces/deployment-5744/replicasets/nginx-deployment-7b8c6f4498,UID:2627dcdd-41e6-4d14-bd40-168fc82d68c1,ResourceVersion:17403362,Generation:3,CreationTimestamp:2019-12-20 14:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 969f37d1-2394-4153-ad3c-e0b8819df678 0xc002621d17 0xc002621d18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 20 14:52:26.324: INFO: Pod "nginx-deployment-55fb7cb77f-2h8kp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2h8kp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-55fb7cb77f-2h8kp,UID:740a3bf9-da7f-46c6-9be0-03af09b17417,ResourceVersion:17403324,Generation:0,CreationTimestamp:2019-12-20 14:52:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b0be310a-d615-48c1-9d49-fbdf9a600883 0xc001eff357 0xc001eff358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eff3d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eff3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.324: INFO: Pod "nginx-deployment-55fb7cb77f-4bsj8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4bsj8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-55fb7cb77f-4bsj8,UID:f78af7e6-cc40-4604-9a9d-9351842cfbae,ResourceVersion:17403340,Generation:0,CreationTimestamp:2019-12-20 14:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b0be310a-d615-48c1-9d49-fbdf9a600883 0xc001eff497 0xc001eff498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eff530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eff550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.324: INFO: Pod "nginx-deployment-55fb7cb77f-79jdf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-79jdf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-55fb7cb77f-79jdf,UID:d3788579-969d-43a8-a594-2c6a976a4a28,ResourceVersion:17403290,Generation:0,CreationTimestamp:2019-12-20 14:52:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b0be310a-d615-48c1-9d49-fbdf9a600883 0xc001eff5d7 0xc001eff5d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eff650} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eff670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-20 14:52:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.325: INFO: Pod "nginx-deployment-55fb7cb77f-8sstf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8sstf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-55fb7cb77f-8sstf,UID:afef7d60-581f-4c12-83fa-47f6d5bd431c,ResourceVersion:17403289,Generation:0,CreationTimestamp:2019-12-20 14:52:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b0be310a-d615-48c1-9d49-fbdf9a600883 0xc001eff767 0xc001eff768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eff7d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eff7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-20 14:52:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.325: INFO: Pod "nginx-deployment-55fb7cb77f-98bql" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-98bql,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-55fb7cb77f-98bql,UID:1375311f-b216-438a-9767-c1fccc6f7110,ResourceVersion:17403332,Generation:0,CreationTimestamp:2019-12-20 14:52:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b0be310a-d615-48c1-9d49-fbdf9a600883 0xc001eff8c7 0xc001eff8c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eff930} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eff950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.325: INFO: Pod "nginx-deployment-55fb7cb77f-b2xnf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b2xnf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-55fb7cb77f-b2xnf,UID:6271c2fc-58b5-4e42-abd4-8f785df796ab,ResourceVersion:17403264,Generation:0,CreationTimestamp:2019-12-20 14:52:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b0be310a-d615-48c1-9d49-fbdf9a600883 0xc001eff9d7 0xc001eff9d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001effa40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001effa60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-20 14:52:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.326: INFO: Pod "nginx-deployment-55fb7cb77f-fl7t8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fl7t8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-55fb7cb77f-fl7t8,UID:d5d3b75b-d28f-4c84-906c-4a54850770fe,ResourceVersion:17403276,Generation:0,CreationTimestamp:2019-12-20 14:52:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b0be310a-d615-48c1-9d49-fbdf9a600883 0xc001effb37 0xc001effb38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001effc00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001effc50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-20 14:52:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.326: INFO: Pod "nginx-deployment-55fb7cb77f-hwxdl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hwxdl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-55fb7cb77f-hwxdl,UID:e75d35af-ac6e-40f7-8a5b-55dad62f5fa1,ResourceVersion:17403347,Generation:0,CreationTimestamp:2019-12-20 14:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b0be310a-d615-48c1-9d49-fbdf9a600883 0xc001effeb7 0xc001effeb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001efff70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001efffb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:12 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.326: INFO: Pod "nginx-deployment-55fb7cb77f-jdpj5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jdpj5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-55fb7cb77f-jdpj5,UID:1426645e-10d6-4a4c-b454-503181730ed4,ResourceVersion:17403363,Generation:0,CreationTimestamp:2019-12-20 14:52:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b0be310a-d615-48c1-9d49-fbdf9a600883 0xc002d6a067 0xc002d6a068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6a0d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6a0f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:08 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-20 14:52:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.327: INFO: Pod "nginx-deployment-55fb7cb77f-jzg6q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jzg6q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-55fb7cb77f-jzg6q,UID:358770b1-da7a-4336-8955-7437bbf96c08,ResourceVersion:17403336,Generation:0,CreationTimestamp:2019-12-20 14:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b0be310a-d615-48c1-9d49-fbdf9a600883 0xc002d6a1c7 0xc002d6a1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6a240} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6a260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.327: INFO: Pod "nginx-deployment-55fb7cb77f-l7mx4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l7mx4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-55fb7cb77f-l7mx4,UID:97f5a433-69a5-4fcc-b134-5cbf5e93e988,ResourceVersion:17403261,Generation:0,CreationTimestamp:2019-12-20 14:52:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b0be310a-d615-48c1-9d49-fbdf9a600883 0xc002d6a2e7 0xc002d6a2e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6a360} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6a380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-20 14:52:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.327: INFO: Pod "nginx-deployment-55fb7cb77f-lp4cw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lp4cw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-55fb7cb77f-lp4cw,UID:5d70dfb4-4990-45f3-a342-9baa731e1e3f,ResourceVersion:17403349,Generation:0,CreationTimestamp:2019-12-20 14:52:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b0be310a-d615-48c1-9d49-fbdf9a600883 0xc002d6a457 0xc002d6a458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6a4c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6a4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:13 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.327: INFO: Pod "nginx-deployment-55fb7cb77f-qgz62" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qgz62,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-55fb7cb77f-qgz62,UID:54befd33-cfac-4778-a0e6-5f1607d5c193,ResourceVersion:17403337,Generation:0,CreationTimestamp:2019-12-20 14:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b0be310a-d615-48c1-9d49-fbdf9a600883 0xc002d6a567 0xc002d6a568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6a5e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6a600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.328: INFO: Pod "nginx-deployment-7b8c6f4498-2rr8v" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2rr8v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-2rr8v,UID:6bc0bd17-465c-4348-bddc-6b90657b0024,ResourceVersion:17403237,Generation:0,CreationTimestamp:2019-12-20 14:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6a687 0xc002d6a688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6a6f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6a710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2019-12-20 14:51:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 14:52:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9bb5215cf5ccff1e0836619cf9a999f13e00ac3819af80e9787790046e25eb4a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.328: INFO: Pod "nginx-deployment-7b8c6f4498-2wmzx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2wmzx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-2wmzx,UID:40b3446e-a268-43ef-9aae-59409efe222f,ResourceVersion:17403216,Generation:0,CreationTimestamp:2019-12-20 14:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6a7e7 0xc002d6a7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6a860} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6a880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-20 14:51:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 14:52:00 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://58219c50611c826f98d375314be1ca906ab349b5f5bd37ab2d4409524926dca1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.328: INFO: Pod "nginx-deployment-7b8c6f4498-4ggqk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4ggqk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-4ggqk,UID:a3ca73a8-0721-4eea-aed4-10ca5d9811f3,ResourceVersion:17403210,Generation:0,CreationTimestamp:2019-12-20 14:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6a957 0xc002d6a958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6a9d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6a9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-20 14:51:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 14:51:59 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f5e767aa3abf454d13d523af8d6a823250a40f5998c34f21493234581747669c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.329: INFO: Pod "nginx-deployment-7b8c6f4498-8tj92" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8tj92,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-8tj92,UID:b348ecdc-3ded-4137-bfb6-f9566c8b6e16,ResourceVersion:17403213,Generation:0,CreationTimestamp:2019-12-20 14:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6aac7 0xc002d6aac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6ab40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6ab60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-20 14:51:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 14:51:59 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1b78a39b8e37e6b1c1e5631d014cf0b2a340c9ecaa71351de234a7f1dc6d9156}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.329: INFO: Pod "nginx-deployment-7b8c6f4498-92r5n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-92r5n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-92r5n,UID:b019bf15-b289-4050-90fb-541fa59ec329,ResourceVersion:17403338,Generation:0,CreationTimestamp:2019-12-20 14:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6ac37 0xc002d6ac38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6aca0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6acc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.329: INFO: Pod "nginx-deployment-7b8c6f4498-bjkvg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bjkvg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-bjkvg,UID:ab8ebe42-384c-4e02-b77f-3e1364c1e59c,ResourceVersion:17403230,Generation:0,CreationTimestamp:2019-12-20 14:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6ad47 0xc002d6ad48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6adb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6add0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2019-12-20 14:51:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 14:52:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b3c55d25869842f875f2952f1e3ff6b9019ca3ae1cfa3d7940997bd918eea04a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.329: INFO: Pod "nginx-deployment-7b8c6f4498-bnmts" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bnmts,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-bnmts,UID:9a2bef2e-4359-4e72-808f-94e8c1f4f952,ResourceVersion:17403196,Generation:0,CreationTimestamp:2019-12-20 14:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6aea7 0xc002d6aea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6af20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6af40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2019-12-20 14:51:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 14:52:00 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b3282d3496cd94a1345b3f5e93cd783070c3b64533c12ca139248fa530edf6cd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.330: INFO: Pod "nginx-deployment-7b8c6f4498-clnhz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-clnhz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-clnhz,UID:2ae0c13a-0f19-4791-a666-245d4f6378de,ResourceVersion:17403344,Generation:0,CreationTimestamp:2019-12-20 14:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6b017 0xc002d6b018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6b080} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6b0a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.330: INFO: Pod "nginx-deployment-7b8c6f4498-l4m57" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l4m57,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-l4m57,UID:8283fb19-8acb-48e8-9d7d-d878e58b5af2,ResourceVersion:17403354,Generation:0,CreationTimestamp:2019-12-20 14:52:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6b137 0xc002d6b138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6b1b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6b1d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:08 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-20 14:52:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.330: INFO: Pod "nginx-deployment-7b8c6f4498-l5f5x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l5f5x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-l5f5x,UID:8a4b041b-3dd3-4d48-a700-9b9db4379257,ResourceVersion:17403318,Generation:0,CreationTimestamp:2019-12-20 14:52:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6b297 0xc002d6b298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6b300} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6b320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.331: INFO: Pod "nginx-deployment-7b8c6f4498-mstft" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mstft,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-mstft,UID:05bbc668-17a4-495c-a1af-1cbd4a97cf9e,ResourceVersion:17403345,Generation:0,CreationTimestamp:2019-12-20 14:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6b3a7 0xc002d6b3a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6b410} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6b430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.331: INFO: Pod "nginx-deployment-7b8c6f4498-phkdx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-phkdx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-phkdx,UID:daaeec5b-19d3-4401-bd91-6614001d3cf7,ResourceVersion:17403223,Generation:0,CreationTimestamp:2019-12-20 14:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6b4b7 0xc002d6b4b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6b520} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6b540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-20 14:51:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 14:51:59 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://49d62207f28df2d366bd9e923b4e1c8c3dda2fbf20a6fcc32521f1fd44693245}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.332: INFO: Pod "nginx-deployment-7b8c6f4498-qnc24" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qnc24,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-qnc24,UID:248fadbc-f48c-48ea-9a55-5780c22a91bf,ResourceVersion:17403387,Generation:0,CreationTimestamp:2019-12-20 14:52:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6b617 0xc002d6b618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6b680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6b6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-20 14:52:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.332: INFO: Pod "nginx-deployment-7b8c6f4498-tsz4d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tsz4d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-tsz4d,UID:3c46e175-0b0b-4650-a23b-b63a4722f253,ResourceVersion:17403339,Generation:0,CreationTimestamp:2019-12-20 14:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6b767 0xc002d6b768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6b7e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6b800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.332: INFO: Pod "nginx-deployment-7b8c6f4498-tvkg5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tvkg5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-tvkg5,UID:0962b72f-2bc8-4d49-aec1-5da4d5000911,ResourceVersion:17403331,Generation:0,CreationTimestamp:2019-12-20 14:52:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6b887 0xc002d6b888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6b900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6b920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.332: INFO: Pod "nginx-deployment-7b8c6f4498-tx5kz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tx5kz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-tx5kz,UID:f91168e3-6929-4bb7-9604-baa3d6a051de,ResourceVersion:17403206,Generation:0,CreationTimestamp:2019-12-20 14:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6b9a7 0xc002d6b9a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6ba20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6ba40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:51:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2019-12-20 14:51:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 14:52:00 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a2ff912618d5f20a91f67dd4d9e9ec3b39a114b5b34c03da63f065cf42839640}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.333: INFO: Pod "nginx-deployment-7b8c6f4498-vlhp5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vlhp5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-vlhp5,UID:f0b8fa77-f03d-4b65-b3e0-82ce00828fd5,ResourceVersion:17403346,Generation:0,CreationTimestamp:2019-12-20 14:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6bb17 0xc002d6bb18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6bb90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6bbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.333: INFO: Pod "nginx-deployment-7b8c6f4498-wh4r7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wh4r7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-wh4r7,UID:7cb3a32f-b6bd-4b73-9ad8-3e6df4477284,ResourceVersion:17403353,Generation:0,CreationTimestamp:2019-12-20 14:52:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6bc37 0xc002d6bc38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6bca0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6bcc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:08 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-20 14:52:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.333: INFO: Pod "nginx-deployment-7b8c6f4498-wvr4r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wvr4r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-wvr4r,UID:10a0cfb2-95da-4c88-85d0-296b6518e5c4,ResourceVersion:17403366,Generation:0,CreationTimestamp:2019-12-20 14:52:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6bd87 0xc002d6bd88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6be00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6be20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-20 14:52:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 14:52:26.334: INFO: Pod "nginx-deployment-7b8c6f4498-zwqsc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zwqsc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5744,SelfLink:/api/v1/namespaces/deployment-5744/pods/nginx-deployment-7b8c6f4498-zwqsc,UID:d9867a4a-de8a-4308-9c06-8136b7489a03,ResourceVersion:17403378,Generation:0,CreationTimestamp:2019-12-20 14:52:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2627dcdd-41e6-4d14-bd40-168fc82d68c1 0xc002d6bee7 0xc002d6bee8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sxp97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxp97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sxp97 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d6bf50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d6bf70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:52:08 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-20 14:52:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:52:26.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5744" for this suite.
Dec 20 14:53:26.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:53:26.643: INFO: namespace deployment-5744 deletion completed in 57.14783184s

• [SLOW TEST:114.550 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
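
The dumps above are the tail of the proportional-scaling check: the deployment is resized while a rollout is underway, and the controller splits the new replica count across the old and new ReplicaSets in proportion to their current sizes. A minimal client-go sketch of the same maneuver, with illustrative names and recent call signatures (the v1.15-era client used here omits the context and options arguments):

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Client setup; error handling elided for brevity.
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()

	labels := map[string]string{"name": "nginx"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "nginx",
					Image: "docker.io/library/nginx:1.14-alpine",
				}}},
			},
		},
	}
	d, _ = cs.AppsV1().Deployments("default").Create(ctx, d, metav1.CreateOptions{})

	// Scale while a rolling update is still in flight: the deployment
	// controller distributes the new replica count across the old and new
	// ReplicaSets in proportion to their sizes, which is exactly what the
	// per-ReplicaSet pod dumps above are asserting.
	d.Spec.Replicas = int32Ptr(30)
	cs.AppsV1().Deployments("default").Update(ctx, d, metav1.UpdateOptions{})
}
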
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:53:26.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 20 14:53:39.110: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:53:39.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1534" for this suite.
Dec 20 14:53:45.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:53:45.303: INFO: namespace container-runtime-1534 deletion completed in 6.156401051s

• [SLOW TEST:18.660 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
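
The test relies on TerminationMessagePolicy=FallbackToLogsOnError: when a container fails without writing to /dev/termination-log, the kubelet copies the tail of its log into the termination message, which is why "DONE" shows up above. A minimal sketch under the same assumptions (busybox image and pod name are illustrative, not the test's own):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo DONE; exit 1"},
				// Nothing is written to /dev/termination-log, so on failure
				// the kubelet falls back to the tail of the container log.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})

	for {
		p, _ := cs.CoreV1().Pods("default").Get(ctx, "termination-message-demo", metav1.GetOptions{})
		if ss := p.Status.ContainerStatuses; len(ss) > 0 && ss[0].State.Terminated != nil {
			fmt.Printf("termination message: %q\n", ss[0].State.Terminated.Message) // "DONE\n"
			return
		}
		time.Sleep(2 * time.Second)
	}
}
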
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:53:45.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-9d128f00-4f04-435f-96c0-f317a8203ab6
STEP: Creating a pod to test consume secrets
Dec 20 14:53:45.445: INFO: Waiting up to 5m0s for pod "pod-secrets-9ce7717e-c60b-4fa7-bb0c-852f11c418b8" in namespace "secrets-7321" to be "success or failure"
Dec 20 14:53:45.482: INFO: Pod "pod-secrets-9ce7717e-c60b-4fa7-bb0c-852f11c418b8": Phase="Pending", Reason="", readiness=false. Elapsed: 36.571135ms
Dec 20 14:53:47.491: INFO: Pod "pod-secrets-9ce7717e-c60b-4fa7-bb0c-852f11c418b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045523303s
Dec 20 14:53:49.502: INFO: Pod "pod-secrets-9ce7717e-c60b-4fa7-bb0c-852f11c418b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056880663s
Dec 20 14:53:51.508: INFO: Pod "pod-secrets-9ce7717e-c60b-4fa7-bb0c-852f11c418b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062905983s
Dec 20 14:53:53.519: INFO: Pod "pod-secrets-9ce7717e-c60b-4fa7-bb0c-852f11c418b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073562847s
STEP: Saw pod success
Dec 20 14:53:53.519: INFO: Pod "pod-secrets-9ce7717e-c60b-4fa7-bb0c-852f11c418b8" satisfied condition "success or failure"
Dec 20 14:53:53.524: INFO: Trying to get logs from node iruya-node pod pod-secrets-9ce7717e-c60b-4fa7-bb0c-852f11c418b8 container secret-volume-test: 
STEP: delete the pod
Dec 20 14:53:53.615: INFO: Waiting for pod pod-secrets-9ce7717e-c60b-4fa7-bb0c-852f11c418b8 to disappear
Dec 20 14:53:53.644: INFO: Pod pod-secrets-9ce7717e-c60b-4fa7-bb0c-852f11c418b8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:53:53.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7321" for this suite.
Dec 20 14:53:59.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:53:59.858: INFO: namespace secrets-7321 deletion completed in 6.202116392s

• [SLOW TEST:14.554 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
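
Here the secret is projected into the pod with a key-to-path mapping and a per-item file mode, which is what "mappings and Item Mode" refers to. A sketch of the same volume shape (the names and the 0400 mode are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-map"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	cs.CoreV1().Secrets("default").Create(ctx, secret, metav1.CreateOptions{})

	mode := int32(0400) // the per-item mode, i.e. the "Item Mode" in the test name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
					SecretName: "secret-test-map",
					// The mapping: key "data-1" appears under a new path.
					Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
}
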
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:53:59.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 20 14:54:06.602: INFO: 10 pods remaining
Dec 20 14:54:06.602: INFO: 10 pods have nil DeletionTimestamp
Dec 20 14:54:06.602: INFO: 
Dec 20 14:54:07.413: INFO: 10 pods remaining
Dec 20 14:54:07.413: INFO: 2 pods have nil DeletionTimestamp
Dec 20 14:54:07.413: INFO: 
STEP: Gathering metrics
W1220 14:54:08.388075       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 14:54:08.388: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:54:08.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3817" for this suite.
Dec 20 14:54:20.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:54:20.707: INFO: namespace gc-3817 deletion completed in 12.316443848s

• [SLOW TEST:20.848 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
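
"The deleteOptions says so" means foreground cascading deletion: with PropagationPolicy=Foreground the RC stays visible, carrying a deletion timestamp, until the garbage collector has removed every pod it owns; the "N pods remaining" lines above are that countdown. A minimal sketch (names are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()

	labels := map[string]string{"name": "gc-demo"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "gc-demo-rc"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(10),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.14-alpine"}}},
			},
		},
	}
	cs.CoreV1().ReplicationControllers("default").Create(ctx, rc, metav1.CreateOptions{})

	// Foreground propagation: the RC is only removed from the API once the
	// garbage collector has deleted all of its pods; until then it lingers
	// with DeletionTimestamp set.
	fg := metav1.DeletePropagationForeground
	cs.CoreV1().ReplicationControllers("default").Delete(ctx, "gc-demo-rc", metav1.DeleteOptions{PropagationPolicy: &fg})

	for {
		got, err := cs.CoreV1().ReplicationControllers("default").Get(ctx, "gc-demo-rc", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("rc gone; all pods deleted")
			return
		}
		fmt.Println("rc still present, deletionTimestamp:", got.DeletionTimestamp)
		time.Sleep(time.Second)
	}
}
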
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:54:20.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 20 14:54:20.846: INFO: Waiting up to 5m0s for pod "pod-61f3feca-fadf-47fd-bcd8-594644cb1176" in namespace "emptydir-7470" to be "success or failure"
Dec 20 14:54:20.874: INFO: Pod "pod-61f3feca-fadf-47fd-bcd8-594644cb1176": Phase="Pending", Reason="", readiness=false. Elapsed: 27.85609ms
Dec 20 14:54:22.885: INFO: Pod "pod-61f3feca-fadf-47fd-bcd8-594644cb1176": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038828034s
Dec 20 14:54:24.946: INFO: Pod "pod-61f3feca-fadf-47fd-bcd8-594644cb1176": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100219103s
Dec 20 14:54:26.954: INFO: Pod "pod-61f3feca-fadf-47fd-bcd8-594644cb1176": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10791691s
Dec 20 14:54:28.971: INFO: Pod "pod-61f3feca-fadf-47fd-bcd8-594644cb1176": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.124626989s
STEP: Saw pod success
Dec 20 14:54:28.971: INFO: Pod "pod-61f3feca-fadf-47fd-bcd8-594644cb1176" satisfied condition "success or failure"
Dec 20 14:54:28.976: INFO: Trying to get logs from node iruya-node pod pod-61f3feca-fadf-47fd-bcd8-594644cb1176 container test-container: 
STEP: delete the pod
Dec 20 14:54:29.131: INFO: Waiting for pod pod-61f3feca-fadf-47fd-bcd8-594644cb1176 to disappear
Dec 20 14:54:29.144: INFO: Pod pod-61f3feca-fadf-47fd-bcd8-594644cb1176 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:54:29.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7470" for this suite.
Dec 20 14:54:35.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:54:35.328: INFO: namespace emptydir-7470 deletion completed in 6.178282645s

• [SLOW TEST:14.620 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
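
(root,0666,default) encodes the test parameters: run as root, expect file mode 0666, use the default emptyDir medium (node disk rather than tmpfs). A sketch of the pod shape being exercised, with busybox standing in for the test's mounttest image:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Default medium = node-local disk; corev1.StorageMediumMemory would use tmpfs.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"echo hi > /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})

	time.Sleep(10 * time.Second) // crude wait; the e2e framework polls instead
	raw, _ := cs.CoreV1().Pods("default").GetLogs("emptydir-demo", &corev1.PodLogOptions{}).DoRaw(ctx)
	fmt.Printf("%s", raw) // expect "666"
}
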
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:54:35.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 20 14:54:35.551: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 20 14:54:40.567: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 20 14:54:44.587: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 20 14:54:54.679: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-475,SelfLink:/apis/apps/v1/namespaces/deployment-475/deployments/test-cleanup-deployment,UID:cc99f3a9-afa8-4915-83c6-71b6f03e596d,ResourceVersion:17403987,Generation:1,CreationTimestamp:2019-12-20 14:54:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-20 14:54:44 +0000 UTC 2019-12-20 14:54:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-20 14:54:54 +0000 UTC 2019-12-20 14:54:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 20 14:54:54.684: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-475,SelfLink:/apis/apps/v1/namespaces/deployment-475/replicasets/test-cleanup-deployment-55bbcbc84c,UID:72c77eb0-95be-4ed8-aab6-7ffb465be34f,ResourceVersion:17403976,Generation:1,CreationTimestamp:2019-12-20 14:54:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment cc99f3a9-afa8-4915-83c6-71b6f03e596d 0xc0022811f7 0xc0022811f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 20 14:54:54.688: INFO: Pod "test-cleanup-deployment-55bbcbc84c-8k4bs" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-8k4bs,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-475,SelfLink:/api/v1/namespaces/deployment-475/pods/test-cleanup-deployment-55bbcbc84c-8k4bs,UID:e57ea804-e144-4307-892a-c30fd17fddf8,ResourceVersion:17403975,Generation:0,CreationTimestamp:2019-12-20 14:54:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 72c77eb0-95be-4ed8-aab6-7ffb465be34f 0xc0022817f7 0xc0022817f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf74x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf74x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-cf74x true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002281870} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002281890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:54:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:54:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:54:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 14:54:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-20 14:54:44 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-20 14:54:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a792138c75448d9ad8bf9cef6ba7a85655eee8f3dd5ec7801ee75b403c5edaa3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:54:54.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-475" for this suite.
Dec 20 14:55:00.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:55:00.899: INFO: namespace deployment-475 deletion completed in 6.205737083s

• [SLOW TEST:25.571 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
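
Old-ReplicaSet cleanup is driven by .spec.revisionHistoryLimit, visible as RevisionHistoryLimit:*0 in the dump above: with a limit of 0, a superseded ReplicaSet is deleted as soon as the next revision progresses. A sketch with illustrative names (any pod-template change triggers the new revision):

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()

	labels := map[string]string{"name": "cleanup-pod"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			// RevisionHistoryLimit=0 tells the controller to delete
			// superseded ReplicaSets as soon as a rollout completes,
			// instead of keeping them around for rollback.
			RevisionHistoryLimit: int32Ptr(0),
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "redis",
					Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
				}}},
			},
		},
	}
	d, _ = cs.AppsV1().Deployments("default").Create(ctx, d, metav1.CreateOptions{})

	// Any template change creates a new revision; once it progresses, the
	// old ReplicaSet is garbage-collected.
	d.Spec.Template.Spec.Containers[0].Env = []corev1.EnvVar{{Name: "ROLLOUT", Value: "2"}}
	cs.AppsV1().Deployments("default").Update(ctx, d, metav1.UpdateOptions{})
}
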
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:55:00.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 20 14:55:01.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:55:11.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7011" for this suite.
Dec 20 14:55:55.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:55:55.717: INFO: namespace pods-7011 deletion completed in 44.209863069s

• [SLOW TEST:54.816 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
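
The test dials the pods/exec subresource over a raw websocket. Programmatically, client-go's remotecommand package reaches the same endpoint (over SPDY rather than a websocket; recent versions also offer StreamWithContext). A sketch, assuming a running pod with a shell; pod and container names are illustrative:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)

	// Build a request against the pods/exec subresource; names here are
	// illustrative, not the test's own.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("default").Name("pod-exec-demo").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main",
			Command:   []string{"echo", "remote execution output"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String()) // "remote execution output"
}
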
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:55:55.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:56:07.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6565" for this suite.
Dec 20 14:56:29.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:56:29.149: INFO: namespace replication-controller-6565 deletion completed in 22.134702874s

• [SLOW TEST:33.427 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
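
Adoption works through ownerReferences: a bare pod that matches an RC's selector gets the RC written into its ownerReferences by the controller, instead of a new pod being created. A minimal sketch (names are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()

	labels := map[string]string{"name": "pod-adoption"}

	// An orphan pod: it carries the label but no ownerReferences.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.14-alpine"}}},
	}
	cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})

	// An RC whose selector matches the orphan; its controller adopts the
	// existing pod by adding itself to the pod's ownerReferences.
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-rc"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.14-alpine"}}},
			},
		},
	}
	cs.CoreV1().ReplicationControllers("default").Create(ctx, rc, metav1.CreateOptions{})

	for {
		p, _ := cs.CoreV1().Pods("default").Get(ctx, "pod-adoption", metav1.GetOptions{})
		if len(p.OwnerReferences) > 0 {
			fmt.Println("adopted by", p.OwnerReferences[0].Kind, p.OwnerReferences[0].Name)
			return
		}
		time.Sleep(time.Second)
	}
}
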
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:56:29.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1220 14:56:44.681807       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 14:56:44.681: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:56:44.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5778" for this suite.
Dec 20 14:56:58.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:56:59.070: INFO: namespace gc-5778 deletion completed in 14.371983383s

• [SLOW TEST:29.920 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
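
The setup above gives half of the first RC's pods a second owner, so when that RC is foreground-deleted the garbage collector must leave the dual-owned pods alone: an object is only collected once no remaining owner is valid. A sketch of the ownership surgery, assuming the two RCs already exist; names and the label selector are illustrative:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()
	ns := "default"

	// Assumes two ReplicationControllers already exist, analogous to the
	// test's simpletest-rc-to-be-deleted and simpletest-rc-to-stay.
	stay, _ := cs.CoreV1().ReplicationControllers(ns).Get(ctx, "rc-to-stay", metav1.GetOptions{})

	pods, _ := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: "name=rc-to-be-deleted"})
	half := pods.Items[:len(pods.Items)/2]
	for i := range half {
		p := &half[i]
		// A second valid owner: the GC must not delete the pod while any
		// owner that is not itself being deleted still references it.
		p.OwnerReferences = append(p.OwnerReferences, metav1.OwnerReference{
			APIVersion: "v1",
			Kind:       "ReplicationController",
			Name:       stay.Name,
			UID:        stay.UID,
		})
		cs.CoreV1().Pods(ns).Update(ctx, p, metav1.UpdateOptions{})
	}

	// Foreground-delete the first owner; only pods whose sole owner it was
	// are collected, while the dual-owned half survives.
	fg := metav1.DeletePropagationForeground
	cs.CoreV1().ReplicationControllers(ns).Delete(ctx, "rc-to-be-deleted", metav1.DeleteOptions{PropagationPolicy: &fg})
}
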
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:56:59.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-6121
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6121 to expose endpoints map[]
Dec 20 14:56:59.925: INFO: Get endpoints failed (10.493491ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 20 14:57:00.933: INFO: successfully validated that service endpoint-test2 in namespace services-6121 exposes endpoints map[] (1.01798199s elapsed)
STEP: Creating pod pod1 in namespace services-6121
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6121 to expose endpoints map[pod1:[80]]
Dec 20 14:57:05.168: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.218974519s elapsed, will retry)
Dec 20 14:57:10.328: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.379354334s elapsed, will retry)
Dec 20 14:57:13.373: INFO: successfully validated that service endpoint-test2 in namespace services-6121 exposes endpoints map[pod1:[80]] (12.423716468s elapsed)
STEP: Creating pod pod2 in namespace services-6121
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6121 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 20 14:57:18.339: INFO: Unexpected endpoints: found map[aeb0eb86-a2a0-49e3-9be0-dacbec54fc4e:[80]], expected map[pod1:[80] pod2:[80]] (4.956810087s elapsed, will retry)
Dec 20 14:57:21.421: INFO: successfully validated that service endpoint-test2 in namespace services-6121 exposes endpoints map[pod1:[80] pod2:[80]] (8.039076459s elapsed)
STEP: Deleting pod pod1 in namespace services-6121
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6121 to expose endpoints map[pod2:[80]]
Dec 20 14:57:22.472: INFO: successfully validated that service endpoint-test2 in namespace services-6121 exposes endpoints map[pod2:[80]] (1.035627265s elapsed)
STEP: Deleting pod pod2 in namespace services-6121
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6121 to expose endpoints map[]
Dec 20 14:57:23.509: INFO: successfully validated that service endpoint-test2 in namespace services-6121 exposes endpoints map[] (1.017659881s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:57:24.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6121" for this suite.
Dec 20 14:57:46.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:57:46.853: INFO: namespace services-6121 deletion completed in 22.127483071s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:47.782 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
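
The endpoints controller is what this test is really watching: each ready pod matching the service selector contributes an address to the service's Endpoints object, and deletions empty it again, matching the map[pod1:[80]]-style expectations above. A minimal sketch (names are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()

	labels := map[string]string{"name": "endpoint-pod"}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			Selector: labels,
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		},
	}
	cs.CoreV1().Services("default").Create(ctx, svc, metav1.CreateOptions{})

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod1", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "server",
			Image: "nginx:1.14-alpine",
			Ports: []corev1.ContainerPort{{ContainerPort: 80}},
		}}},
	}
	cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})

	// One address per ready, selector-matching pod; deleting the pod
	// removes it from the subsets again.
	for {
		ep, err := cs.CoreV1().Endpoints("default").Get(ctx, "endpoint-test2", metav1.GetOptions{})
		if err == nil && len(ep.Subsets) > 0 && len(ep.Subsets[0].Addresses) > 0 {
			fmt.Println("endpoint:", ep.Subsets[0].Addresses[0].IP, ep.Subsets[0].Ports[0].Port)
			return
		}
		time.Sleep(time.Second)
	}
}
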
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:57:46.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Dec 20 14:57:46.955: INFO: Waiting up to 5m0s for pod "var-expansion-f6c2130d-e414-4d9a-8611-9770e5acf31d" in namespace "var-expansion-5808" to be "success or failure"
Dec 20 14:57:46.965: INFO: Pod "var-expansion-f6c2130d-e414-4d9a-8611-9770e5acf31d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.531532ms
Dec 20 14:57:48.975: INFO: Pod "var-expansion-f6c2130d-e414-4d9a-8611-9770e5acf31d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020343648s
Dec 20 14:57:50.980: INFO: Pod "var-expansion-f6c2130d-e414-4d9a-8611-9770e5acf31d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025179737s
Dec 20 14:57:52.988: INFO: Pod "var-expansion-f6c2130d-e414-4d9a-8611-9770e5acf31d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03336906s
Dec 20 14:57:54.996: INFO: Pod "var-expansion-f6c2130d-e414-4d9a-8611-9770e5acf31d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041441015s
Dec 20 14:57:57.001: INFO: Pod "var-expansion-f6c2130d-e414-4d9a-8611-9770e5acf31d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.046745634s
STEP: Saw pod success
Dec 20 14:57:57.002: INFO: Pod "var-expansion-f6c2130d-e414-4d9a-8611-9770e5acf31d" satisfied condition "success or failure"
Dec 20 14:57:57.004: INFO: Trying to get logs from node iruya-node pod var-expansion-f6c2130d-e414-4d9a-8611-9770e5acf31d container dapi-container: 
STEP: delete the pod
Dec 20 14:57:59.109: INFO: Waiting for pod var-expansion-f6c2130d-e414-4d9a-8611-9770e5acf31d to disappear
Dec 20 14:57:59.117: INFO: Pod var-expansion-f6c2130d-e414-4d9a-8611-9770e5acf31d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 14:57:59.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5808" for this suite.
Dec 20 14:58:05.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 14:58:05.986: INFO: namespace var-expansion-5808 deletion completed in 6.862410874s

• [SLOW TEST:19.133 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
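
Substitution here is kubelet-side: $(VAR) references in a container's command and args are expanded from the container's env before the process starts, no shell involved. A sketch (names and values are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				Env:   []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
				// The $(TEST_VAR) reference is expanded by the kubelet from
				// the container's env before exec; stdout is "test-value".
				Command: []string{"/bin/echo"},
				Args:    []string{"$(TEST_VAR)"},
			}},
		},
	}
	cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
}
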
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 14:58:05.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 20 15:01:09.311: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 15:01:09.331: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 15:01:11.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 15:01:11.338: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 15:01:13.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 15:01:13.339: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 15:01:15.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 15:01:15.340: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 15:01:17.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 15:01:17.352: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 15:01:19.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 15:01:19.342: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 15:01:21.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 15:01:21.348: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 15:01:23.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 15:01:23.342: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 15:01:25.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 15:01:25.343: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 15:01:27.332: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 15:01:27.352: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 15:01:29.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 15:01:29.339: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 15:01:31.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 15:01:31.339: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:01:31.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5291" for this suite.
Dec 20 15:01:53.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:01:53.533: INFO: namespace container-lifecycle-hook-5291 deletion completed in 22.180403174s

• [SLOW TEST:227.546 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
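
A postStart exec hook runs inside the container immediately after it starts, and the long tail of "still exists" lines above comes from the graceful teardown of the hooked pod. A sketch of the hook wiring (names are illustrative; in client-go before v0.23 the handler type is corev1.Handler rather than LifecycleHandler):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "nginx:1.14-alpine",
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container right after it starts; the
					// pod does not become Ready until the hook completes.
					PostStart: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo poststart > /tmp/poststart"},
						},
					},
				},
			}},
		},
	}
	cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
}
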
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:01:53.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 20 15:01:53.643: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:02:07.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4940" for this suite.
Dec 20 15:02:13.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:02:13.220: INFO: namespace init-container-4940 deletion completed in 6.130566813s

• [SLOW TEST:19.686 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
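
Init containers run sequentially, each to completion, before any regular container starts; with RestartPolicy=Never a failing init container fails the pod outright. A minimal sketch (names are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Run one at a time, in order, each to completion, before the
			// regular containers start.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"sh", "-c", "true"}},
				{Name: "init2", Image: "busybox", Command: []string{"sh", "-c", "true"}},
			},
			Containers: []corev1.Container{{
				Name: "run1", Image: "busybox", Command: []string{"sh", "-c", "sleep 5"},
			}},
		},
	}
	cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})

	for {
		p, _ := cs.CoreV1().Pods("default").Get(ctx, "pod-init-demo", metav1.GetOptions{})
		done := 0
		for _, s := range p.Status.InitContainerStatuses {
			if s.State.Terminated != nil && s.State.Terminated.ExitCode == 0 {
				done++
			}
		}
		if done == 2 {
			fmt.Println("both init containers invoked and completed")
			return
		}
		time.Sleep(time.Second)
	}
}
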
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:02:13.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 20 15:02:13.418: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9346,SelfLink:/api/v1/namespaces/watch-9346/configmaps/e2e-watch-test-watch-closed,UID:90691864-4b35-4b6e-890f-971be37e4e14,ResourceVersion:17404937,Generation:0,CreationTimestamp:2019-12-20 15:02:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 20 15:02:13.419: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9346,SelfLink:/api/v1/namespaces/watch-9346/configmaps/e2e-watch-test-watch-closed,UID:90691864-4b35-4b6e-890f-971be37e4e14,ResourceVersion:17404938,Generation:0,CreationTimestamp:2019-12-20 15:02:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 20 15:02:13.538: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9346,SelfLink:/api/v1/namespaces/watch-9346/configmaps/e2e-watch-test-watch-closed,UID:90691864-4b35-4b6e-890f-971be37e4e14,ResourceVersion:17404939,Generation:0,CreationTimestamp:2019-12-20 15:02:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 20 15:02:13.539: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9346,SelfLink:/api/v1/namespaces/watch-9346/configmaps/e2e-watch-test-watch-closed,UID:90691864-4b35-4b6e-890f-971be37e4e14,ResourceVersion:17404940,Generation:0,CreationTimestamp:2019-12-20 15:02:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:02:13.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9346" for this suite.
Dec 20 15:02:19.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:02:19.752: INFO: namespace watch-9346 deletion completed in 6.201412184s

• [SLOW TEST:6.531 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
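
The key API behavior here: a watch can be re-established from the last resourceVersion the previous watch delivered, and the server replays everything that happened in between, which is why the MODIFIED (mutation: 2) and DELETED events above arrive on the second watch. A sketch (the label selector matches the test's; other names are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()
	sel := "watch-this-configmap=watch-closed-and-restarted"

	// First watch: observe two notifications, remember where we stopped.
	w, _ := cs.CoreV1().ConfigMaps("default").Watch(ctx, metav1.ListOptions{LabelSelector: sel})
	var lastRV string
	for i := 0; i < 2; i++ {
		ev := <-w.ResultChan()
		lastRV = ev.Object.(*corev1.ConfigMap).ResourceVersion
	}
	w.Stop()

	// ... the configmap is modified and deleted while no watch is open ...

	// Second watch, resumed from the last observed resourceVersion: the API
	// server replays every change made since, so nothing is missed.
	w2, _ := cs.CoreV1().ConfigMaps("default").Watch(ctx, metav1.ListOptions{
		LabelSelector:   sel,
		ResourceVersion: lastRV,
	})
	for ev := range w2.ResultChan() {
		fmt.Println(ev.Type, ev.Object.(*corev1.ConfigMap).ResourceVersion)
	}
}
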
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:02:19.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-s9g9
STEP: Creating a pod to test atomic-volume-subpath
Dec 20 15:02:19.866: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-s9g9" in namespace "subpath-5888" to be "success or failure"
Dec 20 15:02:19.902: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Pending", Reason="", readiness=false. Elapsed: 36.161138ms
Dec 20 15:02:21.914: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047815492s
Dec 20 15:02:23.935: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069026289s
Dec 20 15:02:25.941: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07468184s
Dec 20 15:02:27.949: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082585515s
Dec 20 15:02:29.956: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Running", Reason="", readiness=true. Elapsed: 10.089965134s
Dec 20 15:02:31.971: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Running", Reason="", readiness=true. Elapsed: 12.105455243s
Dec 20 15:02:33.988: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Running", Reason="", readiness=true. Elapsed: 14.122531603s
Dec 20 15:02:35.995: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Running", Reason="", readiness=true. Elapsed: 16.128716279s
Dec 20 15:02:38.012: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Running", Reason="", readiness=true. Elapsed: 18.146289597s
Dec 20 15:02:40.021: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Running", Reason="", readiness=true. Elapsed: 20.155016524s
Dec 20 15:02:42.032: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Running", Reason="", readiness=true. Elapsed: 22.16615558s
Dec 20 15:02:44.046: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Running", Reason="", readiness=true. Elapsed: 24.179611867s
Dec 20 15:02:46.052: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Running", Reason="", readiness=true. Elapsed: 26.186147885s
Dec 20 15:02:48.061: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Running", Reason="", readiness=true. Elapsed: 28.194831079s
Dec 20 15:02:50.070: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Running", Reason="", readiness=true. Elapsed: 30.20393965s
Dec 20 15:02:52.079: INFO: Pod "pod-subpath-test-configmap-s9g9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.213461554s
STEP: Saw pod success
Dec 20 15:02:52.079: INFO: Pod "pod-subpath-test-configmap-s9g9" satisfied condition "success or failure"
Dec 20 15:02:52.105: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-s9g9 container test-container-subpath-configmap-s9g9: 
STEP: delete the pod
Dec 20 15:02:52.159: INFO: Waiting for pod pod-subpath-test-configmap-s9g9 to disappear
Dec 20 15:02:52.193: INFO: Pod pod-subpath-test-configmap-s9g9 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-s9g9
Dec 20 15:02:52.193: INFO: Deleting pod "pod-subpath-test-configmap-s9g9" in namespace "subpath-5888"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:02:52.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5888" for this suite.
Dec 20 15:02:58.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:02:58.356: INFO: namespace subpath-5888 deletion completed in 6.15352462s

• [SLOW TEST:38.604 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
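
subPath mounts a single entry of a volume at the mount path instead of the whole directory; for atomic-writer volumes (configmap, secret, downward API) the kubelet resolves the symlink indirection those volumes use, which is what this test stresses. A sketch (names and paths are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo-config"},
		Data:       map[string]string{"file.txt": "contents"},
	}
	cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{})

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-demo-config"},
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/demo/file.txt"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/demo/file.txt",
					// Mount just this key of the volume, not the directory.
					SubPath: "file.txt",
				}},
			}},
		},
	}
	cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
}
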
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:02:58.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-cf46d150-2217-45de-af0e-86ffb5430ccc in namespace container-probe-3222
Dec 20 15:03:06.550: INFO: Started pod liveness-cf46d150-2217-45de-af0e-86ffb5430ccc in namespace container-probe-3222
STEP: checking the pod's current state and verifying that restartCount is present
Dec 20 15:03:06.556: INFO: Initial restart count of pod liveness-cf46d150-2217-45de-af0e-86ffb5430ccc is 0
Dec 20 15:03:30.730: INFO: Restart count of pod container-probe-3222/liveness-cf46d150-2217-45de-af0e-86ffb5430ccc is now 1 (24.173610879s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:03:30.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3222" for this suite.
Dec 20 15:03:36.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:03:37.023: INFO: namespace container-probe-3222 deletion completed in 6.235821484s

• [SLOW TEST:38.665 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
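
The probe test above waits for restartCount to tick from 0 to 1 (about 24s here) after /healthz starts failing. A minimal sketch of a pod with the same probe shape, assuming the k8s.gcr.io/liveness test image, which answers 200 on /healthz for roughly its first 10 seconds and 500 afterwards; the pod name is illustrative:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-http-demo
    spec:
      containers:
      - name: liveness
        image: k8s.gcr.io/liveness
        args: ["/server"]
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
          failureThreshold: 1    # restart on the first failed probe
    EOF

    # the kubelet kills and restarts the container once /healthz returns 500
    kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
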
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:03:37.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:04:32.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7649" for this suite.
Dec 20 15:04:38.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:04:38.744: INFO: namespace container-runtime-7649 deletion completed in 6.16850557s

• [SLOW TEST:61.720 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
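
The container names above encode the RestartPolicy being exercised: rpa, rpof and rpn read as RestartPolicy Always, OnFailure and Never, and for each the suite checks the resulting RestartCount, Phase, Ready condition and State before deleting the pod. A sketch of the Never case with illustrative names; under OnFailure the kubelet would retry until exit 0, and under Always it restarts regardless of exit code:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: exit-demo
    spec:
      restartPolicy: Never
      containers:
      - name: terminate-once
        image: busybox:1.29
        command: ["sh", "-c", "exit 0"]
    EOF

    # with restartPolicy Never and exit code 0 the pod settles in phase Succeeded
    kubectl get pod exit-demo -o jsonpath='{.status.phase}'
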
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:04:38.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 15:04:38.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-523'
Dec 20 15:04:40.993: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 20 15:04:40.994: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Dec 20 15:04:43.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-523'
Dec 20 15:04:43.212: INFO: stderr: ""
Dec 20 15:04:43.213: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:04:43.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-523" for this suite.
Dec 20 15:04:49.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:04:49.419: INFO: namespace kubectl-523 deletion completed in 6.190472807s

• [SLOW TEST:10.674 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
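
The stderr captured above is the deprecation warning for generator-based kubectl run; the replacement it points at for this case is kubectl create deployment. An equivalent modern invocation, reusing the deployment name and image from the log:

    kubectl create deployment e2e-test-nginx-deployment \
      --image=docker.io/library/nginx:1.14-alpine

    # verify the deployment and its pod, then clean up as the test teardown does
    kubectl get deployment e2e-test-nginx-deployment
    kubectl get pods -l app=e2e-test-nginx-deployment
    kubectl delete deployment e2e-test-nginx-deployment
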
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:04:49.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 15:04:49.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7694'
Dec 20 15:04:49.713: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 20 15:04:49.714: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 20 15:04:49.753: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-cp26k]
Dec 20 15:04:49.754: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-cp26k" in namespace "kubectl-7694" to be "running and ready"
Dec 20 15:04:49.764: INFO: Pod "e2e-test-nginx-rc-cp26k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.449927ms
Dec 20 15:04:51.773: INFO: Pod "e2e-test-nginx-rc-cp26k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01946577s
Dec 20 15:04:54.481: INFO: Pod "e2e-test-nginx-rc-cp26k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.727331062s
Dec 20 15:04:56.505: INFO: Pod "e2e-test-nginx-rc-cp26k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.75105248s
Dec 20 15:04:58.523: INFO: Pod "e2e-test-nginx-rc-cp26k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.769149941s
Dec 20 15:05:00.544: INFO: Pod "e2e-test-nginx-rc-cp26k": Phase="Running", Reason="", readiness=true. Elapsed: 10.790516232s
Dec 20 15:05:00.544: INFO: Pod "e2e-test-nginx-rc-cp26k" satisfied condition "running and ready"
Dec 20 15:05:00.544: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-cp26k]
Dec 20 15:05:00.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-7694'
Dec 20 15:05:00.699: INFO: stderr: ""
Dec 20 15:05:00.699: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Dec 20 15:05:00.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7694'
Dec 20 15:05:00.823: INFO: stderr: ""
Dec 20 15:05:00.823: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:05:00.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7694" for this suite.
Dec 20 15:05:23.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:05:23.276: INFO: namespace kubectl-7694 deletion completed in 22.431693429s

• [SLOW TEST:33.856 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
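
Here the deprecated --generator=run/v1 form creates a ReplicationController, and the empty stdout from kubectl logs is accepted by the test because nginx had not written anything yet. The declarative equivalent, as a sketch with illustrative names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: nginx-rc-demo
    spec:
      replicas: 1
      selector:
        app: nginx-rc-demo
      template:
        metadata:
          labels:
            app: nginx-rc-demo
        spec:
          containers:
          - name: nginx
            image: docker.io/library/nginx:1.14-alpine
    EOF

    kubectl logs rc/nginx-rc-demo    # resolves the RC to one of its pods, as above
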
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:05:23.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-0fdaa599-bec0-423f-8744-0433e7487167 in namespace container-probe-761
Dec 20 15:05:33.425: INFO: Started pod busybox-0fdaa599-bec0-423f-8744-0433e7487167 in namespace container-probe-761
STEP: checking the pod's current state and verifying that restartCount is present
Dec 20 15:05:33.428: INFO: Initial restart count of pod busybox-0fdaa599-bec0-423f-8744-0433e7487167 is 0
Dec 20 15:06:25.767: INFO: Restart count of pod container-probe-761/busybox-0fdaa599-bec0-423f-8744-0433e7487167 is now 1 (52.338625816s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:06:25.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-761" for this suite.
Dec 20 15:06:31.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:06:32.107: INFO: namespace container-probe-761 deletion completed in 6.193961109s

• [SLOW TEST:68.827 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
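
This variant replaces the HTTP GET with an exec probe. The classic shape behind the test name is a container that creates /tmp/health, removes it after a delay, and is probed with cat /tmp/health; a minimal sketch along those lines (timings and names are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-demo
    spec:
      containers:
      - name: busybox
        image: busybox:1.29
        command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5     # fails once the file is gone; the kubelet restarts the container
    EOF
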
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:06:32.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-5399/secret-test-4575c310-ba68-4be3-9b68-edb0768bb5a2
STEP: Creating a pod to test consume secrets
Dec 20 15:06:32.259: INFO: Waiting up to 5m0s for pod "pod-configmaps-0b653dcd-b0bb-4f32-acbc-ce2f06566e5a" in namespace "secrets-5399" to be "success or failure"
Dec 20 15:06:32.298: INFO: Pod "pod-configmaps-0b653dcd-b0bb-4f32-acbc-ce2f06566e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 39.412248ms
Dec 20 15:06:34.311: INFO: Pod "pod-configmaps-0b653dcd-b0bb-4f32-acbc-ce2f06566e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051827209s
Dec 20 15:06:36.421: INFO: Pod "pod-configmaps-0b653dcd-b0bb-4f32-acbc-ce2f06566e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162493538s
Dec 20 15:06:38.431: INFO: Pod "pod-configmaps-0b653dcd-b0bb-4f32-acbc-ce2f06566e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172206985s
Dec 20 15:06:40.476: INFO: Pod "pod-configmaps-0b653dcd-b0bb-4f32-acbc-ce2f06566e5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.217035608s
STEP: Saw pod success
Dec 20 15:06:40.476: INFO: Pod "pod-configmaps-0b653dcd-b0bb-4f32-acbc-ce2f06566e5a" satisfied condition "success or failure"
Dec 20 15:06:40.485: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0b653dcd-b0bb-4f32-acbc-ce2f06566e5a container env-test: 
STEP: delete the pod
Dec 20 15:06:40.656: INFO: Waiting for pod pod-configmaps-0b653dcd-b0bb-4f32-acbc-ce2f06566e5a to disappear
Dec 20 15:06:40.674: INFO: Pod pod-configmaps-0b653dcd-b0bb-4f32-acbc-ce2f06566e5a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:06:40.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5399" for this suite.
Dec 20 15:06:46.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:06:46.807: INFO: namespace secrets-5399 deletion completed in 6.123100032s

• [SLOW TEST:14.700 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
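
Unlike the volume-based secret tests, this one injects the secret through the environment. A minimal sketch of the same consumption path; secret, key and pod names are illustrative:

    kubectl create secret generic env-secret-demo --from-literal=data-1=value-1

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox:1.29
        command: ["sh", "-c", "echo $SECRET_DATA"]
        env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: env-secret-demo
              key: data-1
    EOF
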
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:06:46.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Dec 20 15:06:46.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 20 15:06:47.166: INFO: stderr: ""
Dec 20 15:06:47.166: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:06:47.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6085" for this suite.
Dec 20 15:06:53.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:06:53.365: INFO: namespace kubectl-6085 deletion completed in 6.191359755s

• [SLOW TEST:6.557 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
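
The stdout above is the newline-joined list of served group/versions; the assertion is only that the core group "v1" appears in it. The same check by hand:

    kubectl api-versions | grep -x v1 && echo "core v1 API is available"
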
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:06:53.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 20 15:06:53.482: INFO: Waiting up to 5m0s for pod "pod-b0bcc1d3-b388-4aab-af74-c2142e0727fc" in namespace "emptydir-3426" to be "success or failure"
Dec 20 15:06:53.499: INFO: Pod "pod-b0bcc1d3-b388-4aab-af74-c2142e0727fc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.023843ms
Dec 20 15:06:55.511: INFO: Pod "pod-b0bcc1d3-b388-4aab-af74-c2142e0727fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028926575s
Dec 20 15:06:57.538: INFO: Pod "pod-b0bcc1d3-b388-4aab-af74-c2142e0727fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05501586s
Dec 20 15:06:59.545: INFO: Pod "pod-b0bcc1d3-b388-4aab-af74-c2142e0727fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062647196s
Dec 20 15:07:01.560: INFO: Pod "pod-b0bcc1d3-b388-4aab-af74-c2142e0727fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077823127s
Dec 20 15:07:03.567: INFO: Pod "pod-b0bcc1d3-b388-4aab-af74-c2142e0727fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08465205s
STEP: Saw pod success
Dec 20 15:07:03.567: INFO: Pod "pod-b0bcc1d3-b388-4aab-af74-c2142e0727fc" satisfied condition "success or failure"
Dec 20 15:07:03.573: INFO: Trying to get logs from node iruya-node pod pod-b0bcc1d3-b388-4aab-af74-c2142e0727fc container test-container: 
STEP: delete the pod
Dec 20 15:07:03.793: INFO: Waiting for pod pod-b0bcc1d3-b388-4aab-af74-c2142e0727fc to disappear
Dec 20 15:07:03.812: INFO: Pod pod-b0bcc1d3-b388-4aab-af74-c2142e0727fc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:07:03.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3426" for this suite.
Dec 20 15:07:09.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:07:10.073: INFO: namespace emptydir-3426 deletion completed in 6.249134502s

• [SLOW TEST:16.707 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
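
The (root,0644,default) triple in the test name reads as: run as root, expect file mode 0644, use the default disk-backed emptyDir medium. A sketch of the same shape, writing and inspecting a file in an emptyDir volume; names are illustrative:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.29
        command: ["sh", "-c", "echo hello > /cache/f && chmod 0644 /cache/f && ls -l /cache/f"]
        volumeMounts:
        - name: cache
          mountPath: /cache
      volumes:
      - name: cache
        emptyDir: {}           # default medium; medium: Memory would back it with tmpfs
    EOF
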
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:07:10.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2777, will wait for the garbage collector to delete the pods
Dec 20 15:07:22.261: INFO: Deleting Job.batch foo took: 9.252543ms
Dec 20 15:07:22.562: INFO: Terminating Job.batch foo pods took: 300.977808ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:08:00.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2777" for this suite.
Dec 20 15:08:06.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:08:07.029: INFO: namespace job-2777 deletion completed in 6.238307639s

• [SLOW TEST:56.956 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
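
The interesting part of this test is the teardown: the framework deletes the Job and then blocks until the garbage collector has removed its pods, which accounts for the wait between 15:07:22 and 15:08:00 above. A sketch of the same sequence; the job definition is illustrative (parallelism: 2 mirrors the suite's "active pods == parallelism" check):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: delete-demo
    spec:
      parallelism: 2
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: sleeper
            image: busybox:1.29
            command: ["sleep", "600"]
    EOF

    # deleting the job cascades: the garbage collector removes the pods it owned
    kubectl delete job delete-demo
    kubectl get pods -l job-name=delete-demo   # drains as the collector catches up
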
------------------------------
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:08:07.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 20 15:08:16.326: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:08:16.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5001" for this suite.
Dec 20 15:08:22.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:08:22.666: INFO: namespace container-runtime-5001 deletion completed in 6.189910969s

• [SLOW TEST:15.636 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
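
The assertion "Expected: &{} to match Container's Termination Message" checks that the message is empty: with terminationMessagePolicy FallbackToLogsOnError, container logs are copied into the termination message only when the container fails, so a pod that succeeds reports none. A sketch; pod and container names are illustrative:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.29
        command: ["sh", "-c", "echo some log output; exit 0"]
        terminationMessagePolicy: FallbackToLogsOnError   # logs used only on failure
    EOF

    # empty for a succeeded container; a non-zero exit would surface the log tail here
    kubectl get pod termination-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
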
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:08:22.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-99e53669-6bd3-402a-9632-7a71443eefa2
STEP: Creating a pod to test consume configMaps
Dec 20 15:08:22.877: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-09bbb918-d930-4dfa-a25e-5dcfb98d5769" in namespace "projected-519" to be "success or failure"
Dec 20 15:08:22.900: INFO: Pod "pod-projected-configmaps-09bbb918-d930-4dfa-a25e-5dcfb98d5769": Phase="Pending", Reason="", readiness=false. Elapsed: 23.331493ms
Dec 20 15:08:24.911: INFO: Pod "pod-projected-configmaps-09bbb918-d930-4dfa-a25e-5dcfb98d5769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034130405s
Dec 20 15:08:26.922: INFO: Pod "pod-projected-configmaps-09bbb918-d930-4dfa-a25e-5dcfb98d5769": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044627495s
Dec 20 15:08:28.931: INFO: Pod "pod-projected-configmaps-09bbb918-d930-4dfa-a25e-5dcfb98d5769": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053600414s
Dec 20 15:08:30.954: INFO: Pod "pod-projected-configmaps-09bbb918-d930-4dfa-a25e-5dcfb98d5769": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077281636s
Dec 20 15:08:32.980: INFO: Pod "pod-projected-configmaps-09bbb918-d930-4dfa-a25e-5dcfb98d5769": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103363829s
STEP: Saw pod success
Dec 20 15:08:32.981: INFO: Pod "pod-projected-configmaps-09bbb918-d930-4dfa-a25e-5dcfb98d5769" satisfied condition "success or failure"
Dec 20 15:08:32.989: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-09bbb918-d930-4dfa-a25e-5dcfb98d5769 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 20 15:08:33.091: INFO: Waiting for pod pod-projected-configmaps-09bbb918-d930-4dfa-a25e-5dcfb98d5769 to disappear
Dec 20 15:08:33.105: INFO: Pod pod-projected-configmaps-09bbb918-d930-4dfa-a25e-5dcfb98d5769 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:08:33.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-519" for this suite.
Dec 20 15:08:39.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:08:39.317: INFO: namespace projected-519 deletion completed in 6.206287785s

• [SLOW TEST:16.650 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
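
"Multiple volumes in the same pod" here means the same ConfigMap projected into two separate volumes and read through both mount paths. A minimal sketch; all names are illustrative:

    kubectl create configmap projected-demo --from-literal=data-1=value-1

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/cfg-a/data-1 /etc/cfg-b/data-1"]
        volumeMounts:
        - name: cfg-a
          mountPath: /etc/cfg-a
        - name: cfg-b
          mountPath: /etc/cfg-b
      volumes:
      - name: cfg-a
        projected:
          sources:
          - configMap:
              name: projected-demo
      - name: cfg-b
        projected:
          sources:
          - configMap:
              name: projected-demo
    EOF
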
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:08:39.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 20 15:08:48.100: INFO: Successfully updated pod "annotationupdate0d006e42-b26c-46b0-89aa-fc3f000735c2"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:08:52.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6550" for this suite.
Dec 20 15:09:14.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:09:14.375: INFO: namespace downward-api-6550 deletion completed in 22.1366517s

• [SLOW TEST:35.057 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
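
"Successfully updated pod" above is the framework patching an annotation and then waiting for the kubelet to rewrite the downward API file inside the running container. The same flow by hand, with illustrative names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotation-demo
      annotations:
        build: one
    spec:
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
    EOF

    # the mounted file is rewritten atomically after the annotation changes
    kubectl annotate pod annotation-demo build=two --overwrite
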
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:09:14.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 20 15:09:15.125: INFO: Waiting up to 5m0s for pod "pod-ed2a046b-0892-42c0-ac70-b006d857dd7f" in namespace "emptydir-6704" to be "success or failure"
Dec 20 15:09:15.149: INFO: Pod "pod-ed2a046b-0892-42c0-ac70-b006d857dd7f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.132639ms
Dec 20 15:09:17.156: INFO: Pod "pod-ed2a046b-0892-42c0-ac70-b006d857dd7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030807506s
Dec 20 15:09:19.168: INFO: Pod "pod-ed2a046b-0892-42c0-ac70-b006d857dd7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042380275s
Dec 20 15:09:21.179: INFO: Pod "pod-ed2a046b-0892-42c0-ac70-b006d857dd7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053716168s
Dec 20 15:09:23.188: INFO: Pod "pod-ed2a046b-0892-42c0-ac70-b006d857dd7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062846851s
Dec 20 15:09:25.195: INFO: Pod "pod-ed2a046b-0892-42c0-ac70-b006d857dd7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069602343s
STEP: Saw pod success
Dec 20 15:09:25.195: INFO: Pod "pod-ed2a046b-0892-42c0-ac70-b006d857dd7f" satisfied condition "success or failure"
Dec 20 15:09:25.199: INFO: Trying to get logs from node iruya-node pod pod-ed2a046b-0892-42c0-ac70-b006d857dd7f container test-container: 
STEP: delete the pod
Dec 20 15:09:25.268: INFO: Waiting for pod pod-ed2a046b-0892-42c0-ac70-b006d857dd7f to disappear
Dec 20 15:09:25.275: INFO: Pod pod-ed2a046b-0892-42c0-ac70-b006d857dd7f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:09:25.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6704" for this suite.
Dec 20 15:09:31.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:09:31.504: INFO: namespace emptydir-6704 deletion completed in 6.222365966s

• [SLOW TEST:17.129 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:09:31.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1345
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 20 15:09:31.613: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 20 15:10:13.868: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-1345 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 15:10:13.868: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 15:10:14.341: INFO: Waiting for endpoints: map[]
Dec 20 15:10:14.352: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-1345 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 15:10:14.352: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 15:10:14.907: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:10:14.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1345" for this suite.
Dec 20 15:10:38.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:10:39.068: INFO: namespace pod-network-test-1345 deletion completed in 24.145887414s

• [SLOW TEST:67.562 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
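
The ExecWithOptions lines show the mechanics of the check: from inside host-test-container-pod the framework curls the test container pod (10.44.0.2:8080), whose /dial handler relays a UDP hostName request to each netserver pod under test (10.44.0.1 and 10.32.0.4 on port 8081, one per node) and reports which ones answered; "Waiting for endpoints: map[]" means no expected endpoint is still missing. The dial call, lifted from the log (the response shape is an assumption based on the agnhost/netexec convention, not shown in this run):

    # run from a pod on the pod network; IPs are the ones recorded above
    curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'
    # a healthy endpoint shows up in the JSON response, e.g. {"responses":["netserver-0"]}
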
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:10:39.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Dec 20 15:10:39.187: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:10:39.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5944" for this suite.
Dec 20 15:10:45.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:10:45.513: INFO: namespace kubectl-5944 deletion completed in 6.205278559s

• [SLOW TEST:6.445 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
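
Passing -p 0 (equivalently --port=0) asks the proxy to bind an ephemeral local port, which it prints on startup, and the test then curls /api/ through it. By hand:

    kubectl proxy --port=0 &               # prints: Starting to serve on 127.0.0.1:<port>
    curl -s http://127.0.0.1:<port>/api/   # substitute the port printed above
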
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:10:45.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-e5df44b3-405b-4524-aaf3-8158eb48b6e7
STEP: Creating a pod to test consume configMaps
Dec 20 15:10:45.653: INFO: Waiting up to 5m0s for pod "pod-configmaps-d229a503-582e-4744-b741-da3f98d3ab29" in namespace "configmap-2236" to be "success or failure"
Dec 20 15:10:45.658: INFO: Pod "pod-configmaps-d229a503-582e-4744-b741-da3f98d3ab29": Phase="Pending", Reason="", readiness=false. Elapsed: 5.563831ms
Dec 20 15:10:47.667: INFO: Pod "pod-configmaps-d229a503-582e-4744-b741-da3f98d3ab29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013926131s
Dec 20 15:10:49.675: INFO: Pod "pod-configmaps-d229a503-582e-4744-b741-da3f98d3ab29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022031542s
Dec 20 15:10:51.687: INFO: Pod "pod-configmaps-d229a503-582e-4744-b741-da3f98d3ab29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03425823s
Dec 20 15:10:53.700: INFO: Pod "pod-configmaps-d229a503-582e-4744-b741-da3f98d3ab29": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047013061s
Dec 20 15:10:55.711: INFO: Pod "pod-configmaps-d229a503-582e-4744-b741-da3f98d3ab29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05816911s
STEP: Saw pod success
Dec 20 15:10:55.711: INFO: Pod "pod-configmaps-d229a503-582e-4744-b741-da3f98d3ab29" satisfied condition "success or failure"
Dec 20 15:10:55.719: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d229a503-582e-4744-b741-da3f98d3ab29 container configmap-volume-test: 
STEP: delete the pod
Dec 20 15:10:55.811: INFO: Waiting for pod pod-configmaps-d229a503-582e-4744-b741-da3f98d3ab29 to disappear
Dec 20 15:10:55.824: INFO: Pod pod-configmaps-d229a503-582e-4744-b741-da3f98d3ab29 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:10:55.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2236" for this suite.
Dec 20 15:11:01.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:11:02.087: INFO: namespace configmap-2236 deletion completed in 6.176864455s

• [SLOW TEST:16.572 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:11:02.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-ac9086aa-852c-4418-bc3b-7e49c7895586
STEP: Creating a pod to test consume configMaps
Dec 20 15:11:02.217: INFO: Waiting up to 5m0s for pod "pod-configmaps-832970d6-0674-47b7-af5b-be6effe09023" in namespace "configmap-1004" to be "success or failure"
Dec 20 15:11:02.228: INFO: Pod "pod-configmaps-832970d6-0674-47b7-af5b-be6effe09023": Phase="Pending", Reason="", readiness=false. Elapsed: 10.700845ms
Dec 20 15:11:04.238: INFO: Pod "pod-configmaps-832970d6-0674-47b7-af5b-be6effe09023": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02060718s
Dec 20 15:11:06.252: INFO: Pod "pod-configmaps-832970d6-0674-47b7-af5b-be6effe09023": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034592713s
Dec 20 15:11:08.264: INFO: Pod "pod-configmaps-832970d6-0674-47b7-af5b-be6effe09023": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046503039s
Dec 20 15:11:10.276: INFO: Pod "pod-configmaps-832970d6-0674-47b7-af5b-be6effe09023": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058600889s
Dec 20 15:11:12.283: INFO: Pod "pod-configmaps-832970d6-0674-47b7-af5b-be6effe09023": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065897849s
STEP: Saw pod success
Dec 20 15:11:12.283: INFO: Pod "pod-configmaps-832970d6-0674-47b7-af5b-be6effe09023" satisfied condition "success or failure"
Dec 20 15:11:12.286: INFO: Trying to get logs from node iruya-node pod pod-configmaps-832970d6-0674-47b7-af5b-be6effe09023 container configmap-volume-test: 
STEP: delete the pod
Dec 20 15:11:12.513: INFO: Waiting for pod pod-configmaps-832970d6-0674-47b7-af5b-be6effe09023 to disappear
Dec 20 15:11:12.533: INFO: Pod pod-configmaps-832970d6-0674-47b7-af5b-be6effe09023 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:11:12.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1004" for this suite.
Dec 20 15:11:18.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:11:18.738: INFO: namespace configmap-1004 deletion completed in 6.195038439s

• [SLOW TEST:16.651 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
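
"Mappings and Item mode" means individual ConfigMap keys are remapped to new file paths via items, each carrying its own mode instead of the volume-wide defaultMode. A sketch with illustrative names:

    kubectl create configmap mode-demo --from-literal=data-1=value-1

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox:1.29
        command: ["sh", "-c", "ls -lR /etc/cfg && cat /etc/cfg/path/to/data-1"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/cfg
      volumes:
      - name: cfg
        configMap:
          name: mode-demo
          items:
          - key: data-1
            path: path/to/data-1
            mode: 0400         # per-item mode overrides the volume's defaultMode
    EOF
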
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:11:18.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 20 15:11:18.804: INFO: PodSpec: initContainers in spec.initContainers
Dec 20 15:12:23.471: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-44dcd199-fb5d-4ee2-b1f6-38446591bdaf", GenerateName:"", Namespace:"init-container-3957", SelfLink:"/api/v1/namespaces/init-container-3957/pods/pod-init-44dcd199-fb5d-4ee2-b1f6-38446591bdaf", UID:"dd764880-4f52-4d47-b1ba-90f96dff3881", ResourceVersion:"17406335", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712451478, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"804163536"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ngvl4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0027f8000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ngvl4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ngvl4", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ngvl4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a08088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0015031a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a08110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a08130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002a08138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002a0813c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712451479, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712451479, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712451479, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712451478, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002e90060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00085a150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00085aee0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://56d46462ae675013e2c062456f581f6a6fd55ed23f0c4f10a78bc404dfca77bc"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e900e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e900c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:12:23.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3957" for this suite.
Dec 20 15:12:45.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:12:45.740: INFO: namespace init-container-3957 deletion completed in 22.23008511s

• [SLOW TEST:87.002 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
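[Editor's note: the pod dump a few lines up is the state this test asserts on: init1 has Terminated with RestartCount:3 while init2 and run1 are still Waiting. Below is a minimal Go sketch of a pod of that shape, assuming /bin/false and /bin/true as the init commands (the init containers' commands are not visible in this log); under RestartPolicy Always the kubelet keeps restarting the failing init1 and never starts run1.]

    // Minimal sketch (not the suite's source) of a pod shaped like the dump
    // above. The commands are assumptions chosen so init1 always fails.
    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &v1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-never-starts"},
    		Spec: v1.PodSpec{
    			RestartPolicy: v1.RestartPolicyAlways,
    			InitContainers: []v1.Container{
    				// init1 exits non-zero, so the kubelet restarts it forever
    				// and init2/run1 stay Waiting, exactly as in the dump.
    				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/false"}},
    				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
    			},
    			Containers: []v1.Container{
    				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
    			},
    		},
    	}
    	fmt.Printf("%+v\n", pod)
    }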
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:12:45.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 20 15:12:45.907: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bca0deb1-ff77-45d7-9de0-2400a008c64a" in namespace "downward-api-6601" to be "success or failure"
Dec 20 15:12:45.916: INFO: Pod "downwardapi-volume-bca0deb1-ff77-45d7-9de0-2400a008c64a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.772825ms
Dec 20 15:12:47.929: INFO: Pod "downwardapi-volume-bca0deb1-ff77-45d7-9de0-2400a008c64a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0216917s
Dec 20 15:12:49.939: INFO: Pod "downwardapi-volume-bca0deb1-ff77-45d7-9de0-2400a008c64a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031904413s
Dec 20 15:12:51.947: INFO: Pod "downwardapi-volume-bca0deb1-ff77-45d7-9de0-2400a008c64a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039835147s
Dec 20 15:12:53.955: INFO: Pod "downwardapi-volume-bca0deb1-ff77-45d7-9de0-2400a008c64a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047790822s
Dec 20 15:12:55.962: INFO: Pod "downwardapi-volume-bca0deb1-ff77-45d7-9de0-2400a008c64a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055333528s
STEP: Saw pod success
Dec 20 15:12:55.963: INFO: Pod "downwardapi-volume-bca0deb1-ff77-45d7-9de0-2400a008c64a" satisfied condition "success or failure"
Dec 20 15:12:55.966: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bca0deb1-ff77-45d7-9de0-2400a008c64a container client-container: 
STEP: delete the pod
Dec 20 15:12:56.012: INFO: Waiting for pod downwardapi-volume-bca0deb1-ff77-45d7-9de0-2400a008c64a to disappear
Dec 20 15:12:56.016: INFO: Pod downwardapi-volume-bca0deb1-ff77-45d7-9de0-2400a008c64a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:12:56.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6601" for this suite.
Dec 20 15:13:02.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:13:02.196: INFO: namespace downward-api-6601 deletion completed in 6.175987215s

• [SLOW TEST:16.454 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
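[Editor's note: a hedged sketch of the kind of pod this spec creates, assuming the conventional downward API layout: the volume projects limits.cpu for a container that deliberately sets no CPU limit, so the projected file should contain the node's allocatable CPU instead. The mount path, file name, and command are illustrative assumptions, not taken from the suite's source.]

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &v1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
    		Spec: v1.PodSpec{
    			RestartPolicy: v1.RestartPolicyNever,
    			Containers: []v1.Container{{
    				Name:    "client-container",
    				Image:   "busybox:1.29",
    				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
    				// No resources.limits.cpu on purpose: the projected value
    				// should then fall back to the node's allocatable CPU.
    				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
    			}},
    			Volumes: []v1.Volume{{
    				Name: "podinfo",
    				VolumeSource: v1.VolumeSource{
    					DownwardAPI: &v1.DownwardAPIVolumeSource{
    						Items: []v1.DownwardAPIVolumeFile{{
    							Path: "cpu_limit",
    							ResourceFieldRef: &v1.ResourceFieldSelector{
    								ContainerName: "client-container",
    								Resource:      "limits.cpu",
    							},
    						}},
    					},
    				},
    			}},
    		},
    	}
    	fmt.Printf("%+v\n", pod)
    }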
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:13:02.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-1b9639f0-a4a8-4104-8469-aefd73ae83ca
STEP: Creating a pod to test consume secrets
Dec 20 15:13:02.652: INFO: Waiting up to 5m0s for pod "pod-secrets-f1b19166-1438-43e3-830f-5817fbc7480a" in namespace "secrets-2278" to be "success or failure"
Dec 20 15:13:02.699: INFO: Pod "pod-secrets-f1b19166-1438-43e3-830f-5817fbc7480a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.318383ms
Dec 20 15:13:04.707: INFO: Pod "pod-secrets-f1b19166-1438-43e3-830f-5817fbc7480a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054613628s
Dec 20 15:13:06.734: INFO: Pod "pod-secrets-f1b19166-1438-43e3-830f-5817fbc7480a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081433505s
Dec 20 15:13:08.742: INFO: Pod "pod-secrets-f1b19166-1438-43e3-830f-5817fbc7480a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090019501s
Dec 20 15:13:10.750: INFO: Pod "pod-secrets-f1b19166-1438-43e3-830f-5817fbc7480a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.097353553s
STEP: Saw pod success
Dec 20 15:13:10.750: INFO: Pod "pod-secrets-f1b19166-1438-43e3-830f-5817fbc7480a" satisfied condition "success or failure"
Dec 20 15:13:10.755: INFO: Trying to get logs from node iruya-node pod pod-secrets-f1b19166-1438-43e3-830f-5817fbc7480a container secret-volume-test: 
STEP: delete the pod
Dec 20 15:13:10.833: INFO: Waiting for pod pod-secrets-f1b19166-1438-43e3-830f-5817fbc7480a to disappear
Dec 20 15:13:10.855: INFO: Pod pod-secrets-f1b19166-1438-43e3-830f-5817fbc7480a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:13:10.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2278" for this suite.
Dec 20 15:13:16.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:13:17.078: INFO: namespace secrets-2278 deletion completed in 6.214023251s
STEP: Destroying namespace "secret-namespace-4032" for this suite.
Dec 20 15:13:23.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:13:23.237: INFO: namespace secret-namespace-4032 deletion completed in 6.159077683s

• [SLOW TEST:21.039 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
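[Editor's note: the two namespaces torn down above, secrets-2278 and secret-namespace-4032, reveal the setup: a secret with the same name exists in both, and the pod must mount only the copy from its own namespace. A minimal sketch follows, with a simplified secret name standing in for the generated one in the log:]

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Same secret name in two namespaces; only the pod's own namespace
    	// should be consulted when the volume is mounted.
    	mine := &v1.Secret{
    		ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "secrets-2278"},
    		Data:       map[string][]byte{"data-1": []byte("value-1")},
    	}
    	other := mine.DeepCopy()
    	other.Namespace = "secret-namespace-4032"
    	other.Data = map[string][]byte{"data-1": []byte("must-not-be-seen")}

    	pod := &v1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo", Namespace: mine.Namespace},
    		Spec: v1.PodSpec{
    			RestartPolicy: v1.RestartPolicyNever,
    			Containers: []v1.Container{{
    				Name:         "secret-volume-test",
    				Image:        "busybox:1.29",
    				Command:      []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
    				VolumeMounts: []v1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
    			}},
    			Volumes: []v1.Volume{{
    				Name: "secret-volume",
    				VolumeSource: v1.VolumeSource{
    					// SecretName is resolved in the pod's namespace only.
    					Secret: &v1.SecretVolumeSource{SecretName: mine.Name},
    				},
    			}},
    		},
    	}
    	fmt.Println(mine.Namespace, other.Namespace, pod.Name)
    }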
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:13:23.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 20 15:13:23.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-803132d7-ed49-4a43-ab48-7feb9c03ad7e" in namespace "downward-api-7724" to be "success or failure"
Dec 20 15:13:23.395: INFO: Pod "downwardapi-volume-803132d7-ed49-4a43-ab48-7feb9c03ad7e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.064245ms
Dec 20 15:13:25.403: INFO: Pod "downwardapi-volume-803132d7-ed49-4a43-ab48-7feb9c03ad7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01661895s
Dec 20 15:13:27.415: INFO: Pod "downwardapi-volume-803132d7-ed49-4a43-ab48-7feb9c03ad7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028377393s
Dec 20 15:13:29.431: INFO: Pod "downwardapi-volume-803132d7-ed49-4a43-ab48-7feb9c03ad7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044297952s
Dec 20 15:13:31.441: INFO: Pod "downwardapi-volume-803132d7-ed49-4a43-ab48-7feb9c03ad7e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054504756s
Dec 20 15:13:33.450: INFO: Pod "downwardapi-volume-803132d7-ed49-4a43-ab48-7feb9c03ad7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063958769s
STEP: Saw pod success
Dec 20 15:13:33.451: INFO: Pod "downwardapi-volume-803132d7-ed49-4a43-ab48-7feb9c03ad7e" satisfied condition "success or failure"
Dec 20 15:13:33.458: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-803132d7-ed49-4a43-ab48-7feb9c03ad7e container client-container: 
STEP: delete the pod
Dec 20 15:13:33.591: INFO: Waiting for pod downwardapi-volume-803132d7-ed49-4a43-ab48-7feb9c03ad7e to disappear
Dec 20 15:13:33.641: INFO: Pod downwardapi-volume-803132d7-ed49-4a43-ab48-7feb9c03ad7e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:13:33.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7724" for this suite.
Dec 20 15:13:39.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:13:39.842: INFO: namespace downward-api-7724 deletion completed in 6.184755093s

• [SLOW TEST:16.604 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
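[Editor's note: DefaultMode is the per-volume permission applied to every projected file that does not set its own Mode. A minimal sketch of the knob under test, assuming 0400 as the example mode (the exact mode the suite uses is not shown in this log):]

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    )

    func main() {
    	mode := int32(0400) // octal; applies to each file lacking an explicit Mode
    	vol := v1.Volume{
    		Name: "podinfo",
    		VolumeSource: v1.VolumeSource{
    			DownwardAPI: &v1.DownwardAPIVolumeSource{
    				DefaultMode: &mode,
    				Items: []v1.DownwardAPIVolumeFile{{
    					Path:     "podname",
    					FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
    				}},
    			},
    		},
    	}
    	fmt.Printf("%+v\n", vol)
    }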
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:13:39.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 20 15:13:50.067: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-5d15bf53-0f8a-429e-844a-762029efa2f6,GenerateName:,Namespace:events-9918,SelfLink:/api/v1/namespaces/events-9918/pods/send-events-5d15bf53-0f8a-429e-844a-762029efa2f6,UID:c49ec49c-ae3e-4423-8297-1842351874d7,ResourceVersion:17406555,Generation:0,CreationTimestamp:2019-12-20 15:13:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 16248282,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tmps {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tmps,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-5tmps true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002871220} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002871240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 15:13:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 15:13:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 15:13:49 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 15:13:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-20 15:13:40 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-20 15:13:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://826c54bacc9df08ebb8be9d0bff9fa48df4935d408fa9cdeb7ccb07efc77ea10}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 20 15:13:52.075: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 20 15:13:54.086: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:13:54.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9918" for this suite.
Dec 20 15:14:34.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:14:34.322: INFO: namespace events-9918 deletion completed in 40.149067905s

• [SLOW TEST:54.477 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
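[Editor's note: the two "checking for ... event" steps amount to listing Events filtered by the involved pod and the reporting source. Below is a hedged sketch against a client-go contemporary with this suite (v1.15-era, pre-context List signature). The kubeconfig path and pod coordinates are copied from the log above; swapping "default-scheduler" for "kubelet" gives the second check.]

    package main

    import (
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/fields"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)

    	// Scheduler-side events for one pod; use "kubelet" as the source for
    	// the kubelet-side check.
    	sel := fields.Set{
    		"involvedObject.kind":      "Pod",
    		"involvedObject.name":      "send-events-5d15bf53-0f8a-429e-844a-762029efa2f6",
    		"involvedObject.namespace": "events-9918",
    		"source":                   "default-scheduler",
    	}.AsSelector().String()

    	events, err := client.CoreV1().Events("events-9918").List(metav1.ListOptions{FieldSelector: sel})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("saw %d scheduler event(s)\n", len(events.Items))
    }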
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 20 15:14:34.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1579
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 20 15:14:34.388: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 20 15:15:18.659: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1579 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 15:15:18.659: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 15:15:19.042: INFO: Found all expected endpoints: [netserver-0]
Dec 20 15:15:19.052: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1579 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 15:15:19.053: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 15:15:19.352: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 20 15:15:19.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1579" for this suite.
Dec 20 15:15:43.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 15:15:43.560: INFO: namespace pod-network-test-1579 deletion completed in 24.197603871s

• [SLOW TEST:69.237 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
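[Editor's note: the ExecWithOptions lines show the probe itself: curl http://<podIP>:8080/hostName from a helper pod and expect the serving pod's name back. The same check as a standalone Go sketch, with the pod IP hardcoded from this run (the suite discovers pod IPs dynamically):]

    package main

    import (
    	"fmt"
    	"io/ioutil"
    	"net/http"
    	"strings"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 15 * time.Second} // mirrors curl --max-time 15
    	resp, err := client.Get("http://10.44.0.1:8080/hostName")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	body, err := ioutil.ReadAll(resp.Body)
    	if err != nil {
    		panic(err)
    	}
    	// The endpoint answers with the serving pod's name, e.g. "netserver-0".
    	fmt.Println("hostName:", strings.TrimSpace(string(body)))
    }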
SSSSSSSSSSSSSSSSSSSSS
Dec 20 15:15:43.561: INFO: Running AfterSuite actions on all nodes
Dec 20 15:15:43.561: INFO: Running AfterSuite actions on node 1
Dec 20 15:15:43.561: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 8372.774 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (8373.17s)
FAIL
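
[Editor's note: with a single failed spec, the usual follow-up is to re-run just that spec via Ginkgo's focus regex. The invocation below is an assumption about how this suite is launched; the binary name and any flags beyond --kubeconfig and --ginkgo.focus are omitted and may differ in your harness.]

    ./e2e.test --kubeconfig=/root/.kube/config --ginkgo.focus="Should recreate evicted statefulset"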