I0502 12:55:44.380431 6 e2e.go:243] Starting e2e run "b8b5c9d6-06db-4cba-9bac-9e0da1d5633f" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588424143 - Will randomize all specs
Will run 215 of 4412 specs
May 2 12:55:44.566: INFO: >>> kubeConfig: /root/.kube/config
May 2 12:55:44.570: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 2 12:55:44.593: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 2 12:55:44.624: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 2 12:55:44.624: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 2 12:55:44.624: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 2 12:55:44.633: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 2 12:55:44.633: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 2 12:55:44.633: INFO: e2e test version: v1.15.11
May 2 12:55:44.634: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 12:55:44.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
May 2 12:55:44.711: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 2 12:55:52.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:55:52.774: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:55:54.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:55:54.779: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:55:56.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:55:56.778: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:55:58.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:55:58.779: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:56:00.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:56:00.779: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:56:02.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:56:02.779: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:56:04.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:56:04.779: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:56:06.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:56:06.779: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:56:08.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:56:08.779: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:56:10.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:56:10.778: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:56:12.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:56:12.779: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:56:14.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:56:14.779: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:56:16.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:56:16.778: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:56:18.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:56:18.779: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:56:20.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:56:20.778: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:56:22.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:56:22.779: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 12:56:22.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9896" for this suite.
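
Note: a minimal sketch of the kind of pod this spec exercises — a container with a postStart exec hook, mirroring pod-with-poststart-exec-hook above. The image and commands are illustrative assumptions, not values taken from this run.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox                 # assumed image; the e2e suite uses its own test images
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:                      # the hook the spec waits on at "check poststart hook"
          command: ["sh", "-c", "echo started > /tmp/poststart"]
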
May 2 12:56:46.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:56:46.877: INFO: namespace container-lifecycle-hook-9896 deletion completed in 24.094862772s
• [SLOW TEST:62.243 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 12:56:46.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-23fd815d-00db-430b-8a8a-45c57ff5e5f5
STEP: Creating configMap with name cm-test-opt-upd-0519b8ee-5c0b-4443-911e-3f9bf731fc93
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-23fd815d-00db-430b-8a8a-45c57ff5e5f5
STEP: Updating configmap cm-test-opt-upd-0519b8ee-5c0b-4443-911e-3f9bf731fc93
STEP: Creating configMap with name cm-test-opt-create-3bcbfd05-2d06-4c1b-86dc-2a3f25db7592
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 12:57:59.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5048" for this suite.
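
Note: the "optional" behavior above comes from marking the configMap volume source optional, so the pod keeps running when a referenced ConfigMap (the -del one) is deleted, and picks up a ConfigMap (the -create one) that did not exist at pod start. A hedged sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "sleep 600"]
    volumeMounts:
    - name: cm-create
      mountPath: /etc/cm-create
  volumes:
  - name: cm-create
    configMap:
      name: cm-test-opt-create         # hypothetical; may not exist at pod start
      optional: true                   # pod still runs; files appear once the ConfigMap is created
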
May 2 12:58:21.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:58:21.528: INFO: namespace configmap-5048 deletion completed in 22.147953901s
• [SLOW TEST:94.650 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 12:58:21.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
May 2 12:58:21.633: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4782" to be "success or failure"
May 2 12:58:21.687: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 53.665226ms
May 2 12:58:23.691: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058041771s
May 2 12:58:25.695: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061655615s
May 2 12:58:27.698: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065135063s
STEP: Saw pod success
May 2 12:58:27.699: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
May 2 12:58:27.701: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
May 2 12:58:27.718: INFO: Waiting for pod pod-host-path-test to disappear
May 2 12:58:27.728: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 12:58:27.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4782" for this suite.
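
Note: the hostPath mode check above boils down to mounting a directory from the node and reading its file mode from inside the container. A hedged sketch of such a pod; the node path, image, and command are assumptions (the suite uses a dedicated mounttest image):

apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox                      # assumed image
    command: ["sh", "-c", "stat -c %a /test-volume"]   # print the mode, then exit
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-e2e           # hypothetical node path
      type: DirectoryOrCreate
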
May 2 12:58:33.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:58:33.855: INFO: namespace hostpath-4782 deletion completed in 6.12273517s
• [SLOW TEST:12.327 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 12:58:33.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 2 12:58:34.040: INFO: Waiting up to 5m0s for pod "pod-11af94ab-7673-408d-abb2-46a5d00bfab2" in namespace "emptydir-786" to be "success or failure"
May 2 12:58:34.077: INFO: Pod "pod-11af94ab-7673-408d-abb2-46a5d00bfab2": Phase="Pending", Reason="", readiness=false. Elapsed: 36.598056ms
May 2 12:58:36.234: INFO: Pod "pod-11af94ab-7673-408d-abb2-46a5d00bfab2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19430912s
May 2 12:58:38.239: INFO: Pod "pod-11af94ab-7673-408d-abb2-46a5d00bfab2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.19871931s
STEP: Saw pod success
May 2 12:58:38.239: INFO: Pod "pod-11af94ab-7673-408d-abb2-46a5d00bfab2" satisfied condition "success or failure"
May 2 12:58:38.242: INFO: Trying to get logs from node iruya-worker2 pod pod-11af94ab-7673-408d-abb2-46a5d00bfab2 container test-container:
STEP: delete the pod
May 2 12:58:38.315: INFO: Waiting for pod pod-11af94ab-7673-408d-abb2-46a5d00bfab2 to disappear
May 2 12:58:38.340: INFO: Pod pod-11af94ab-7673-408d-abb2-46a5d00bfab2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 12:58:38.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-786" for this suite.
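
Note: "(root,0644,tmpfs)" encodes the matrix point under test — write as root, expect file mode 0644, on a memory-backed emptyDir. The (non-root,...) variants later in this run differ mainly by a pod securityContext. A hedged sketch, with assumed image and command:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                        # assumed image
    command: ["sh", "-c", "echo x > /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                      # tmpfs-backed, per the spec title
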
May 2 12:58:44.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:58:44.432: INFO: namespace emptydir-786 deletion completed in 6.089366372s
• [SLOW TEST:10.577 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 12:58:44.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 2 12:58:44.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 12:58:48.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5966" for this suite.
May 2 12:59:28.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:59:28.664: INFO: namespace pods-5966 deletion completed in 40.099123457s
• [SLOW TEST:44.232 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 12:59:28.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 2 12:59:28.726: INFO: Waiting up to 5m0s for pod "pod-4b89b4e6-241d-434d-91a8-88b53cd6c7ab" in namespace "emptydir-5268" to be "success or failure"
May 2 12:59:28.730: INFO: Pod "pod-4b89b4e6-241d-434d-91a8-88b53cd6c7ab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.638601ms
May 2 12:59:30.734: INFO: Pod "pod-4b89b4e6-241d-434d-91a8-88b53cd6c7ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008063113s
May 2 12:59:32.738: INFO: Pod "pod-4b89b4e6-241d-434d-91a8-88b53cd6c7ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01215263s
STEP: Saw pod success
May 2 12:59:32.738: INFO: Pod "pod-4b89b4e6-241d-434d-91a8-88b53cd6c7ab" satisfied condition "success or failure"
May 2 12:59:32.741: INFO: Trying to get logs from node iruya-worker pod pod-4b89b4e6-241d-434d-91a8-88b53cd6c7ab container test-container:
STEP: delete the pod
May 2 12:59:32.775: INFO: Waiting for pod pod-4b89b4e6-241d-434d-91a8-88b53cd6c7ab to disappear
May 2 12:59:32.799: INFO: Pod pod-4b89b4e6-241d-434d-91a8-88b53cd6c7ab no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 12:59:32.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5268" for this suite.
May 2 12:59:38.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:59:38.900: INFO: namespace emptydir-5268 deletion completed in 6.096689621s
• [SLOW TEST:10.235 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 12:59:38.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 2 12:59:38.931: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 12:59:46.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3901" for this suite.
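
Note: this spec verifies that on a restartPolicy: Never pod, init containers run to completion, in order, before the app container starts. A hedged sketch of such a pod; the names and image are illustrative, not taken from the run:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example                 # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox                        # assumed image
    command: ["sh", "-c", "true"]         # must succeed before init2 starts
  - name: init2
    image: busybox
    command: ["sh", "-c", "true"]
  containers:
  - name: run1
    image: busybox
    command: ["sh", "-c", "true"]         # runs only after both inits complete
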
May 2 12:59:52.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:59:52.887: INFO: namespace init-container-3901 deletion completed in 6.085852288s
• [SLOW TEST:13.986 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 12:59:52.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-593ae13f-89c4-4009-8a26-4e611626c54a
STEP: Creating a pod to test consume configMaps
May 2 12:59:52.984: INFO: Waiting up to 5m0s for pod "pod-configmaps-e00512fd-4492-4316-966f-e455a5c4e830" in namespace "configmap-665" to be "success or failure"
May 2 12:59:53.031: INFO: Pod "pod-configmaps-e00512fd-4492-4316-966f-e455a5c4e830": Phase="Pending", Reason="", readiness=false. Elapsed: 47.092155ms
May 2 12:59:55.087: INFO: Pod "pod-configmaps-e00512fd-4492-4316-966f-e455a5c4e830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102166029s
May 2 12:59:57.090: INFO: Pod "pod-configmaps-e00512fd-4492-4316-966f-e455a5c4e830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105575998s
STEP: Saw pod success
May 2 12:59:57.090: INFO: Pod "pod-configmaps-e00512fd-4492-4316-966f-e455a5c4e830" satisfied condition "success or failure"
May 2 12:59:57.093: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e00512fd-4492-4316-966f-e455a5c4e830 container configmap-volume-test:
STEP: delete the pod
May 2 12:59:57.122: INFO: Waiting for pod pod-configmaps-e00512fd-4492-4316-966f-e455a5c4e830 to disappear
May 2 12:59:57.143: INFO: Pod pod-configmaps-e00512fd-4492-4316-966f-e455a5c4e830 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 12:59:57.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-665" for this suite.
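
Note: "with mappings" refers to the items field on the configMap volume source, which projects selected keys to chosen paths instead of creating one file per key. A hedged sketch; the key and path names are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-mapping-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                        # assumed image
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map     # hypothetical ConfigMap
      items:
      - key: data-1                       # assumed key
        path: path/to/data                # mapped location inside the mount
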
May 2 13:00:03.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:00:03.241: INFO: namespace configmap-665 deletion completed in 6.093979353s
• [SLOW TEST:10.353 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:00:03.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 2 13:00:03.332: INFO: Waiting up to 5m0s for pod "pod-33830247-3ea3-4fd0-b24a-d534cd75aa51" in namespace "emptydir-4448" to be "success or failure"
May 2 13:00:03.354: INFO: Pod "pod-33830247-3ea3-4fd0-b24a-d534cd75aa51": Phase="Pending", Reason="", readiness=false. Elapsed: 22.118481ms
May 2 13:00:05.428: INFO: Pod "pod-33830247-3ea3-4fd0-b24a-d534cd75aa51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096462725s
May 2 13:00:07.432: INFO: Pod "pod-33830247-3ea3-4fd0-b24a-d534cd75aa51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100576838s
STEP: Saw pod success
May 2 13:00:07.432: INFO: Pod "pod-33830247-3ea3-4fd0-b24a-d534cd75aa51" satisfied condition "success or failure"
May 2 13:00:07.436: INFO: Trying to get logs from node iruya-worker pod pod-33830247-3ea3-4fd0-b24a-d534cd75aa51 container test-container:
STEP: delete the pod
May 2 13:00:07.506: INFO: Waiting for pod pod-33830247-3ea3-4fd0-b24a-d534cd75aa51 to disappear
May 2 13:00:07.509: INFO: Pod pod-33830247-3ea3-4fd0-b24a-d534cd75aa51 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 13:00:07.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4448" for this suite.
May 2 13:00:13.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:00:13.609: INFO: namespace emptydir-4448 deletion completed in 6.09527493s
• [SLOW TEST:10.368 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:00:13.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 2 13:00:13.758: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8b8b6fb-b01e-453d-a9a1-e3ef5031147b" in namespace "projected-6549" to be "success or failure"
May 2 13:00:13.787: INFO: Pod "downwardapi-volume-f8b8b6fb-b01e-453d-a9a1-e3ef5031147b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.637626ms
May 2 13:00:15.791: INFO: Pod "downwardapi-volume-f8b8b6fb-b01e-453d-a9a1-e3ef5031147b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032786658s
May 2 13:00:17.794: INFO: Pod "downwardapi-volume-f8b8b6fb-b01e-453d-a9a1-e3ef5031147b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036135851s
STEP: Saw pod success
May 2 13:00:17.795: INFO: Pod "downwardapi-volume-f8b8b6fb-b01e-453d-a9a1-e3ef5031147b" satisfied condition "success or failure"
May 2 13:00:17.797: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-f8b8b6fb-b01e-453d-a9a1-e3ef5031147b container client-container:
STEP: delete the pod
May 2 13:00:17.817: INFO: Waiting for pod downwardapi-volume-f8b8b6fb-b01e-453d-a9a1-e3ef5031147b to disappear
May 2 13:00:17.821: INFO: Pod downwardapi-volume-f8b8b6fb-b01e-453d-a9a1-e3ef5031147b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 13:00:17.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6549" for this suite.
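
Note: when a container sets no CPU limit, a downward API volume file backed by resourceFieldRef reports the node's allocatable CPU as the default limit, which is what the pod's output check above verifies. A hedged sketch of the projected form; file path and names are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                         # assumed image; no resources.limits on purpose
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu         # falls back to node allocatable when unset
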
May 2 13:00:23.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:00:23.904: INFO: namespace projected-6549 deletion completed in 6.08017638s
• [SLOW TEST:10.294 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:00:23.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 2 13:00:23.977: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 2 13:00:24.007: INFO: Number of nodes with available pods: 0
May 2 13:00:24.007: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 2 13:00:24.081: INFO: Number of nodes with available pods: 0
May 2 13:00:24.082: INFO: Node iruya-worker is running more than one daemon pod
May 2 13:00:25.086: INFO: Number of nodes with available pods: 0
May 2 13:00:25.086: INFO: Node iruya-worker is running more than one daemon pod
May 2 13:00:26.086: INFO: Number of nodes with available pods: 0
May 2 13:00:26.086: INFO: Node iruya-worker is running more than one daemon pod
May 2 13:00:27.089: INFO: Number of nodes with available pods: 1
May 2 13:00:27.089: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 2 13:00:27.121: INFO: Number of nodes with available pods: 1
May 2 13:00:27.121: INFO: Number of running nodes: 0, number of available pods: 1
May 2 13:00:28.127: INFO: Number of nodes with available pods: 0
May 2 13:00:28.127: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 2 13:00:28.163: INFO: Number of nodes with available pods: 0
May 2 13:00:28.163: INFO: Node iruya-worker is running more than one daemon pod
May 2 13:00:29.167: INFO: Number of nodes with available pods: 0
May 2 13:00:29.167: INFO: Node iruya-worker is running more than one daemon pod
May 2 13:00:30.167: INFO: Number of nodes with available pods: 0
May 2 13:00:30.167: INFO: Node iruya-worker is running more than one daemon pod
May 2 13:00:31.166: INFO: Number of nodes with available pods: 0
May 2 13:00:31.166: INFO: Node iruya-worker is running more than one daemon pod
May 2 13:00:32.170: INFO: Number of nodes with available pods: 0
May 2 13:00:32.170: INFO: Node iruya-worker is running more than one daemon pod
May 2 13:00:33.167: INFO: Number of nodes with available pods: 0
May 2 13:00:33.167: INFO: Node iruya-worker is running more than one daemon pod
May 2 13:00:34.166: INFO: Number of nodes with available pods: 1
May 2 13:00:34.166: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8394, will wait for the garbage collector to delete the pods
May 2 13:00:34.231: INFO: Deleting DaemonSet.extensions daemon-set took: 6.635167ms
May 2 13:00:34.532: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.268413ms
May 2 13:00:42.235: INFO: Number of nodes with available pods: 0
May 2 13:00:42.235: INFO: Number of running nodes: 0, number of available pods: 0
May 2 13:00:42.242: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8394/daemonsets","resourceVersion":"8618422"},"items":null}
May 2 13:00:42.245: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8394/pods","resourceVersion":"8618422"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 13:00:42.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8394" for this suite.
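
Note: the spec drives scheduling purely through the DaemonSet's nodeSelector and node labels (blue, then green), then switches the update strategy to RollingUpdate. A hedged sketch; the label key, pod labels, and image are assumptions:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate                   # the strategy the spec switches to
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green                      # assumed label key; the spec moves blue -> green
      containers:
      - name: app
        image: busybox                    # assumed image
        command: ["sh", "-c", "sleep 3600"]
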
May 2 13:00:48.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:00:48.437: INFO: namespace daemonsets-8394 deletion completed in 6.148695448s
• [SLOW TEST:24.533 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:00:48.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-540fa12e-ebf1-4b67-acbc-ddc354facebb
STEP: Creating a pod to test consume secrets
May 2 13:00:48.574: INFO: Waiting up to 5m0s for pod "pod-secrets-c8008a74-dfc6-455d-aab0-a7fe79ff2107" in namespace "secrets-416" to be "success or failure"
May 2 13:00:48.589: INFO: Pod "pod-secrets-c8008a74-dfc6-455d-aab0-a7fe79ff2107": Phase="Pending", Reason="", readiness=false. Elapsed: 15.030532ms
May 2 13:00:50.860: INFO: Pod "pod-secrets-c8008a74-dfc6-455d-aab0-a7fe79ff2107": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285681701s
May 2 13:00:52.864: INFO: Pod "pod-secrets-c8008a74-dfc6-455d-aab0-a7fe79ff2107": Phase="Running", Reason="", readiness=true. Elapsed: 4.289712142s
May 2 13:00:54.867: INFO: Pod "pod-secrets-c8008a74-dfc6-455d-aab0-a7fe79ff2107": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.293201684s
STEP: Saw pod success
May 2 13:00:54.867: INFO: Pod "pod-secrets-c8008a74-dfc6-455d-aab0-a7fe79ff2107" satisfied condition "success or failure"
May 2 13:00:54.870: INFO: Trying to get logs from node iruya-worker pod pod-secrets-c8008a74-dfc6-455d-aab0-a7fe79ff2107 container secret-volume-test:
STEP: delete the pod
May 2 13:00:54.907: INFO: Waiting for pod pod-secrets-c8008a74-dfc6-455d-aab0-a7fe79ff2107 to disappear
May 2 13:00:54.918: INFO: Pod pod-secrets-c8008a74-dfc6-455d-aab0-a7fe79ff2107 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 13:00:54.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-416" for this suite.
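
Note: defaultMode sets the mode of the projected secret files, and fsGroup plus runAsUser make them readable to a non-root user, which is what the pod's output check covers. A hedged sketch; the uid/gid, mode, and names are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example               # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                       # assumed non-root uid
    fsGroup: 1001                         # assumed gid applied to the volume files
  containers:
  - name: secret-volume-test
    image: busybox                         # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test              # hypothetical Secret
      defaultMode: 0440                    # assumed mode; group-readable for fsGroup
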
May 2 13:01:00.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:01:01.047: INFO: namespace secrets-416 deletion completed in 6.126474891s
• [SLOW TEST:12.610 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:01:01.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
May 2 13:01:01.100: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 13:01:01.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-756" for this suite.
May 2 13:01:07.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:01:07.306: INFO: namespace kubectl-756 deletion completed in 6.102163124s
• [SLOW TEST:6.259 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:01:07.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b7c45535-cf0e-46d3-bef2-8ed1b7f3b8eb
STEP: Creating a pod to test consume secrets
May 2 13:01:07.398: INFO: Waiting up to 5m0s for pod "pod-secrets-fd12d234-e240-4137-88fe-f5df3521c9fe" in namespace "secrets-1664" to be "success or failure"
May 2 13:01:07.413: INFO: Pod "pod-secrets-fd12d234-e240-4137-88fe-f5df3521c9fe": Phase="Pending", Reason="", readiness=false. Elapsed: 14.477616ms
May 2 13:01:09.440: INFO: Pod "pod-secrets-fd12d234-e240-4137-88fe-f5df3521c9fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041942198s
May 2 13:01:11.446: INFO: Pod "pod-secrets-fd12d234-e240-4137-88fe-f5df3521c9fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047005324s
STEP: Saw pod success
May 2 13:01:11.446: INFO: Pod "pod-secrets-fd12d234-e240-4137-88fe-f5df3521c9fe" satisfied condition "success or failure"
May 2 13:01:11.449: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-fd12d234-e240-4137-88fe-f5df3521c9fe container secret-volume-test:
STEP: delete the pod
May 2 13:01:11.630: INFO: Waiting for pod pod-secrets-fd12d234-e240-4137-88fe-f5df3521c9fe to disappear
May 2 13:01:11.636: INFO: Pod pod-secrets-fd12d234-e240-4137-88fe-f5df3521c9fe no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 13:01:11.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1664" for this suite.
May 2 13:01:17.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:01:17.742: INFO: namespace secrets-1664 deletion completed in 6.102355015s
• [SLOW TEST:10.435 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:01:17.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 13:01:21.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3859" for this suite.
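
Note: hostAliases entries are written by the kubelet into the pod's /etc/hosts, which is what this spec asserts. A hedged sketch; the addresses and hostnames are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases-example      # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"                    # assumed address
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: main
    image: busybox                         # assumed image
    command: ["sh", "-c", "cat /etc/hosts"]
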
May 2 13:02:03.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:02:04.003: INFO: namespace kubelet-test-3859 deletion completed in 42.091479673s
• [SLOW TEST:46.261 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:02:04.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0502 13:02:05.126669 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 2 13:02:05.126: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 13:02:05.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4032" for this suite.
May 2 13:02:11.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:02:11.222: INFO: namespace gc-4032 deletion completed in 6.091856375s
• [SLOW TEST:7.218 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:02:11.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 2 13:02:14.331: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 13:02:14.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7828" for this suite.
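
Note: the container writes its termination message ("DONE" above) to a non-default terminationMessagePath while running as a non-root user, and the kubelet surfaces that file's content in the container status. A hedged sketch; path, uid, and image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                         # assumed image
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path under test
    securityContext:
      runAsUser: 1000                      # assumed non-root uid
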
May 2 13:02:20.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:02:20.492: INFO: namespace container-runtime-7828 deletion completed in 6.094218149s
• [SLOW TEST:9.269 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:02:20.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-893f203d-d140-48f3-8ead-b9329f67c313 in namespace container-probe-1249
May 2 13:02:24.565: INFO: Started pod busybox-893f203d-d140-48f3-8ead-b9329f67c313 in namespace container-probe-1249
STEP: checking the pod's current state and verifying that restartCount is present
May 2 13:02:24.568: INFO: Initial restart count of pod busybox-893f203d-d140-48f3-8ead-b9329f67c313 is 0
May 2 13:03:18.691: INFO: Restart count of pod container-probe-1249/busybox-893f203d-d140-48f3-8ead-b9329f67c313 is now 1 (54.122958764s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 13:03:18.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1249" for this suite.
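
Note: the restart above (restartCount 0 -> 1) is forced by an exec liveness probe — the container creates /tmp/health, removes it after a delay, and the failing "cat /tmp/health" makes the kubelet restart it. A hedged sketch; timings and image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-example          # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox                         # assumed image
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]    # the probe named in the spec title
      initialDelaySeconds: 5               # assumed timings
      periodSeconds: 5
      failureThreshold: 1                  # restart promptly once the file is gone
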
May 2 13:03:24.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:03:24.878: INFO: namespace container-probe-1249 deletion completed in 6.129321731s
• [SLOW TEST:64.386 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:03:24.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-9447c813-3947-4f37-9130-242d2cf5fed7
STEP: Creating a pod to test consume configMaps
May 2 13:03:24.947: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3fd675b6-8ec7-43f4-8550-18fbc2f4840c" in namespace "projected-1848" to be "success or failure"
May 2 13:03:24.967: INFO: Pod "pod-projected-configmaps-3fd675b6-8ec7-43f4-8550-18fbc2f4840c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.264941ms
May 2 13:03:26.971: INFO: Pod "pod-projected-configmaps-3fd675b6-8ec7-43f4-8550-18fbc2f4840c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023339177s
May 2 13:03:28.975: INFO: Pod "pod-projected-configmaps-3fd675b6-8ec7-43f4-8550-18fbc2f4840c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027714581s
STEP: Saw pod success
May 2 13:03:28.975: INFO: Pod "pod-projected-configmaps-3fd675b6-8ec7-43f4-8550-18fbc2f4840c" satisfied condition "success or failure"
May 2 13:03:28.979: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-3fd675b6-8ec7-43f4-8550-18fbc2f4840c container projected-configmap-volume-test:
STEP: delete the pod
May 2 13:03:29.029: INFO: Waiting for pod pod-projected-configmaps-3fd675b6-8ec7-43f4-8550-18fbc2f4840c to disappear
May 2 13:03:29.035: INFO: Pod pod-projected-configmaps-3fd675b6-8ec7-43f4-8550-18fbc2f4840c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 13:03:29.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1848" for this suite.
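
Note: projected volumes wrap one or more sources (here a single ConfigMap) under one mount point, so this is functionally close to the plain configMap volume tests earlier in the run. A hedged sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                         # assumed image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/*"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test   # hypothetical ConfigMap
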
May 2 13:03:35.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:03:35.181: INFO: namespace projected-1848 deletion completed in 6.14256171s
• [SLOW TEST:10.303 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:03:35.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May 2 13:03:35.261: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9440,SelfLink:/api/v1/namespaces/watch-9440/configmaps/e2e-watch-test-watch-closed,UID:f3f519f3-d527-4d86-a61c-55d7a55b8944,ResourceVersion:8618983,Generation:0,CreationTimestamp:2020-05-02 13:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 2 13:03:35.261: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9440,SelfLink:/api/v1/namespaces/watch-9440/configmaps/e2e-watch-test-watch-closed,UID:f3f519f3-d527-4d86-a61c-55d7a55b8944,ResourceVersion:8618984,Generation:0,CreationTimestamp:2020-05-02 13:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May 2 13:03:35.279: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9440,SelfLink:/api/v1/namespaces/watch-9440/configmaps/e2e-watch-test-watch-closed,UID:f3f519f3-d527-4d86-a61c-55d7a55b8944,ResourceVersion:8618985,Generation:0,CreationTimestamp:2020-05-02 13:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 2 13:03:35.279: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9440,SelfLink:/api/v1/namespaces/watch-9440/configmaps/e2e-watch-test-watch-closed,UID:f3f519f3-d527-4d86-a61c-55d7a55b8944,ResourceVersion:8618986,Generation:0,CreationTimestamp:2020-05-02 13:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 2 13:03:35.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9440" for this suite.
May 2 13:03:41.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:03:41.392: INFO: namespace watch-9440 deletion completed in 6.092554046s
• [SLOW TEST:6.210 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:03:41.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4804.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4804.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4804.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4804.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 2 13:03:47.476: INFO: DNS probes using dns-test-d064ac42-3789-4b4d-aaa7-87a6d84f3439 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4804.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4804.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4804.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4804.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 2 13:03:53.605: INFO: File jessie_udp@dns-test-service-3.dns-4804.svc.cluster.local from pod dns-4804/dns-test-64e8b562-0560-46f0-a07d-422ee7d04969 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 2 13:03:53.605: INFO: Lookups using dns-4804/dns-test-64e8b562-0560-46f0-a07d-422ee7d04969 failed for: [jessie_udp@dns-test-service-3.dns-4804.svc.cluster.local]
May 2 13:03:58.614: INFO: File jessie_udp@dns-test-service-3.dns-4804.svc.cluster.local from pod dns-4804/dns-test-64e8b562-0560-46f0-a07d-422ee7d04969 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 2 13:03:58.614: INFO: Lookups using dns-4804/dns-test-64e8b562-0560-46f0-a07d-422ee7d04969 failed for: [jessie_udp@dns-test-service-3.dns-4804.svc.cluster.local]
May 2 13:04:03.615: INFO: File jessie_udp@dns-test-service-3.dns-4804.svc.cluster.local from pod dns-4804/dns-test-64e8b562-0560-46f0-a07d-422ee7d04969 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 2 13:04:03.615: INFO: Lookups using dns-4804/dns-test-64e8b562-0560-46f0-a07d-422ee7d04969 failed for: [jessie_udp@dns-test-service-3.dns-4804.svc.cluster.local]
May 2 13:04:08.610: INFO: File wheezy_udp@dns-test-service-3.dns-4804.svc.cluster.local from pod dns-4804/dns-test-64e8b562-0560-46f0-a07d-422ee7d04969 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 2 13:04:08.614: INFO: File jessie_udp@dns-test-service-3.dns-4804.svc.cluster.local from pod dns-4804/dns-test-64e8b562-0560-46f0-a07d-422ee7d04969 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 2 13:04:08.614: INFO: Lookups using dns-4804/dns-test-64e8b562-0560-46f0-a07d-422ee7d04969 failed for: [wheezy_udp@dns-test-service-3.dns-4804.svc.cluster.local jessie_udp@dns-test-service-3.dns-4804.svc.cluster.local] May 2 13:04:13.615: INFO: DNS probes using dns-test-64e8b562-0560-46f0-a07d-422ee7d04969 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4804.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4804.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4804.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4804.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 2 13:04:20.237: INFO: DNS probes using dns-test-36f46455-961f-4808-8894-a08522d4badd succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:04:20.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4804" for this suite. May 2 13:04:28.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:04:28.409: INFO: namespace dns-4804 deletion completed in 8.098571924s • [SLOW TEST:47.017 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:04:28.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 13:04:28.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9443' May 2 13:04:31.130: INFO: stderr: "" May 2 13:04:31.130: INFO: stdout: "replicationcontroller/redis-master created\n" May 2 13:04:31.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9443' May 2 13:04:31.441: INFO: stderr: "" May 2 13:04:31.441: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
May 2 13:04:32.570: INFO: Selector matched 1 pods for map[app:redis] May 2 13:04:32.570: INFO: Found 0 / 1 May 2 13:04:33.446: INFO: Selector matched 1 pods for map[app:redis] May 2 13:04:33.446: INFO: Found 0 / 1 May 2 13:04:34.474: INFO: Selector matched 1 pods for map[app:redis] May 2 13:04:34.474: INFO: Found 0 / 1 May 2 13:04:35.447: INFO: Selector matched 1 pods for map[app:redis] May 2 13:04:35.447: INFO: Found 1 / 1 May 2 13:04:35.447: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 2 13:04:35.450: INFO: Selector matched 1 pods for map[app:redis] May 2 13:04:35.450: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 2 13:04:35.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-27p4t --namespace=kubectl-9443' May 2 13:04:35.566: INFO: stderr: "" May 2 13:04:35.566: INFO: stdout: "Name: redis-master-27p4t\nNamespace: kubectl-9443\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Sat, 02 May 2020 13:04:31 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.37\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://a288b92529a1a878debe40cb4fc6b6afea989a84a1ea8f4d2dd16a54ce426e04\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 02 May 2020 13:04:34 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-v758j (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-v758j:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-v758j\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-9443/redis-master-27p4t to iruya-worker2\n Normal Pulled 3s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n" May 2 13:04:35.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9443' May 2 13:04:35.686: INFO: stderr: "" May 2 13:04:35.686: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9443\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-27p4t\n" May 2 13:04:35.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9443' May 2 13:04:35.788: INFO: stderr: "" May 2 
13:04:35.788: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9443\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.96.202.202\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.37:6379\nSession Affinity: None\nEvents: \n" May 2 13:04:35.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' May 2 13:04:35.925: INFO: stderr: "" May 2 13:04:35.925: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 02 May 2020 13:04:09 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 02 May 2020 13:04:09 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 02 May 2020 13:04:09 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 02 May 2020 13:04:09 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 47d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 47d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 47d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 47d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 47d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 47d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 47d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 2 13:04:35.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
describe namespace kubectl-9443' May 2 13:04:36.034: INFO: stderr: "" May 2 13:04:36.034: INFO: stdout: "Name: kubectl-9443\nLabels: e2e-framework=kubectl\n e2e-run=b8b5c9d6-06db-4cba-9bac-9e0da1d5633f\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:04:36.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9443" for this suite. May 2 13:04:48.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:04:48.152: INFO: namespace kubectl-9443 deletion completed in 12.115102053s • [SLOW TEST:19.743 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:04:48.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin May 2 13:04:48.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1250 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 2 13:04:52.733: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0502 13:04:52.656349 204 log.go:172] (0xc000a50160) (0xc0005e48c0) Create stream\nI0502 13:04:52.656417 204 log.go:172] (0xc000a50160) (0xc0005e48c0) Stream added, broadcasting: 1\nI0502 13:04:52.659860 204 log.go:172] (0xc000a50160) Reply frame received for 1\nI0502 13:04:52.659916 204 log.go:172] (0xc000a50160) (0xc000600000) Create stream\nI0502 13:04:52.659934 204 log.go:172] (0xc000a50160) (0xc000600000) Stream added, broadcasting: 3\nI0502 13:04:52.661052 204 log.go:172] (0xc000a50160) Reply frame received for 3\nI0502 13:04:52.661295 204 log.go:172] (0xc000a50160) (0xc00060a000) Create stream\nI0502 13:04:52.661334 204 log.go:172] (0xc000a50160) (0xc00060a000) Stream added, broadcasting: 5\nI0502 13:04:52.662591 204 log.go:172] (0xc000a50160) Reply frame received for 5\nI0502 13:04:52.662626 204 log.go:172] (0xc000a50160) (0xc0005e4960) Create stream\nI0502 13:04:52.662636 204 log.go:172] (0xc000a50160) (0xc0005e4960) Stream added, broadcasting: 7\nI0502 13:04:52.663604 204 log.go:172] (0xc000a50160) Reply frame received for 7\nI0502 13:04:52.663738 204 log.go:172] (0xc000600000) (3) Writing data frame\nI0502 13:04:52.663816 204 log.go:172] (0xc000600000) (3) Writing data frame\nI0502 13:04:52.664827 204 log.go:172] (0xc000a50160) Data frame received for 5\nI0502 13:04:52.664878 204 log.go:172] (0xc00060a000) (5) Data frame handling\nI0502 13:04:52.664924 204 log.go:172] (0xc00060a000) (5) Data frame sent\nI0502 13:04:52.665970 204 log.go:172] (0xc000a50160) Data frame received for 5\nI0502 13:04:52.665992 204 log.go:172] (0xc00060a000) (5) Data frame handling\nI0502 13:04:52.666008 204 log.go:172] (0xc00060a000) (5) Data frame sent\nI0502 13:04:52.708969 204 log.go:172] (0xc000a50160) Data frame received for 5\nI0502 13:04:52.709012 204 log.go:172] (0xc00060a000) (5) Data frame handling\nI0502 13:04:52.709046 204 log.go:172] (0xc000a50160) Data frame received for 7\nI0502 13:04:52.709072 204 log.go:172] (0xc0005e4960) (7) Data frame handling\nI0502 13:04:52.709872 204 log.go:172] (0xc000a50160) Data frame received for 1\nI0502 13:04:52.709910 204 log.go:172] (0xc0005e48c0) (1) Data frame handling\nI0502 13:04:52.709945 204 log.go:172] (0xc0005e48c0) (1) Data frame sent\nI0502 13:04:52.709980 204 log.go:172] (0xc000a50160) (0xc0005e48c0) Stream removed, broadcasting: 1\nI0502 13:04:52.710090 204 log.go:172] (0xc000a50160) (0xc0005e48c0) Stream removed, broadcasting: 1\nI0502 13:04:52.710120 204 log.go:172] (0xc000a50160) (0xc000600000) Stream removed, broadcasting: 3\nI0502 13:04:52.710150 204 log.go:172] (0xc000a50160) (0xc00060a000) Stream removed, broadcasting: 5\nI0502 13:04:52.710367 204 log.go:172] (0xc000a50160) (0xc0005e4960) Stream removed, broadcasting: 7\nI0502 13:04:52.710467 204 log.go:172] (0xc000a50160) Go away received\n" May 2 13:04:52.733: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:04:54.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1250" for this suite. 
May 2 13:05:00.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:05:00.839: INFO: namespace kubectl-1250 deletion completed in 6.096527936s • [SLOW TEST:12.687 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:05:00.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 2 13:05:00.918: INFO: Waiting up to 5m0s for pod "pod-96b7c754-b10d-4020-a3f6-3e3c72d1d95f" in namespace "emptydir-7037" to be "success or failure" May 2 13:05:00.922: INFO: Pod "pod-96b7c754-b10d-4020-a3f6-3e3c72d1d95f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.771445ms May 2 13:05:02.926: INFO: Pod "pod-96b7c754-b10d-4020-a3f6-3e3c72d1d95f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008445284s May 2 13:05:04.930: INFO: Pod "pod-96b7c754-b10d-4020-a3f6-3e3c72d1d95f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012428865s STEP: Saw pod success May 2 13:05:04.930: INFO: Pod "pod-96b7c754-b10d-4020-a3f6-3e3c72d1d95f" satisfied condition "success or failure" May 2 13:05:04.933: INFO: Trying to get logs from node iruya-worker2 pod pod-96b7c754-b10d-4020-a3f6-3e3c72d1d95f container test-container: STEP: delete the pod May 2 13:05:05.035: INFO: Waiting for pod pod-96b7c754-b10d-4020-a3f6-3e3c72d1d95f to disappear May 2 13:05:05.099: INFO: Pod pod-96b7c754-b10d-4020-a3f6-3e3c72d1d95f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:05:05.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7037" for this suite. 
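A rough hand-run equivalent of the (non-root,0644,default) emptyDir variant just finished — a non-root UID writing a mode-0644 file onto a disk-backed emptyDir — assuming busybox and UID 1001 as stand-ins for the suite's own mounttest image and flags:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-0644
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                   # the "non-root" part of the variant
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a %u' /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                      # "default" medium = node-local storage
  EOF

kubectl logs pod-emptydir-0644 should then report "644 1001".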
May 2 13:05:11.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:05:11.351: INFO: namespace emptydir-7037 deletion completed in 6.246327789s • [SLOW TEST:10.511 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:05:11.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 2 13:05:11.466: INFO: namespace kubectl-1741 May 2 13:05:11.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1741' May 2 13:05:11.864: INFO: stderr: "" May 2 13:05:11.864: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 2 13:05:12.868: INFO: Selector matched 1 pods for map[app:redis] May 2 13:05:12.868: INFO: Found 0 / 1 May 2 13:05:13.868: INFO: Selector matched 1 pods for map[app:redis] May 2 13:05:13.868: INFO: Found 0 / 1 May 2 13:05:14.867: INFO: Selector matched 1 pods for map[app:redis] May 2 13:05:14.867: INFO: Found 0 / 1 May 2 13:05:15.868: INFO: Selector matched 1 pods for map[app:redis] May 2 13:05:15.868: INFO: Found 1 / 1 May 2 13:05:15.868: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 2 13:05:15.872: INFO: Selector matched 1 pods for map[app:redis] May 2 13:05:15.872: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 2 13:05:15.872: INFO: wait on redis-master startup in kubectl-1741 May 2 13:05:15.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p6rhx redis-master --namespace=kubectl-1741' May 2 13:05:15.975: INFO: stderr: "" May 2 13:05:15.975: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 02 May 13:05:14.534 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 May 13:05:14.534 # Server started, Redis version 3.2.12\n1:M 02 May 13:05:14.534 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 May 13:05:14.534 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 2 13:05:15.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1741' May 2 13:05:16.129: INFO: stderr: "" May 2 13:05:16.129: INFO: stdout: "service/rm2 exposed\n" May 2 13:05:16.205: INFO: Service rm2 in namespace kubectl-1741 found. STEP: exposing service May 2 13:05:18.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1741' May 2 13:05:18.353: INFO: stderr: "" May 2 13:05:18.353: INFO: stdout: "service/rm3 exposed\n" May 2 13:05:18.369: INFO: Service rm3 in namespace kubectl-1741 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:05:20.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1741" for this suite. 
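The expose spec reduces to two commands already visible in the log; the assertions are only that each new service exists and routes to the redis pod. Replayed by hand (the endpoint checks are an addition for illustration, not part of the test):

  kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
  kubectl get endpoints rm2   # should show the redis pod IP on port 6379
  kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
  kubectl get endpoints rm3   # same backend, now also reachable via port 2345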
May 2 13:05:44.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:05:44.472: INFO: namespace kubectl-1741 deletion completed in 24.089358377s • [SLOW TEST:33.121 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:05:44.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments May 2 13:05:44.599: INFO: Waiting up to 5m0s for pod "client-containers-fed2f121-7c79-4f95-b3d0-12ecba9d4a97" in namespace "containers-6331" to be "success or failure" May 2 13:05:44.603: INFO: Pod "client-containers-fed2f121-7c79-4f95-b3d0-12ecba9d4a97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324466ms May 2 13:05:46.608: INFO: Pod "client-containers-fed2f121-7c79-4f95-b3d0-12ecba9d4a97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008979253s May 2 13:05:48.612: INFO: Pod "client-containers-fed2f121-7c79-4f95-b3d0-12ecba9d4a97": Phase="Running", Reason="", readiness=true. Elapsed: 4.013197408s May 2 13:05:50.616: INFO: Pod "client-containers-fed2f121-7c79-4f95-b3d0-12ecba9d4a97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01724283s STEP: Saw pod success May 2 13:05:50.616: INFO: Pod "client-containers-fed2f121-7c79-4f95-b3d0-12ecba9d4a97" satisfied condition "success or failure" May 2 13:05:50.620: INFO: Trying to get logs from node iruya-worker2 pod client-containers-fed2f121-7c79-4f95-b3d0-12ecba9d4a97 container test-container: STEP: delete the pod May 2 13:05:50.661: INFO: Waiting for pod client-containers-fed2f121-7c79-4f95-b3d0-12ecba9d4a97 to disappear May 2 13:05:50.669: INFO: Pod client-containers-fed2f121-7c79-4f95-b3d0-12ecba9d4a97 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:05:50.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6331" for this suite. 
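The "override the image's default arguments (docker cmd)" behaviour above is the pod-spec args field: args replaces the image's CMD while leaving any ENTRYPOINT intact. A minimal sketch with busybox standing in for the e2e test image:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-args
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      # busybox has no ENTRYPOINT, so these args become the command that runs
      args: ["echo", "overridden", "args"]
  EOF
  kubectl logs client-containers-args   # prints: overridden args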
May 2 13:05:56.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:05:56.761: INFO: namespace containers-6331 deletion completed in 6.087732746s • [SLOW TEST:12.288 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:05:56.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 13:05:56.840: INFO: Creating ReplicaSet my-hostname-basic-15a878be-aa3b-42ad-be74-49cadb7ca64a May 2 13:05:56.888: INFO: Pod name my-hostname-basic-15a878be-aa3b-42ad-be74-49cadb7ca64a: Found 0 pods out of 1 May 2 13:06:01.893: INFO: Pod name my-hostname-basic-15a878be-aa3b-42ad-be74-49cadb7ca64a: Found 1 pods out of 1 May 2 13:06:01.893: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-15a878be-aa3b-42ad-be74-49cadb7ca64a" is running May 2 13:06:01.897: INFO: Pod "my-hostname-basic-15a878be-aa3b-42ad-be74-49cadb7ca64a-kggf9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 13:05:57 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 13:05:59 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 13:05:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 13:05:56 +0000 UTC Reason: Message:}]) May 2 13:06:01.897: INFO: Trying to dial the pod May 2 13:06:06.909: INFO: Controller my-hostname-basic-15a878be-aa3b-42ad-be74-49cadb7ca64a: Got expected result from replica 1 [my-hostname-basic-15a878be-aa3b-42ad-be74-49cadb7ca64a-kggf9]: "my-hostname-basic-15a878be-aa3b-42ad-be74-49cadb7ca64a-kggf9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:06:06.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7247" for this suite. 
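The ReplicaSet spec is the classic serve-hostname check: each replica answers HTTP on port 9376 with its own pod name, and the test dials every replica until it has heard from all of them. A sketch with a shortened name (the serve-hostname image tag is an assumption based on this suite's vintage):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: my-hostname-basic
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: my-hostname-basic
    template:
      metadata:
        labels:
          name: my-hostname-basic
      spec:
        containers:
        - name: my-hostname-basic
          image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
          ports:
          - containerPort: 9376
  EOF
  POD_IP=$(kubectl get pod -l name=my-hostname-basic -o jsonpath='{.items[0].status.podIP}')
  kubectl run probe --rm -it --restart=Never --image=docker.io/library/busybox:1.29 -- wget -qO- "$POD_IP:9376"

The wget output is the pod's own name, which is what "Got expected result from replica 1" is matching above.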
May 2 13:06:12.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:06:13.106: INFO: namespace replicaset-7247 deletion completed in 6.186671411s • [SLOW TEST:16.345 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:06:13.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 2 13:06:13.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-5224' May 2 13:06:13.273: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 2 13:06:13.273: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 May 2 13:06:17.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5224' May 2 13:06:17.416: INFO: stderr: "" May 2 13:06:17.416: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:06:17.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5224" for this suite. 
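The deprecation warning in this spec's stderr is the headline: --generator=deployment/apps.v1 was removed in later kubectl releases. A non-deprecated equivalent of what the test does (create the deployment, wait for it, delete it):

  kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
  kubectl rollout status deployment/e2e-test-nginx-deployment --timeout=60s
  kubectl delete deployment e2e-test-nginx-deployment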
May 2 13:06:39.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:06:39.548: INFO: namespace kubectl-5224 deletion completed in 22.128106012s • [SLOW TEST:26.441 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:06:39.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium May 2 13:06:39.728: INFO: Waiting up to 5m0s for pod "pod-ef0b8039-9fe8-4bf3-973c-a3755b859307" in namespace "emptydir-5639" to be "success or failure" May 2 13:06:39.736: INFO: Pod "pod-ef0b8039-9fe8-4bf3-973c-a3755b859307": Phase="Pending", Reason="", readiness=false. Elapsed: 8.63953ms May 2 13:06:41.741: INFO: Pod "pod-ef0b8039-9fe8-4bf3-973c-a3755b859307": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012807715s May 2 13:06:43.745: INFO: Pod "pod-ef0b8039-9fe8-4bf3-973c-a3755b859307": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017583649s STEP: Saw pod success May 2 13:06:43.745: INFO: Pod "pod-ef0b8039-9fe8-4bf3-973c-a3755b859307" satisfied condition "success or failure" May 2 13:06:43.748: INFO: Trying to get logs from node iruya-worker pod pod-ef0b8039-9fe8-4bf3-973c-a3755b859307 container test-container: STEP: delete the pod May 2 13:06:43.820: INFO: Waiting for pod pod-ef0b8039-9fe8-4bf3-973c-a3755b859307 to disappear May 2 13:06:44.009: INFO: Pod pod-ef0b8039-9fe8-4bf3-973c-a3755b859307 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:06:44.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5639" for this suite. 
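The "correct mode" this variant asserts is the mode of the emptyDir mount point itself, which kubelet creates world-writable (0777) on the default medium. A probe that prints it, with busybox in place of the suite's mounttest image:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-mode
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "stat -c 'mode of %n: %a' /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}
  EOF
  kubectl logs pod-emptydir-mode   # e.g. "mode of /test-volume: 777"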
May 2 13:06:50.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:06:50.163: INFO: namespace emptydir-5639 deletion completed in 6.149401509s • [SLOW TEST:10.614 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:06:50.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7177 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 2 13:06:50.278: INFO: Found 0 stateful pods, waiting for 3 May 2 13:07:00.283: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 2 13:07:00.283: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 2 13:07:00.283: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 2 13:07:10.284: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 2 13:07:10.284: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 2 13:07:10.284: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 2 13:07:10.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7177 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 13:07:10.553: INFO: stderr: "I0502 13:07:10.433088 353 log.go:172] (0xc0007cea50) (0xc00033a820) Create stream\nI0502 13:07:10.433308 353 log.go:172] (0xc0007cea50) (0xc00033a820) Stream added, broadcasting: 1\nI0502 13:07:10.436160 353 log.go:172] (0xc0007cea50) Reply frame received for 1\nI0502 13:07:10.436210 353 log.go:172] (0xc0007cea50) (0xc00033a8c0) Create stream\nI0502 13:07:10.436227 353 log.go:172] (0xc0007cea50) (0xc00033a8c0) Stream added, broadcasting: 3\nI0502 13:07:10.437405 353 log.go:172] (0xc0007cea50) Reply frame received for 3\nI0502 13:07:10.437450 353 log.go:172] (0xc0007cea50) (0xc0009ba000) Create stream\nI0502 13:07:10.437468 353 log.go:172] (0xc0007cea50) (0xc0009ba000) Stream added, broadcasting: 5\nI0502 13:07:10.438446 353 log.go:172] 
(0xc0007cea50) Reply frame received for 5\nI0502 13:07:10.513510 353 log.go:172] (0xc0007cea50) Data frame received for 5\nI0502 13:07:10.513541 353 log.go:172] (0xc0009ba000) (5) Data frame handling\nI0502 13:07:10.513562 353 log.go:172] (0xc0009ba000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0502 13:07:10.544669 353 log.go:172] (0xc0007cea50) Data frame received for 3\nI0502 13:07:10.544716 353 log.go:172] (0xc00033a8c0) (3) Data frame handling\nI0502 13:07:10.544735 353 log.go:172] (0xc00033a8c0) (3) Data frame sent\nI0502 13:07:10.544798 353 log.go:172] (0xc0007cea50) Data frame received for 5\nI0502 13:07:10.544824 353 log.go:172] (0xc0009ba000) (5) Data frame handling\nI0502 13:07:10.545076 353 log.go:172] (0xc0007cea50) Data frame received for 3\nI0502 13:07:10.545103 353 log.go:172] (0xc00033a8c0) (3) Data frame handling\nI0502 13:07:10.547660 353 log.go:172] (0xc0007cea50) Data frame received for 1\nI0502 13:07:10.547697 353 log.go:172] (0xc00033a820) (1) Data frame handling\nI0502 13:07:10.547719 353 log.go:172] (0xc00033a820) (1) Data frame sent\nI0502 13:07:10.547741 353 log.go:172] (0xc0007cea50) (0xc00033a820) Stream removed, broadcasting: 1\nI0502 13:07:10.547911 353 log.go:172] (0xc0007cea50) Go away received\nI0502 13:07:10.548161 353 log.go:172] (0xc0007cea50) (0xc00033a820) Stream removed, broadcasting: 1\nI0502 13:07:10.548180 353 log.go:172] (0xc0007cea50) (0xc00033a8c0) Stream removed, broadcasting: 3\nI0502 13:07:10.548192 353 log.go:172] (0xc0007cea50) (0xc0009ba000) Stream removed, broadcasting: 5\n" May 2 13:07:10.553: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 13:07:10.553: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 2 13:07:20.586: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 2 13:07:30.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7177 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 13:07:30.871: INFO: stderr: "I0502 13:07:30.778571 373 log.go:172] (0xc0009e8370) (0xc0008c85a0) Create stream\nI0502 13:07:30.778643 373 log.go:172] (0xc0009e8370) (0xc0008c85a0) Stream added, broadcasting: 1\nI0502 13:07:30.781502 373 log.go:172] (0xc0009e8370) Reply frame received for 1\nI0502 13:07:30.781565 373 log.go:172] (0xc0009e8370) (0xc0006e2320) Create stream\nI0502 13:07:30.781583 373 log.go:172] (0xc0009e8370) (0xc0006e2320) Stream added, broadcasting: 3\nI0502 13:07:30.782638 373 log.go:172] (0xc0009e8370) Reply frame received for 3\nI0502 13:07:30.782686 373 log.go:172] (0xc0009e8370) (0xc000a06000) Create stream\nI0502 13:07:30.782699 373 log.go:172] (0xc0009e8370) (0xc000a06000) Stream added, broadcasting: 5\nI0502 13:07:30.783610 373 log.go:172] (0xc0009e8370) Reply frame received for 5\nI0502 13:07:30.864122 373 log.go:172] (0xc0009e8370) Data frame received for 5\nI0502 13:07:30.864155 373 log.go:172] (0xc000a06000) (5) Data frame handling\nI0502 13:07:30.864166 373 log.go:172] (0xc000a06000) (5) Data frame sent\nI0502 13:07:30.864178 373 log.go:172] (0xc0009e8370) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0502 13:07:30.864191 373 log.go:172] (0xc000a06000) (5) Data frame handling\nI0502 
13:07:30.864296 373 log.go:172] (0xc0009e8370) Data frame received for 3\nI0502 13:07:30.864338 373 log.go:172] (0xc0006e2320) (3) Data frame handling\nI0502 13:07:30.864365 373 log.go:172] (0xc0006e2320) (3) Data frame sent\nI0502 13:07:30.864383 373 log.go:172] (0xc0009e8370) Data frame received for 3\nI0502 13:07:30.864398 373 log.go:172] (0xc0006e2320) (3) Data frame handling\nI0502 13:07:30.866244 373 log.go:172] (0xc0009e8370) Data frame received for 1\nI0502 13:07:30.866269 373 log.go:172] (0xc0008c85a0) (1) Data frame handling\nI0502 13:07:30.866282 373 log.go:172] (0xc0008c85a0) (1) Data frame sent\nI0502 13:07:30.866294 373 log.go:172] (0xc0009e8370) (0xc0008c85a0) Stream removed, broadcasting: 1\nI0502 13:07:30.866407 373 log.go:172] (0xc0009e8370) Go away received\nI0502 13:07:30.866623 373 log.go:172] (0xc0009e8370) (0xc0008c85a0) Stream removed, broadcasting: 1\nI0502 13:07:30.866642 373 log.go:172] (0xc0009e8370) (0xc0006e2320) Stream removed, broadcasting: 3\nI0502 13:07:30.866651 373 log.go:172] (0xc0009e8370) (0xc000a06000) Stream removed, broadcasting: 5\n" May 2 13:07:30.871: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 13:07:30.871: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 13:07:40.894: INFO: Waiting for StatefulSet statefulset-7177/ss2 to complete update May 2 13:07:40.894: INFO: Waiting for Pod statefulset-7177/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 2 13:07:40.894: INFO: Waiting for Pod statefulset-7177/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 2 13:07:40.894: INFO: Waiting for Pod statefulset-7177/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 2 13:07:50.902: INFO: Waiting for StatefulSet statefulset-7177/ss2 to complete update May 2 13:07:50.902: INFO: Waiting for Pod statefulset-7177/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 2 13:07:50.902: INFO: Waiting for Pod statefulset-7177/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 2 13:08:00.903: INFO: Waiting for StatefulSet statefulset-7177/ss2 to complete update May 2 13:08:00.903: INFO: Waiting for Pod statefulset-7177/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 2 13:08:10.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7177 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 13:08:11.252: INFO: stderr: "I0502 13:08:11.031775 394 log.go:172] (0xc000116790) (0xc00052c820) Create stream\nI0502 13:08:11.031842 394 log.go:172] (0xc000116790) (0xc00052c820) Stream added, broadcasting: 1\nI0502 13:08:11.034758 394 log.go:172] (0xc000116790) Reply frame received for 1\nI0502 13:08:11.034795 394 log.go:172] (0xc000116790) (0xc000674000) Create stream\nI0502 13:08:11.034812 394 log.go:172] (0xc000116790) (0xc000674000) Stream added, broadcasting: 3\nI0502 13:08:11.035667 394 log.go:172] (0xc000116790) Reply frame received for 3\nI0502 13:08:11.035699 394 log.go:172] (0xc000116790) (0xc00052c8c0) Create stream\nI0502 13:08:11.035708 394 log.go:172] (0xc000116790) (0xc00052c8c0) Stream added, broadcasting: 5\nI0502 13:08:11.036576 394 log.go:172] (0xc000116790) Reply frame received for 5\nI0502 13:08:11.139113 394 log.go:172] (0xc000116790) Data frame received for 5\nI0502 13:08:11.139161 394 
log.go:172] (0xc00052c8c0) (5) Data frame handling\nI0502 13:08:11.139693 394 log.go:172] (0xc00052c8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0502 13:08:11.244601 394 log.go:172] (0xc000116790) Data frame received for 5\nI0502 13:08:11.244711 394 log.go:172] (0xc00052c8c0) (5) Data frame handling\nI0502 13:08:11.244756 394 log.go:172] (0xc000116790) Data frame received for 3\nI0502 13:08:11.244805 394 log.go:172] (0xc000674000) (3) Data frame handling\nI0502 13:08:11.244837 394 log.go:172] (0xc000674000) (3) Data frame sent\nI0502 13:08:11.244855 394 log.go:172] (0xc000116790) Data frame received for 3\nI0502 13:08:11.244866 394 log.go:172] (0xc000674000) (3) Data frame handling\nI0502 13:08:11.247616 394 log.go:172] (0xc000116790) Data frame received for 1\nI0502 13:08:11.247663 394 log.go:172] (0xc00052c820) (1) Data frame handling\nI0502 13:08:11.247681 394 log.go:172] (0xc00052c820) (1) Data frame sent\nI0502 13:08:11.247706 394 log.go:172] (0xc000116790) (0xc00052c820) Stream removed, broadcasting: 1\nI0502 13:08:11.247882 394 log.go:172] (0xc000116790) Go away received\nI0502 13:08:11.248216 394 log.go:172] (0xc000116790) (0xc00052c820) Stream removed, broadcasting: 1\nI0502 13:08:11.248254 394 log.go:172] (0xc000116790) (0xc000674000) Stream removed, broadcasting: 3\nI0502 13:08:11.248295 394 log.go:172] (0xc000116790) (0xc00052c8c0) Stream removed, broadcasting: 5\n" May 2 13:08:11.252: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 13:08:11.252: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 13:08:21.284: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 2 13:08:31.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7177 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 13:08:31.575: INFO: stderr: "I0502 13:08:31.476787 414 log.go:172] (0xc000a2c420) (0xc0006ea6e0) Create stream\nI0502 13:08:31.476880 414 log.go:172] (0xc000a2c420) (0xc0006ea6e0) Stream added, broadcasting: 1\nI0502 13:08:31.480568 414 log.go:172] (0xc000a2c420) Reply frame received for 1\nI0502 13:08:31.480623 414 log.go:172] (0xc000a2c420) (0xc0006ea000) Create stream\nI0502 13:08:31.480646 414 log.go:172] (0xc000a2c420) (0xc0006ea000) Stream added, broadcasting: 3\nI0502 13:08:31.481915 414 log.go:172] (0xc000a2c420) Reply frame received for 3\nI0502 13:08:31.481959 414 log.go:172] (0xc000a2c420) (0xc00066a3c0) Create stream\nI0502 13:08:31.481972 414 log.go:172] (0xc000a2c420) (0xc00066a3c0) Stream added, broadcasting: 5\nI0502 13:08:31.482792 414 log.go:172] (0xc000a2c420) Reply frame received for 5\nI0502 13:08:31.570665 414 log.go:172] (0xc000a2c420) Data frame received for 3\nI0502 13:08:31.570707 414 log.go:172] (0xc0006ea000) (3) Data frame handling\nI0502 13:08:31.570718 414 log.go:172] (0xc0006ea000) (3) Data frame sent\nI0502 13:08:31.570731 414 log.go:172] (0xc000a2c420) Data frame received for 3\nI0502 13:08:31.570738 414 log.go:172] (0xc0006ea000) (3) Data frame handling\nI0502 13:08:31.570751 414 log.go:172] (0xc000a2c420) Data frame received for 5\nI0502 13:08:31.570756 414 log.go:172] (0xc00066a3c0) (5) Data frame handling\nI0502 13:08:31.570762 414 log.go:172] (0xc00066a3c0) (5) Data frame sent\nI0502 13:08:31.570768 414 log.go:172] (0xc000a2c420) Data frame received for 5\nI0502 13:08:31.570777 414 log.go:172] 
(0xc00066a3c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0502 13:08:31.571866 414 log.go:172] (0xc000a2c420) Data frame received for 1\nI0502 13:08:31.571893 414 log.go:172] (0xc0006ea6e0) (1) Data frame handling\nI0502 13:08:31.571909 414 log.go:172] (0xc0006ea6e0) (1) Data frame sent\nI0502 13:08:31.571919 414 log.go:172] (0xc000a2c420) (0xc0006ea6e0) Stream removed, broadcasting: 1\nI0502 13:08:31.571945 414 log.go:172] (0xc000a2c420) Go away received\nI0502 13:08:31.572179 414 log.go:172] (0xc000a2c420) (0xc0006ea6e0) Stream removed, broadcasting: 1\nI0502 13:08:31.572193 414 log.go:172] (0xc000a2c420) (0xc0006ea000) Stream removed, broadcasting: 3\nI0502 13:08:31.572201 414 log.go:172] (0xc000a2c420) (0xc00066a3c0) Stream removed, broadcasting: 5\n" May 2 13:08:31.575: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 13:08:31.575: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 13:09:01.596: INFO: Waiting for StatefulSet statefulset-7177/ss2 to complete update May 2 13:09:01.596: INFO: Waiting for Pod statefulset-7177/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 2 13:09:11.604: INFO: Deleting all statefulset in ns statefulset-7177 May 2 13:09:11.607: INFO: Scaling statefulset ss2 to 0 May 2 13:09:31.631: INFO: Waiting for statefulset status.replicas updated to 0 May 2 13:09:31.635: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:09:31.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7177" for this suite. 
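Stripped of the index.html shuffling (which the test only uses to hold readiness down while it watches the rollout), the update-and-rollback flow above is reproducible with stock kubectl. A sketch, assuming the StatefulSet's container is named nginx as in this suite:

  # roll forward: patch the template image; pods update in reverse ordinal order
  kubectl -n statefulset-7177 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
  kubectl -n statefulset-7177 rollout status statefulset/ss2
  # roll back to the previous controller revision (ss2-6c5cd755cd in this run)
  kubectl -n statefulset-7177 rollout undo statefulset/ss2
  kubectl -n statefulset-7177 get pods -o custom-columns=NAME:.metadata.name,REVISION:.metadata.labels.controller-revision-hash

The revision hashes printed in the last step are the same controller-revision-hash values the test polls for in its "Waiting for Pod ... to have revision" lines.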
May 2 13:09:39.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:09:39.804: INFO: namespace statefulset-7177 deletion completed in 8.117447272s • [SLOW TEST:169.640 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:09:39.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 13:09:39.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c51b61e-4118-47bc-8b56-5644401923f5" in namespace "projected-5966" to be "success or failure" May 2 13:09:39.927: INFO: Pod "downwardapi-volume-5c51b61e-4118-47bc-8b56-5644401923f5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.018235ms May 2 13:09:42.154: INFO: Pod "downwardapi-volume-5c51b61e-4118-47bc-8b56-5644401923f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241989823s May 2 13:09:44.159: INFO: Pod "downwardapi-volume-5c51b61e-4118-47bc-8b56-5644401923f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.246460957s STEP: Saw pod success May 2 13:09:44.159: INFO: Pod "downwardapi-volume-5c51b61e-4118-47bc-8b56-5644401923f5" satisfied condition "success or failure" May 2 13:09:44.161: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5c51b61e-4118-47bc-8b56-5644401923f5 container client-container: STEP: delete the pod May 2 13:09:44.288: INFO: Waiting for pod downwardapi-volume-5c51b61e-4118-47bc-8b56-5644401923f5 to disappear May 2 13:09:44.300: INFO: Pod downwardapi-volume-5c51b61e-4118-47bc-8b56-5644401923f5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:09:44.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5966" for this suite. 
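The "set mode on item file" case above boils down to a projected volume with a downwardAPI source whose item carries an explicit per-item mode. A minimal sketch under those assumptions; the pod name, image, and field choice are illustrative, not taken from this run:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-example   # hypothetical name
  spec:
    containers:
    - name: client-container
      image: busybox:1.29              # illustrative stand-in for the suite's test image
      command: ["/bin/sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
              mode: 0400               # per-item file mode, the property under test
    restartPolicy: Never

The related "should set DefaultMode on files" case that appears later in this run uses projected.defaultMode instead, which applies to every file that does not set an explicit item mode.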
May 2 13:09:50.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:09:50.392: INFO: namespace projected-5966 deletion completed in 6.089401028s • [SLOW TEST:10.588 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:09:50.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 2 13:09:50.462: INFO: Waiting up to 5m0s for pod "pod-f26c3b4f-1f27-46da-bd07-c30ff8a5b221" in namespace "emptydir-9814" to be "success or failure" May 2 13:09:50.466: INFO: Pod "pod-f26c3b4f-1f27-46da-bd07-c30ff8a5b221": Phase="Pending", Reason="", readiness=false. Elapsed: 3.483932ms May 2 13:09:52.484: INFO: Pod "pod-f26c3b4f-1f27-46da-bd07-c30ff8a5b221": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021431225s May 2 13:09:54.488: INFO: Pod "pod-f26c3b4f-1f27-46da-bd07-c30ff8a5b221": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025125313s STEP: Saw pod success May 2 13:09:54.488: INFO: Pod "pod-f26c3b4f-1f27-46da-bd07-c30ff8a5b221" satisfied condition "success or failure" May 2 13:09:54.490: INFO: Trying to get logs from node iruya-worker2 pod pod-f26c3b4f-1f27-46da-bd07-c30ff8a5b221 container test-container: STEP: delete the pod May 2 13:09:54.515: INFO: Waiting for pod pod-f26c3b4f-1f27-46da-bd07-c30ff8a5b221 to disappear May 2 13:09:54.544: INFO: Pod pod-f26c3b4f-1f27-46da-bd07-c30ff8a5b221 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:09:54.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9814" for this suite. 
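The emptyDir cases all follow one pattern: mount an emptyDir volume, create a file with the requested permissions as the requested user, and verify mode and content from inside the container. A sketch of the (non-root,0777,default-medium) variant; the image, UID, and commands are illustrative assumptions, not the suite's actual mount-test image:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-example             # hypothetical name
  spec:
    securityContext:
      runAsUser: 1001                  # non-root, illustrative UID
    containers:
    - name: test-container
      image: busybox:1.29              # illustrative
      command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                     # default medium (node disk); medium: Memory would use tmpfs
    restartPolicy: Never

The (non-root,0666,default) case further down in this run differs only in the requested file mode.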
May 2 13:10:00.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:10:00.644: INFO: namespace emptydir-9814 deletion completed in 6.096682026s • [SLOW TEST:10.251 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:10:00.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 2 13:10:00.714: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 2 13:10:00.727: INFO: Waiting for terminating namespaces to be deleted... May 2 13:10:00.730: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 2 13:10:00.735: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 2 13:10:00.735: INFO: Container kube-proxy ready: true, restart count 0 May 2 13:10:00.735: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 2 13:10:00.735: INFO: Container kindnet-cni ready: true, restart count 0 May 2 13:10:00.735: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 2 13:10:00.740: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 2 13:10:00.740: INFO: Container coredns ready: true, restart count 0 May 2 13:10:00.740: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 2 13:10:00.740: INFO: Container coredns ready: true, restart count 0 May 2 13:10:00.740: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 2 13:10:00.740: INFO: Container kube-proxy ready: true, restart count 0 May 2 13:10:00.740: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 2 13:10:00.740: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160b381f541d67fb], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
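The FailedScheduling event above is exactly what an unsatisfiable nodeSelector produces: the pod stays Pending and the scheduler records the "0/3 nodes are available: 3 node(s) didn't match node selector." message the test asserts on. A minimal way to reproduce it; the label key/value are deliberately chosen to match no node, and the image is an illustrative placeholder:

  apiVersion: v1
  kind: Pod
  metadata:
    name: restricted-pod               # name taken from the event above
  spec:
    nodeSelector:
      example.invalid/nonexistent: "true"   # hypothetical label present on no node
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1      # illustrative minimal image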
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:10:01.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6915" for this suite. May 2 13:10:07.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:10:07.880: INFO: namespace sched-pred-6915 deletion completed in 6.087992333s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.236 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:10:07.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 2 13:10:07.941: INFO: Waiting up to 5m0s for pod "pod-d43809b2-426b-4df4-b2ac-eaf58a22e17b" in namespace "emptydir-7785" to be "success or failure" May 2 13:10:07.960: INFO: Pod "pod-d43809b2-426b-4df4-b2ac-eaf58a22e17b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.040064ms May 2 13:10:09.964: INFO: Pod "pod-d43809b2-426b-4df4-b2ac-eaf58a22e17b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023484214s May 2 13:10:11.968: INFO: Pod "pod-d43809b2-426b-4df4-b2ac-eaf58a22e17b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027328731s STEP: Saw pod success May 2 13:10:11.968: INFO: Pod "pod-d43809b2-426b-4df4-b2ac-eaf58a22e17b" satisfied condition "success or failure" May 2 13:10:11.971: INFO: Trying to get logs from node iruya-worker pod pod-d43809b2-426b-4df4-b2ac-eaf58a22e17b container test-container: STEP: delete the pod May 2 13:10:12.005: INFO: Waiting for pod pod-d43809b2-426b-4df4-b2ac-eaf58a22e17b to disappear May 2 13:10:12.031: INFO: Pod pod-d43809b2-426b-4df4-b2ac-eaf58a22e17b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:10:12.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7785" for this suite. 
May 2 13:10:18.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:10:18.175: INFO: namespace emptydir-7785 deletion completed in 6.140238915s • [SLOW TEST:10.295 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:10:18.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args May 2 13:10:18.260: INFO: Waiting up to 5m0s for pod "var-expansion-902d721b-8fcd-4770-85dc-aa221093009d" in namespace "var-expansion-7975" to be "success or failure" May 2 13:10:18.264: INFO: Pod "var-expansion-902d721b-8fcd-4770-85dc-aa221093009d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.870986ms May 2 13:10:20.268: INFO: Pod "var-expansion-902d721b-8fcd-4770-85dc-aa221093009d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008152941s May 2 13:10:22.273: INFO: Pod "var-expansion-902d721b-8fcd-4770-85dc-aa221093009d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012442775s STEP: Saw pod success May 2 13:10:22.273: INFO: Pod "var-expansion-902d721b-8fcd-4770-85dc-aa221093009d" satisfied condition "success or failure" May 2 13:10:22.276: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-902d721b-8fcd-4770-85dc-aa221093009d container dapi-container: STEP: delete the pod May 2 13:10:22.348: INFO: Waiting for pod var-expansion-902d721b-8fcd-4770-85dc-aa221093009d to disappear May 2 13:10:22.379: INFO: Pod var-expansion-902d721b-8fcd-4770-85dc-aa221093009d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:10:22.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7975" for this suite. 
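The variable-expansion case above relies on the $(VAR) syntax in a container's command/args, which the kubelet substitutes from the container's environment before starting the process. A sketch under illustrative names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-example        # hypothetical name
  spec:
    containers:
    - name: dapi-container
      image: busybox:1.29              # illustrative
      env:
      - name: MESSAGE
        value: "test message"          # hypothetical value
      command: ["echo"]
      args: ["message is: $(MESSAGE)"] # $(MESSAGE) is replaced with the env value
    restartPolicy: Never

A reference that cannot be resolved is left as the literal string $(NAME) rather than failing the pod.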
May 2 13:10:28.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:10:28.487: INFO: namespace var-expansion-7975 deletion completed in 6.103925413s • [SLOW TEST:10.312 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:10:28.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 13:10:28.561: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b5ffd5a-0e89-4502-a98d-37aa9ca7d008" in namespace "projected-3714" to be "success or failure" May 2 13:10:28.577: INFO: Pod "downwardapi-volume-5b5ffd5a-0e89-4502-a98d-37aa9ca7d008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.315053ms May 2 13:10:30.582: INFO: Pod "downwardapi-volume-5b5ffd5a-0e89-4502-a98d-37aa9ca7d008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020248428s May 2 13:10:32.586: INFO: Pod "downwardapi-volume-5b5ffd5a-0e89-4502-a98d-37aa9ca7d008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024643718s STEP: Saw pod success May 2 13:10:32.586: INFO: Pod "downwardapi-volume-5b5ffd5a-0e89-4502-a98d-37aa9ca7d008" satisfied condition "success or failure" May 2 13:10:32.590: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-5b5ffd5a-0e89-4502-a98d-37aa9ca7d008 container client-container: STEP: delete the pod May 2 13:10:32.696: INFO: Waiting for pod downwardapi-volume-5b5ffd5a-0e89-4502-a98d-37aa9ca7d008 to disappear May 2 13:10:32.712: INFO: Pod downwardapi-volume-5b5ffd5a-0e89-4502-a98d-37aa9ca7d008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:10:32.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3714" for this suite. 
May 2 13:10:38.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:10:38.808: INFO: namespace projected-3714 deletion completed in 6.092074424s • [SLOW TEST:10.321 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:10:38.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 13:10:38.877: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bbf26bf-74af-4ab6-ae0a-623458984d0c" in namespace "projected-3006" to be "success or failure" May 2 13:10:38.880: INFO: Pod "downwardapi-volume-4bbf26bf-74af-4ab6-ae0a-623458984d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.845704ms May 2 13:10:40.884: INFO: Pod "downwardapi-volume-4bbf26bf-74af-4ab6-ae0a-623458984d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007635565s May 2 13:10:42.889: INFO: Pod "downwardapi-volume-4bbf26bf-74af-4ab6-ae0a-623458984d0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012292307s STEP: Saw pod success May 2 13:10:42.889: INFO: Pod "downwardapi-volume-4bbf26bf-74af-4ab6-ae0a-623458984d0c" satisfied condition "success or failure" May 2 13:10:42.892: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4bbf26bf-74af-4ab6-ae0a-623458984d0c container client-container: STEP: delete the pod May 2 13:10:42.939: INFO: Waiting for pod downwardapi-volume-4bbf26bf-74af-4ab6-ae0a-623458984d0c to disappear May 2 13:10:42.976: INFO: Pod downwardapi-volume-4bbf26bf-74af-4ab6-ae0a-623458984d0c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:10:42.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3006" for this suite. 
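Exposing a resource request through a downward API volume, as the "container's cpu request" case above does, uses resourceFieldRef instead of fieldRef, with a divisor controlling the unit. A sketch with illustrative names and values:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpu-example      # hypothetical name
  spec:
    containers:
    - name: client-container
      image: busybox:1.29              # illustrative
      command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m                    # illustrative request
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_request
              resourceFieldRef:
                containerName: client-container   # required for volume items
                resource: requests.cpu
                divisor: 1m            # report in millicores: the file contains "250"
    restartPolicy: Never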
May 2 13:10:48.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:10:49.083: INFO: namespace projected-3006 deletion completed in 6.10266071s • [SLOW TEST:10.275 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:10:49.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-3aab0ad1-b12c-4254-bfb4-7f7043c5392b STEP: Creating a pod to test consume secrets May 2 13:10:49.213: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b639013e-ef10-4041-81a3-234e3a70b147" in namespace "projected-425" to be "success or failure" May 2 13:10:49.247: INFO: Pod "pod-projected-secrets-b639013e-ef10-4041-81a3-234e3a70b147": Phase="Pending", Reason="", readiness=false. Elapsed: 34.188725ms May 2 13:10:51.251: INFO: Pod "pod-projected-secrets-b639013e-ef10-4041-81a3-234e3a70b147": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03765833s May 2 13:10:53.342: INFO: Pod "pod-projected-secrets-b639013e-ef10-4041-81a3-234e3a70b147": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129068331s STEP: Saw pod success May 2 13:10:53.342: INFO: Pod "pod-projected-secrets-b639013e-ef10-4041-81a3-234e3a70b147" satisfied condition "success or failure" May 2 13:10:53.346: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-b639013e-ef10-4041-81a3-234e3a70b147 container projected-secret-volume-test: STEP: delete the pod May 2 13:10:53.529: INFO: Waiting for pod pod-projected-secrets-b639013e-ef10-4041-81a3-234e3a70b147 to disappear May 2 13:10:53.589: INFO: Pod pod-projected-secrets-b639013e-ef10-4041-81a3-234e3a70b147 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:10:53.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-425" for this suite. 
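The projected-secret case above combines two things: a key-to-path mapping and an explicit per-item mode. A sketch with hypothetical names and data:

  apiVersion: v1
  kind: Secret
  metadata:
    name: projected-secret-example     # hypothetical name
  stringData:
    data-1: value-1                    # hypothetical key and value
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-example
  spec:
    containers:
    - name: projected-secret-volume-test
      image: busybox:1.29              # illustrative
      command: ["/bin/sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /etc/projected
        readOnly: true
    volumes:
    - name: projected-secret-volume
      projected:
        sources:
        - secret:
            name: projected-secret-example
            items:
            - key: data-1
              path: new-path-data-1    # the "mapping": the key is exposed under this path
              mode: 0400               # the per-item mode under test
    restartPolicy: Never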
May 2 13:10:59.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:10:59.723: INFO: namespace projected-425 deletion completed in 6.129708738s • [SLOW TEST:10.640 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:10:59.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 13:10:59.801: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2b11ae8f-d44c-48f4-af55-73061ae9bb9c" in namespace "downward-api-7900" to be "success or failure" May 2 13:10:59.804: INFO: Pod "downwardapi-volume-2b11ae8f-d44c-48f4-af55-73061ae9bb9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.130972ms May 2 13:11:01.808: INFO: Pod "downwardapi-volume-2b11ae8f-d44c-48f4-af55-73061ae9bb9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006523175s May 2 13:11:03.815: INFO: Pod "downwardapi-volume-2b11ae8f-d44c-48f4-af55-73061ae9bb9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013292317s STEP: Saw pod success May 2 13:11:03.815: INFO: Pod "downwardapi-volume-2b11ae8f-d44c-48f4-af55-73061ae9bb9c" satisfied condition "success or failure" May 2 13:11:03.817: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2b11ae8f-d44c-48f4-af55-73061ae9bb9c container client-container: STEP: delete the pod May 2 13:11:03.836: INFO: Waiting for pod downwardapi-volume-2b11ae8f-d44c-48f4-af55-73061ae9bb9c to disappear May 2 13:11:03.852: INFO: Pod downwardapi-volume-2b11ae8f-d44c-48f4-af55-73061ae9bb9c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:11:03.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7900" for this suite. 
May 2 13:11:09.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:11:09.956: INFO: namespace downward-api-7900 deletion completed in 6.100565416s • [SLOW TEST:10.232 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:11:09.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 2 13:11:10.155: INFO: Waiting up to 5m0s for pod "downward-api-78f75f60-c63c-4fa4-a2b8-8d0cf4c906ff" in namespace "downward-api-5881" to be "success or failure" May 2 13:11:10.163: INFO: Pod "downward-api-78f75f60-c63c-4fa4-a2b8-8d0cf4c906ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067362ms May 2 13:11:12.167: INFO: Pod "downward-api-78f75f60-c63c-4fa4-a2b8-8d0cf4c906ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011670337s May 2 13:11:14.191: INFO: Pod "downward-api-78f75f60-c63c-4fa4-a2b8-8d0cf4c906ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035589818s STEP: Saw pod success May 2 13:11:14.191: INFO: Pod "downward-api-78f75f60-c63c-4fa4-a2b8-8d0cf4c906ff" satisfied condition "success or failure" May 2 13:11:14.193: INFO: Trying to get logs from node iruya-worker pod downward-api-78f75f60-c63c-4fa4-a2b8-8d0cf4c906ff container dapi-container: STEP: delete the pod May 2 13:11:14.212: INFO: Waiting for pod downward-api-78f75f60-c63c-4fa4-a2b8-8d0cf4c906ff to disappear May 2 13:11:14.216: INFO: Pod downward-api-78f75f60-c63c-4fa4-a2b8-8d0cf4c906ff no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:11:14.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5881" for this suite. 
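The env-var flavour of the downward API, exercised by the limits/requests case above, puts valueFrom.resourceFieldRef directly on an environment variable; no volume is involved, and containerName may be omitted because it defaults to the declaring container. A sketch with illustrative values:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-env-example     # hypothetical name
  spec:
    containers:
    - name: dapi-container
      image: busybox:1.29              # illustrative
      command: ["/bin/sh", "-c", "env | grep -E 'CPU|MEMORY'"]
      resources:
        requests:
          cpu: 250m
          memory: 32Mi
        limits:
          cpu: 500m
          memory: 64Mi
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu
      - name: MEMORY_REQUEST
        valueFrom:
          resourceFieldRef:
            resource: requests.memory
    restartPolicy: Never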
May 2 13:11:20.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:11:20.307: INFO: namespace downward-api-5881 deletion completed in 6.086736322s • [SLOW TEST:10.351 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:11:20.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 13:11:20.379: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 2 13:11:20.387: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:20.426: INFO: Number of nodes with available pods: 0 May 2 13:11:20.426: INFO: Node iruya-worker is running more than one daemon pod May 2 13:11:21.468: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:21.472: INFO: Number of nodes with available pods: 0 May 2 13:11:21.472: INFO: Node iruya-worker is running more than one daemon pod May 2 13:11:22.631: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:22.655: INFO: Number of nodes with available pods: 0 May 2 13:11:22.656: INFO: Node iruya-worker is running more than one daemon pod May 2 13:11:23.596: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:23.650: INFO: Number of nodes with available pods: 0 May 2 13:11:23.650: INFO: Node iruya-worker is running more than one daemon pod May 2 13:11:24.431: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:24.434: INFO: Number of nodes with available pods: 0 May 2 13:11:24.434: INFO: Node iruya-worker is running more than one daemon pod May 2 13:11:25.432: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 2 13:11:25.436: INFO: Number of nodes with available pods: 2 May 2 13:11:25.436: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 2 13:11:25.469: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:25.469: INFO: Wrong image for pod: daemon-set-mt77w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:25.483: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:26.488: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:26.488: INFO: Wrong image for pod: daemon-set-mt77w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:26.492: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:27.487: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:27.487: INFO: Wrong image for pod: daemon-set-mt77w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:27.490: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:28.488: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:28.488: INFO: Wrong image for pod: daemon-set-mt77w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:28.492: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:29.488: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:29.488: INFO: Wrong image for pod: daemon-set-mt77w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:29.488: INFO: Pod daemon-set-mt77w is not available May 2 13:11:29.492: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:30.488: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:30.488: INFO: Wrong image for pod: daemon-set-mt77w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 2 13:11:30.488: INFO: Pod daemon-set-mt77w is not available May 2 13:11:30.492: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:31.488: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:31.488: INFO: Wrong image for pod: daemon-set-mt77w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:31.488: INFO: Pod daemon-set-mt77w is not available May 2 13:11:31.492: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:32.488: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:32.488: INFO: Pod daemon-set-x8vbs is not available May 2 13:11:32.492: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:33.487: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:33.487: INFO: Pod daemon-set-x8vbs is not available May 2 13:11:33.490: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:34.487: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:34.487: INFO: Pod daemon-set-x8vbs is not available May 2 13:11:34.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:35.487: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:35.491: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:36.487: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:36.487: INFO: Pod daemon-set-8w2dr is not available May 2 13:11:36.491: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:37.487: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:37.487: INFO: Pod daemon-set-8w2dr is not available May 2 13:11:37.491: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:38.488: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 2 13:11:38.488: INFO: Pod daemon-set-8w2dr is not available May 2 13:11:38.491: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:39.488: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:39.488: INFO: Pod daemon-set-8w2dr is not available May 2 13:11:39.491: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:40.488: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:40.488: INFO: Pod daemon-set-8w2dr is not available May 2 13:11:40.492: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:41.488: INFO: Wrong image for pod: daemon-set-8w2dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 13:11:41.488: INFO: Pod daemon-set-8w2dr is not available May 2 13:11:41.491: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:42.488: INFO: Pod daemon-set-d5cj6 is not available May 2 13:11:42.492: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 2 13:11:42.495: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:42.498: INFO: Number of nodes with available pods: 1 May 2 13:11:42.498: INFO: Node iruya-worker is running more than one daemon pod May 2 13:11:43.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:43.507: INFO: Number of nodes with available pods: 1 May 2 13:11:43.507: INFO: Node iruya-worker is running more than one daemon pod May 2 13:11:44.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:44.516: INFO: Number of nodes with available pods: 1 May 2 13:11:44.516: INFO: Node iruya-worker is running more than one daemon pod May 2 13:11:45.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:11:45.507: INFO: Number of nodes with available pods: 2 May 2 13:11:45.507: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7521, will wait for the garbage collector to delete the pods May 2 13:11:45.582: INFO: Deleting DaemonSet.extensions daemon-set took: 6.666459ms May 2 13:11:45.883: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.303685ms May 2 13:11:52.198: INFO: Number of nodes with available pods: 0 May 2 13:11:52.198: INFO: Number of running nodes: 0, number of available pods: 0 May 2 13:11:52.200: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7521/daemonsets","resourceVersion":"8620980"},"items":null} May 2 13:11:52.202: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7521/pods","resourceVersion":"8620980"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:11:52.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7521" for this suite. 
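The progression in the log above (old pods reported with the wrong image, marked "not available" one at a time, then replaced) is the default RollingUpdate behavior with maxUnavailable set to 1. A sketch of a DaemonSet configured the same way; the labels are illustrative, while the images are the ones named in the log:

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        name: daemon-set               # illustrative label
    updateStrategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1              # one node's pod is replaced at a time, as seen above
    template:
      metadata:
        labels:
          name: daemon-set
      spec:
        containers:
        - name: app
          image: docker.io/library/nginx:1.14-alpine   # initial image in this run

Updating the template's image (for example with kubectl set image, or a patch to spec.template) bumps the template generation; the controller then deletes and recreates pods node by node, producing the "Wrong image for pod" then "is not available" then new-pod sequence recorded above. The run's target image was gcr.io/kubernetes-e2e-test-images/redis:1.0.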
May 2 13:11:58.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:11:58.330: INFO: namespace daemonsets-7521 deletion completed in 6.117435228s • [SLOW TEST:38.023 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:11:58.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-4850 I0502 13:11:58.399124 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4850, replica count: 1 I0502 13:11:59.449636 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0502 13:12:00.449886 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0502 13:12:01.450122 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 2 13:12:01.618: INFO: Created: latency-svc-2jqfm May 2 13:12:01.664: INFO: Got endpoints: latency-svc-2jqfm [114.463748ms] May 2 13:12:01.700: INFO: Created: latency-svc-2czkh May 2 13:12:01.717: INFO: Got endpoints: latency-svc-2czkh [52.201623ms] May 2 13:12:01.786: INFO: Created: latency-svc-rkxvx May 2 13:12:01.813: INFO: Got endpoints: latency-svc-rkxvx [148.483137ms] May 2 13:12:01.839: INFO: Created: latency-svc-k6qpc May 2 13:12:01.849: INFO: Got endpoints: latency-svc-k6qpc [184.514368ms] May 2 13:12:01.874: INFO: Created: latency-svc-7gqms May 2 13:12:01.923: INFO: Got endpoints: latency-svc-7gqms [258.591995ms] May 2 13:12:01.934: INFO: Created: latency-svc-hkmmp May 2 13:12:01.952: INFO: Got endpoints: latency-svc-hkmmp [286.719912ms] May 2 13:12:01.976: INFO: Created: latency-svc-hnwwr May 2 13:12:02.006: INFO: Got endpoints: latency-svc-hnwwr [341.309463ms] May 2 13:12:02.086: INFO: Created: latency-svc-f5zll May 2 13:12:02.089: INFO: Got endpoints: latency-svc-f5zll [424.475417ms] May 2 13:12:02.127: INFO: Created: latency-svc-7gc2v May 2 13:12:02.146: INFO: Got endpoints: latency-svc-7gc2v [481.171716ms] May 2 13:12:02.168: INFO: Created: latency-svc-ls2ht May 2 13:12:02.259: INFO: Got endpoints: latency-svc-ls2ht [593.912598ms] May 2 13:12:02.277: INFO: Created: latency-svc-fxhtg May 2 13:12:02.306: INFO: Got endpoints: latency-svc-fxhtg [641.351703ms] May 2 13:12:02.342: INFO: Created: latency-svc-9pn9f May 2 13:12:02.396: 
INFO: Got endpoints: latency-svc-9pn9f [731.032284ms] May 2 13:12:02.426: INFO: Created: latency-svc-qr9zd May 2 13:12:02.441: INFO: Got endpoints: latency-svc-qr9zd [775.727263ms] May 2 13:12:02.463: INFO: Created: latency-svc-sqffm May 2 13:12:02.477: INFO: Got endpoints: latency-svc-sqffm [812.231015ms] May 2 13:12:02.534: INFO: Created: latency-svc-6j9sg May 2 13:12:02.538: INFO: Got endpoints: latency-svc-6j9sg [873.393182ms] May 2 13:12:02.597: INFO: Created: latency-svc-vqb56 May 2 13:12:02.610: INFO: Got endpoints: latency-svc-vqb56 [945.245304ms] May 2 13:12:02.630: INFO: Created: latency-svc-98mbl May 2 13:12:02.677: INFO: Got endpoints: latency-svc-98mbl [960.405665ms] May 2 13:12:02.684: INFO: Created: latency-svc-ghcxn May 2 13:12:02.710: INFO: Got endpoints: latency-svc-ghcxn [897.248756ms] May 2 13:12:02.743: INFO: Created: latency-svc-vwpgf May 2 13:12:02.773: INFO: Got endpoints: latency-svc-vwpgf [923.875558ms] May 2 13:12:02.834: INFO: Created: latency-svc-h8v99 May 2 13:12:02.852: INFO: Got endpoints: latency-svc-h8v99 [928.593391ms] May 2 13:12:02.889: INFO: Created: latency-svc-62qz8 May 2 13:12:02.924: INFO: Got endpoints: latency-svc-62qz8 [972.151521ms] May 2 13:12:02.996: INFO: Created: latency-svc-6g8r4 May 2 13:12:03.007: INFO: Got endpoints: latency-svc-6g8r4 [1.001087839s] May 2 13:12:03.044: INFO: Created: latency-svc-x7s5b May 2 13:12:03.063: INFO: Got endpoints: latency-svc-x7s5b [973.408731ms] May 2 13:12:03.139: INFO: Created: latency-svc-zmktx May 2 13:12:03.142: INFO: Got endpoints: latency-svc-zmktx [996.472435ms] May 2 13:12:03.218: INFO: Created: latency-svc-wbtr9 May 2 13:12:03.283: INFO: Got endpoints: latency-svc-wbtr9 [1.023786034s] May 2 13:12:03.308: INFO: Created: latency-svc-vgtl2 May 2 13:12:03.321: INFO: Got endpoints: latency-svc-vgtl2 [1.015139595s] May 2 13:12:03.362: INFO: Created: latency-svc-hd4gm May 2 13:12:03.409: INFO: Got endpoints: latency-svc-hd4gm [1.013381595s] May 2 13:12:03.446: INFO: Created: latency-svc-tcdqp May 2 13:12:03.460: INFO: Got endpoints: latency-svc-tcdqp [1.019015013s] May 2 13:12:03.506: INFO: Created: latency-svc-gflmg May 2 13:12:03.546: INFO: Got endpoints: latency-svc-gflmg [1.068780234s] May 2 13:12:03.571: INFO: Created: latency-svc-5v4kd May 2 13:12:03.580: INFO: Got endpoints: latency-svc-5v4kd [1.042457444s] May 2 13:12:03.619: INFO: Created: latency-svc-gh9c6 May 2 13:12:03.708: INFO: Got endpoints: latency-svc-gh9c6 [1.097489416s] May 2 13:12:03.710: INFO: Created: latency-svc-9hdtp May 2 13:12:03.725: INFO: Got endpoints: latency-svc-9hdtp [1.048188383s] May 2 13:12:03.758: INFO: Created: latency-svc-9j2n8 May 2 13:12:03.774: INFO: Got endpoints: latency-svc-9j2n8 [1.063265346s] May 2 13:12:03.799: INFO: Created: latency-svc-nn8k9 May 2 13:12:03.845: INFO: Got endpoints: latency-svc-nn8k9 [1.072265864s] May 2 13:12:03.859: INFO: Created: latency-svc-vq8nt May 2 13:12:03.877: INFO: Got endpoints: latency-svc-vq8nt [1.02476541s] May 2 13:12:03.902: INFO: Created: latency-svc-vsm64 May 2 13:12:03.919: INFO: Got endpoints: latency-svc-vsm64 [995.011196ms] May 2 13:12:03.983: INFO: Created: latency-svc-hfq69 May 2 13:12:03.997: INFO: Got endpoints: latency-svc-hfq69 [989.562189ms] May 2 13:12:04.035: INFO: Created: latency-svc-4z45l May 2 13:12:04.046: INFO: Got endpoints: latency-svc-4z45l [982.908569ms] May 2 13:12:04.076: INFO: Created: latency-svc-7t5c5 May 2 13:12:04.115: INFO: Got endpoints: latency-svc-7t5c5 [972.436458ms] May 2 13:12:04.135: INFO: Created: latency-svc-5wg5n May 2 13:12:04.167: 
INFO: Got endpoints: latency-svc-5wg5n [884.672875ms] May 2 13:12:04.195: INFO: Created: latency-svc-mn5pc May 2 13:12:04.209: INFO: Got endpoints: latency-svc-mn5pc [887.98783ms] May 2 13:12:04.285: INFO: Created: latency-svc-b5rnx May 2 13:12:04.315: INFO: Got endpoints: latency-svc-b5rnx [906.01032ms] May 2 13:12:04.439: INFO: Created: latency-svc-gc9kh May 2 13:12:04.472: INFO: Got endpoints: latency-svc-gc9kh [1.012036515s] May 2 13:12:04.474: INFO: Created: latency-svc-j9zls May 2 13:12:04.486: INFO: Got endpoints: latency-svc-j9zls [939.996446ms] May 2 13:12:04.508: INFO: Created: latency-svc-r9qxc May 2 13:12:04.522: INFO: Got endpoints: latency-svc-r9qxc [941.871857ms] May 2 13:12:04.584: INFO: Created: latency-svc-8sr9w May 2 13:12:04.587: INFO: Got endpoints: latency-svc-8sr9w [879.575402ms] May 2 13:12:04.621: INFO: Created: latency-svc-rz662 May 2 13:12:04.631: INFO: Got endpoints: latency-svc-rz662 [905.423523ms] May 2 13:12:04.658: INFO: Created: latency-svc-xdd67 May 2 13:12:04.674: INFO: Got endpoints: latency-svc-xdd67 [900.10487ms] May 2 13:12:04.732: INFO: Created: latency-svc-np9rm May 2 13:12:04.735: INFO: Got endpoints: latency-svc-np9rm [889.548485ms] May 2 13:12:04.778: INFO: Created: latency-svc-htjxx May 2 13:12:04.801: INFO: Got endpoints: latency-svc-htjxx [924.401855ms] May 2 13:12:04.911: INFO: Created: latency-svc-ljmcl May 2 13:12:04.914: INFO: Got endpoints: latency-svc-ljmcl [995.030603ms] May 2 13:12:04.963: INFO: Created: latency-svc-484kr May 2 13:12:04.981: INFO: Got endpoints: latency-svc-484kr [984.733728ms] May 2 13:12:05.011: INFO: Created: latency-svc-djp54 May 2 13:12:05.091: INFO: Got endpoints: latency-svc-djp54 [1.044973968s] May 2 13:12:05.114: INFO: Created: latency-svc-njk5j May 2 13:12:05.116: INFO: Got endpoints: latency-svc-njk5j [1.001123844s] May 2 13:12:05.155: INFO: Created: latency-svc-22znb May 2 13:12:05.168: INFO: Got endpoints: latency-svc-22znb [1.000918229s] May 2 13:12:05.235: INFO: Created: latency-svc-zmtp4 May 2 13:12:05.238: INFO: Got endpoints: latency-svc-zmtp4 [1.029039176s] May 2 13:12:05.317: INFO: Created: latency-svc-gwfkq May 2 13:12:05.331: INFO: Got endpoints: latency-svc-gwfkq [1.015451498s] May 2 13:12:05.407: INFO: Created: latency-svc-9mf7c May 2 13:12:05.516: INFO: Got endpoints: latency-svc-9mf7c [1.044395809s] May 2 13:12:05.545: INFO: Created: latency-svc-d8ktl May 2 13:12:05.559: INFO: Got endpoints: latency-svc-d8ktl [1.072423046s] May 2 13:12:05.599: INFO: Created: latency-svc-szkvn May 2 13:12:05.649: INFO: Got endpoints: latency-svc-szkvn [1.126791597s] May 2 13:12:05.659: INFO: Created: latency-svc-4nhmj May 2 13:12:05.674: INFO: Got endpoints: latency-svc-4nhmj [1.086262142s] May 2 13:12:05.713: INFO: Created: latency-svc-pldqf May 2 13:12:05.734: INFO: Got endpoints: latency-svc-pldqf [1.102756999s] May 2 13:12:05.779: INFO: Created: latency-svc-6zr9h May 2 13:12:05.802: INFO: Got endpoints: latency-svc-6zr9h [1.128358706s] May 2 13:12:05.832: INFO: Created: latency-svc-dssgq May 2 13:12:05.849: INFO: Got endpoints: latency-svc-dssgq [1.11383459s] May 2 13:12:05.869: INFO: Created: latency-svc-tm5qw May 2 13:12:05.918: INFO: Got endpoints: latency-svc-tm5qw [1.116640643s] May 2 13:12:05.928: INFO: Created: latency-svc-tdtvc May 2 13:12:05.946: INFO: Got endpoints: latency-svc-tdtvc [1.031450866s] May 2 13:12:05.971: INFO: Created: latency-svc-h9d88 May 2 13:12:05.987: INFO: Got endpoints: latency-svc-h9d88 [1.005838928s] May 2 13:12:06.013: INFO: Created: latency-svc-qrrgl May 2 13:12:06.061: 
INFO: Got endpoints: latency-svc-qrrgl [970.085723ms] May 2 13:12:06.078: INFO: Created: latency-svc-thzcd May 2 13:12:06.114: INFO: Got endpoints: latency-svc-thzcd [998.316354ms] May 2 13:12:06.193: INFO: Created: latency-svc-hjttw May 2 13:12:06.204: INFO: Got endpoints: latency-svc-hjttw [1.0357579s] May 2 13:12:06.349: INFO: Created: latency-svc-wpnzr May 2 13:12:06.361: INFO: Got endpoints: latency-svc-wpnzr [1.122542385s] May 2 13:12:06.388: INFO: Created: latency-svc-hbth6 May 2 13:12:06.409: INFO: Got endpoints: latency-svc-hbth6 [1.078042872s] May 2 13:12:06.432: INFO: Created: latency-svc-cpzkx May 2 13:12:06.446: INFO: Got endpoints: latency-svc-cpzkx [929.084369ms] May 2 13:12:06.504: INFO: Created: latency-svc-hn6st May 2 13:12:06.515: INFO: Got endpoints: latency-svc-hn6st [956.227229ms] May 2 13:12:06.547: INFO: Created: latency-svc-ffkdr May 2 13:12:06.560: INFO: Got endpoints: latency-svc-ffkdr [911.052377ms] May 2 13:12:06.583: INFO: Created: latency-svc-dd9b8 May 2 13:12:06.591: INFO: Got endpoints: latency-svc-dd9b8 [916.911971ms] May 2 13:12:06.648: INFO: Created: latency-svc-zsc7p May 2 13:12:06.650: INFO: Got endpoints: latency-svc-zsc7p [916.481909ms] May 2 13:12:06.687: INFO: Created: latency-svc-szmnv May 2 13:12:06.700: INFO: Got endpoints: latency-svc-szmnv [897.490838ms] May 2 13:12:06.720: INFO: Created: latency-svc-hrwsf May 2 13:12:06.797: INFO: Got endpoints: latency-svc-hrwsf [948.4005ms] May 2 13:12:06.804: INFO: Created: latency-svc-tmk29 May 2 13:12:06.820: INFO: Got endpoints: latency-svc-tmk29 [901.860792ms] May 2 13:12:06.863: INFO: Created: latency-svc-nlnhx May 2 13:12:06.870: INFO: Got endpoints: latency-svc-nlnhx [924.376914ms] May 2 13:12:06.936: INFO: Created: latency-svc-vlx9b May 2 13:12:06.965: INFO: Got endpoints: latency-svc-vlx9b [977.371239ms] May 2 13:12:07.008: INFO: Created: latency-svc-bvms4 May 2 13:12:07.031: INFO: Got endpoints: latency-svc-bvms4 [970.512823ms] May 2 13:12:07.091: INFO: Created: latency-svc-69wb5 May 2 13:12:07.134: INFO: Got endpoints: latency-svc-69wb5 [1.019269188s] May 2 13:12:07.176: INFO: Created: latency-svc-64lk8 May 2 13:12:07.234: INFO: Got endpoints: latency-svc-64lk8 [1.029564297s] May 2 13:12:07.254: INFO: Created: latency-svc-2dhss May 2 13:12:07.272: INFO: Got endpoints: latency-svc-2dhss [911.476069ms] May 2 13:12:07.309: INFO: Created: latency-svc-clrgk May 2 13:12:07.325: INFO: Got endpoints: latency-svc-clrgk [916.28731ms] May 2 13:12:07.418: INFO: Created: latency-svc-4krp5 May 2 13:12:07.441: INFO: Got endpoints: latency-svc-4krp5 [995.448137ms] May 2 13:12:07.465: INFO: Created: latency-svc-p9t2q May 2 13:12:07.477: INFO: Got endpoints: latency-svc-p9t2q [962.21407ms] May 2 13:12:07.558: INFO: Created: latency-svc-fhwjz May 2 13:12:07.584: INFO: Got endpoints: latency-svc-fhwjz [1.023687701s] May 2 13:12:07.632: INFO: Created: latency-svc-cnfpc May 2 13:12:07.646: INFO: Got endpoints: latency-svc-cnfpc [1.054880202s] May 2 13:12:07.702: INFO: Created: latency-svc-rcp7l May 2 13:12:07.712: INFO: Got endpoints: latency-svc-rcp7l [1.06153831s] May 2 13:12:07.758: INFO: Created: latency-svc-s8thc May 2 13:12:07.784: INFO: Got endpoints: latency-svc-s8thc [1.084751164s] May 2 13:12:07.848: INFO: Created: latency-svc-pnbfj May 2 13:12:07.871: INFO: Got endpoints: latency-svc-pnbfj [1.074045662s] May 2 13:12:07.872: INFO: Created: latency-svc-lrlwr May 2 13:12:07.881: INFO: Got endpoints: latency-svc-lrlwr [1.061403787s] May 2 13:12:07.902: INFO: Created: latency-svc-tlflv May 2 13:12:07.925: INFO: 
Got endpoints: latency-svc-tlflv [1.054886828s] May 2 13:12:07.983: INFO: Created: latency-svc-qjt5f May 2 13:12:07.986: INFO: Got endpoints: latency-svc-qjt5f [1.021545387s] May 2 13:12:08.021: INFO: Created: latency-svc-bfl6t May 2 13:12:08.039: INFO: Got endpoints: latency-svc-bfl6t [1.007118251s] May 2 13:12:08.063: INFO: Created: latency-svc-jt5tj May 2 13:12:08.081: INFO: Got endpoints: latency-svc-jt5tj [947.450101ms] May 2 13:12:08.133: INFO: Created: latency-svc-dmt55 May 2 13:12:08.142: INFO: Got endpoints: latency-svc-dmt55 [907.987102ms] May 2 13:12:08.172: INFO: Created: latency-svc-rwvsw May 2 13:12:08.183: INFO: Got endpoints: latency-svc-rwvsw [911.034444ms] May 2 13:12:08.231: INFO: Created: latency-svc-wx5br May 2 13:12:08.282: INFO: Got endpoints: latency-svc-wx5br [956.499556ms] May 2 13:12:08.334: INFO: Created: latency-svc-jb47c May 2 13:12:08.358: INFO: Got endpoints: latency-svc-jb47c [917.375498ms] May 2 13:12:08.451: INFO: Created: latency-svc-9cg8v May 2 13:12:08.478: INFO: Got endpoints: latency-svc-9cg8v [1.000704632s] May 2 13:12:08.478: INFO: Created: latency-svc-tjzkp May 2 13:12:08.491: INFO: Got endpoints: latency-svc-tjzkp [906.494763ms] May 2 13:12:08.543: INFO: Created: latency-svc-sl7pf May 2 13:12:08.606: INFO: Got endpoints: latency-svc-sl7pf [960.169619ms] May 2 13:12:08.635: INFO: Created: latency-svc-xqnd7 May 2 13:12:08.648: INFO: Got endpoints: latency-svc-xqnd7 [935.727123ms] May 2 13:12:08.671: INFO: Created: latency-svc-2xjtl May 2 13:12:08.678: INFO: Got endpoints: latency-svc-2xjtl [893.288987ms] May 2 13:12:08.702: INFO: Created: latency-svc-z7chx May 2 13:12:08.750: INFO: Got endpoints: latency-svc-z7chx [878.286178ms] May 2 13:12:08.753: INFO: Created: latency-svc-dzbts May 2 13:12:08.769: INFO: Got endpoints: latency-svc-dzbts [887.619006ms] May 2 13:12:08.790: INFO: Created: latency-svc-wrz65 May 2 13:12:08.819: INFO: Got endpoints: latency-svc-wrz65 [893.610032ms] May 2 13:12:08.849: INFO: Created: latency-svc-ljh25 May 2 13:12:08.911: INFO: Got endpoints: latency-svc-ljh25 [924.497659ms] May 2 13:12:08.958: INFO: Created: latency-svc-ckksb May 2 13:12:08.980: INFO: Got endpoints: latency-svc-ckksb [941.235846ms] May 2 13:12:09.012: INFO: Created: latency-svc-77lhm May 2 13:12:09.103: INFO: Got endpoints: latency-svc-77lhm [1.021509804s] May 2 13:12:09.155: INFO: Created: latency-svc-qgpbb May 2 13:12:09.174: INFO: Got endpoints: latency-svc-qgpbb [1.031483335s] May 2 13:12:09.241: INFO: Created: latency-svc-264bm May 2 13:12:09.280: INFO: Got endpoints: latency-svc-264bm [1.096890451s] May 2 13:12:09.282: INFO: Created: latency-svc-qrtd5 May 2 13:12:09.299: INFO: Got endpoints: latency-svc-qrtd5 [1.017347904s] May 2 13:12:09.390: INFO: Created: latency-svc-6d8s2 May 2 13:12:09.437: INFO: Got endpoints: latency-svc-6d8s2 [1.078771676s] May 2 13:12:09.437: INFO: Created: latency-svc-2gh5h May 2 13:12:09.456: INFO: Got endpoints: latency-svc-2gh5h [978.226214ms] May 2 13:12:09.547: INFO: Created: latency-svc-nr9kq May 2 13:12:09.550: INFO: Got endpoints: latency-svc-nr9kq [1.059594109s] May 2 13:12:09.611: INFO: Created: latency-svc-89wsd May 2 13:12:09.713: INFO: Got endpoints: latency-svc-89wsd [1.107544378s] May 2 13:12:09.716: INFO: Created: latency-svc-pg2mq May 2 13:12:09.726: INFO: Got endpoints: latency-svc-pg2mq [1.078545926s] May 2 13:12:09.750: INFO: Created: latency-svc-n6bth May 2 13:12:09.766: INFO: Got endpoints: latency-svc-n6bth [1.088488278s] May 2 13:12:09.796: INFO: Created: latency-svc-5ttmh May 2 13:12:09.811: 
INFO: Got endpoints: latency-svc-5ttmh [1.061379448s] May 2 13:12:09.863: INFO: Created: latency-svc-xtv99 May 2 13:12:09.872: INFO: Got endpoints: latency-svc-xtv99 [1.102727842s] May 2 13:12:09.898: INFO: Created: latency-svc-njzhr May 2 13:12:09.914: INFO: Got endpoints: latency-svc-njzhr [1.09545174s] May 2 13:12:09.934: INFO: Created: latency-svc-zw8j5 May 2 13:12:09.950: INFO: Got endpoints: latency-svc-zw8j5 [1.039472646s] May 2 13:12:10.004: INFO: Created: latency-svc-9bxkj May 2 13:12:10.004: INFO: Got endpoints: latency-svc-9bxkj [1.024097842s] May 2 13:12:10.061: INFO: Created: latency-svc-jdsm5 May 2 13:12:10.078: INFO: Got endpoints: latency-svc-jdsm5 [975.063295ms] May 2 13:12:10.096: INFO: Created: latency-svc-hxpps May 2 13:12:10.150: INFO: Got endpoints: latency-svc-hxpps [976.810221ms] May 2 13:12:10.162: INFO: Created: latency-svc-vcq2n May 2 13:12:10.181: INFO: Got endpoints: latency-svc-vcq2n [900.516417ms] May 2 13:12:10.223: INFO: Created: latency-svc-dshk9 May 2 13:12:10.294: INFO: Got endpoints: latency-svc-dshk9 [994.852094ms] May 2 13:12:10.326: INFO: Created: latency-svc-7mqqw May 2 13:12:10.360: INFO: Got endpoints: latency-svc-7mqqw [922.348951ms] May 2 13:12:10.432: INFO: Created: latency-svc-zhz8n May 2 13:12:10.462: INFO: Got endpoints: latency-svc-zhz8n [1.005651198s] May 2 13:12:10.463: INFO: Created: latency-svc-2flrk May 2 13:12:10.482: INFO: Got endpoints: latency-svc-2flrk [931.288499ms] May 2 13:12:10.504: INFO: Created: latency-svc-hxfwh May 2 13:12:10.518: INFO: Got endpoints: latency-svc-hxfwh [804.081467ms] May 2 13:12:10.588: INFO: Created: latency-svc-m79px May 2 13:12:10.594: INFO: Got endpoints: latency-svc-m79px [868.039012ms] May 2 13:12:10.637: INFO: Created: latency-svc-789sf May 2 13:12:10.651: INFO: Got endpoints: latency-svc-789sf [884.534073ms] May 2 13:12:10.678: INFO: Created: latency-svc-kx6cf May 2 13:12:10.687: INFO: Got endpoints: latency-svc-kx6cf [876.101252ms] May 2 13:12:10.745: INFO: Created: latency-svc-lvgk6 May 2 13:12:10.747: INFO: Got endpoints: latency-svc-lvgk6 [874.654708ms] May 2 13:12:10.774: INFO: Created: latency-svc-s4h7w May 2 13:12:10.790: INFO: Got endpoints: latency-svc-s4h7w [876.266568ms] May 2 13:12:10.816: INFO: Created: latency-svc-259m4 May 2 13:12:10.893: INFO: Got endpoints: latency-svc-259m4 [942.409543ms] May 2 13:12:10.899: INFO: Created: latency-svc-q92zz May 2 13:12:10.917: INFO: Got endpoints: latency-svc-q92zz [912.972235ms] May 2 13:12:10.972: INFO: Created: latency-svc-kqcmk May 2 13:12:10.983: INFO: Got endpoints: latency-svc-kqcmk [905.115465ms] May 2 13:12:11.050: INFO: Created: latency-svc-4q4f9 May 2 13:12:11.052: INFO: Got endpoints: latency-svc-4q4f9 [901.508429ms] May 2 13:12:11.098: INFO: Created: latency-svc-g6s7w May 2 13:12:11.104: INFO: Got endpoints: latency-svc-g6s7w [922.583862ms] May 2 13:12:11.140: INFO: Created: latency-svc-szx64 May 2 13:12:11.146: INFO: Got endpoints: latency-svc-szx64 [852.085307ms] May 2 13:12:11.206: INFO: Created: latency-svc-4xgdd May 2 13:12:11.212: INFO: Got endpoints: latency-svc-4xgdd [852.753687ms] May 2 13:12:11.255: INFO: Created: latency-svc-mf22q May 2 13:12:11.262: INFO: Got endpoints: latency-svc-mf22q [800.300604ms] May 2 13:12:11.370: INFO: Created: latency-svc-42z9t May 2 13:12:11.370: INFO: Got endpoints: latency-svc-42z9t [888.817157ms] May 2 13:12:11.446: INFO: Created: latency-svc-vv8nt May 2 13:12:11.460: INFO: Got endpoints: latency-svc-vv8nt [942.115127ms] May 2 13:12:11.516: INFO: Created: latency-svc-jpqhz May 2 13:12:11.555: 
INFO: Got endpoints: latency-svc-jpqhz [960.184429ms] May 2 13:12:11.602: INFO: Created: latency-svc-w46ns May 2 13:12:11.696: INFO: Got endpoints: latency-svc-w46ns [1.044676359s] May 2 13:12:11.700: INFO: Created: latency-svc-9m8f5 May 2 13:12:11.706: INFO: Got endpoints: latency-svc-9m8f5 [1.019106646s] May 2 13:12:11.753: INFO: Created: latency-svc-m5kkt May 2 13:12:11.767: INFO: Got endpoints: latency-svc-m5kkt [1.020237465s] May 2 13:12:11.834: INFO: Created: latency-svc-wlqxm May 2 13:12:11.845: INFO: Got endpoints: latency-svc-wlqxm [1.054920241s] May 2 13:12:11.872: INFO: Created: latency-svc-lnxlk May 2 13:12:11.902: INFO: Got endpoints: latency-svc-lnxlk [1.009046475s] May 2 13:12:11.932: INFO: Created: latency-svc-wnzd9 May 2 13:12:11.977: INFO: Got endpoints: latency-svc-wnzd9 [1.059738345s] May 2 13:12:11.991: INFO: Created: latency-svc-r47jw May 2 13:12:12.008: INFO: Got endpoints: latency-svc-r47jw [1.025419506s] May 2 13:12:12.033: INFO: Created: latency-svc-5s9nx May 2 13:12:12.045: INFO: Got endpoints: latency-svc-5s9nx [992.736721ms] May 2 13:12:12.070: INFO: Created: latency-svc-fjt68 May 2 13:12:12.115: INFO: Got endpoints: latency-svc-fjt68 [1.011017937s] May 2 13:12:12.130: INFO: Created: latency-svc-gzmkf May 2 13:12:12.160: INFO: Got endpoints: latency-svc-gzmkf [1.013054732s] May 2 13:12:12.202: INFO: Created: latency-svc-c8kpf May 2 13:12:12.246: INFO: Got endpoints: latency-svc-c8kpf [1.033751247s] May 2 13:12:12.291: INFO: Created: latency-svc-7vqf2 May 2 13:12:12.323: INFO: Got endpoints: latency-svc-7vqf2 [1.060525015s] May 2 13:12:12.405: INFO: Created: latency-svc-bflmm May 2 13:12:12.424: INFO: Got endpoints: latency-svc-bflmm [1.053743092s] May 2 13:12:12.484: INFO: Created: latency-svc-dhhvl May 2 13:12:12.528: INFO: Got endpoints: latency-svc-dhhvl [1.067881697s] May 2 13:12:12.543: INFO: Created: latency-svc-cdjps May 2 13:12:12.557: INFO: Got endpoints: latency-svc-cdjps [1.00248614s] May 2 13:12:12.591: INFO: Created: latency-svc-4qhvt May 2 13:12:12.605: INFO: Got endpoints: latency-svc-4qhvt [909.518952ms] May 2 13:12:12.660: INFO: Created: latency-svc-dfsnr May 2 13:12:12.662: INFO: Got endpoints: latency-svc-dfsnr [955.754908ms] May 2 13:12:12.693: INFO: Created: latency-svc-ckrvw May 2 13:12:12.702: INFO: Got endpoints: latency-svc-ckrvw [935.410634ms] May 2 13:12:12.729: INFO: Created: latency-svc-6j2l8 May 2 13:12:12.759: INFO: Got endpoints: latency-svc-6j2l8 [913.613162ms] May 2 13:12:12.816: INFO: Created: latency-svc-bzstw May 2 13:12:12.818: INFO: Got endpoints: latency-svc-bzstw [916.332465ms] May 2 13:12:12.849: INFO: Created: latency-svc-rhbbl May 2 13:12:12.866: INFO: Got endpoints: latency-svc-rhbbl [888.953259ms] May 2 13:12:12.891: INFO: Created: latency-svc-7bkr2 May 2 13:12:12.959: INFO: Got endpoints: latency-svc-7bkr2 [950.903325ms] May 2 13:12:12.975: INFO: Created: latency-svc-8wn7z May 2 13:12:12.992: INFO: Got endpoints: latency-svc-8wn7z [947.239976ms] May 2 13:12:13.030: INFO: Created: latency-svc-cqv25 May 2 13:12:13.109: INFO: Got endpoints: latency-svc-cqv25 [994.023954ms] May 2 13:12:13.124: INFO: Created: latency-svc-7q6tb May 2 13:12:13.143: INFO: Got endpoints: latency-svc-7q6tb [983.069944ms] May 2 13:12:13.198: INFO: Created: latency-svc-ndnl7 May 2 13:12:13.252: INFO: Got endpoints: latency-svc-ndnl7 [1.006097476s] May 2 13:12:13.287: INFO: Created: latency-svc-wd262 May 2 13:12:13.311: INFO: Got endpoints: latency-svc-wd262 [988.320063ms] May 2 13:12:13.335: INFO: Created: latency-svc-4fbsq May 2 13:12:13.396: 
INFO: Got endpoints: latency-svc-4fbsq [971.770049ms] May 2 13:12:13.424: INFO: Created: latency-svc-dvb96 May 2 13:12:13.450: INFO: Got endpoints: latency-svc-dvb96 [922.228865ms] May 2 13:12:13.491: INFO: Created: latency-svc-p49bq May 2 13:12:13.534: INFO: Got endpoints: latency-svc-p49bq [976.54635ms] May 2 13:12:13.575: INFO: Created: latency-svc-l9k7j May 2 13:12:13.601: INFO: Got endpoints: latency-svc-l9k7j [995.173247ms] May 2 13:12:13.690: INFO: Created: latency-svc-nxrmz May 2 13:12:13.697: INFO: Got endpoints: latency-svc-nxrmz [1.034892428s] May 2 13:12:13.731: INFO: Created: latency-svc-xp4hw May 2 13:12:13.751: INFO: Got endpoints: latency-svc-xp4hw [1.048812558s] May 2 13:12:13.779: INFO: Created: latency-svc-8b868 May 2 13:12:13.827: INFO: Got endpoints: latency-svc-8b868 [1.067826804s] May 2 13:12:13.832: INFO: Created: latency-svc-mz2z6 May 2 13:12:13.848: INFO: Got endpoints: latency-svc-mz2z6 [1.02941463s] May 2 13:12:13.881: INFO: Created: latency-svc-9s7xc May 2 13:12:13.911: INFO: Got endpoints: latency-svc-9s7xc [1.045524111s] May 2 13:12:13.953: INFO: Created: latency-svc-8s8mz May 2 13:12:13.958: INFO: Got endpoints: latency-svc-8s8mz [998.617923ms] May 2 13:12:13.988: INFO: Created: latency-svc-hmj4k May 2 13:12:13.999: INFO: Got endpoints: latency-svc-hmj4k [1.007245695s] May 2 13:12:14.025: INFO: Created: latency-svc-gnwlx May 2 13:12:14.042: INFO: Got endpoints: latency-svc-gnwlx [932.848405ms] May 2 13:12:14.097: INFO: Created: latency-svc-cz7lq May 2 13:12:14.120: INFO: Got endpoints: latency-svc-cz7lq [977.183465ms] May 2 13:12:14.144: INFO: Created: latency-svc-qhqn9 May 2 13:12:14.163: INFO: Got endpoints: latency-svc-qhqn9 [910.094401ms] May 2 13:12:14.187: INFO: Created: latency-svc-wvg4v May 2 13:12:14.258: INFO: Got endpoints: latency-svc-wvg4v [947.304542ms] May 2 13:12:14.277: INFO: Created: latency-svc-5x84v May 2 13:12:14.289: INFO: Got endpoints: latency-svc-5x84v [892.664489ms] May 2 13:12:14.326: INFO: Created: latency-svc-dk8fg May 2 13:12:14.379: INFO: Got endpoints: latency-svc-dk8fg [928.670164ms] May 2 13:12:14.418: INFO: Created: latency-svc-zh7bl May 2 13:12:14.434: INFO: Got endpoints: latency-svc-zh7bl [900.207414ms] May 2 13:12:14.469: INFO: Created: latency-svc-vrkpn May 2 13:12:14.516: INFO: Got endpoints: latency-svc-vrkpn [915.144312ms] May 2 13:12:14.541: INFO: Created: latency-svc-rbtj5 May 2 13:12:14.554: INFO: Got endpoints: latency-svc-rbtj5 [856.876626ms] May 2 13:12:14.583: INFO: Created: latency-svc-qfm22 May 2 13:12:14.713: INFO: Got endpoints: latency-svc-qfm22 [962.261586ms] May 2 13:12:14.745: INFO: Created: latency-svc-w686q May 2 13:12:14.959: INFO: Got endpoints: latency-svc-w686q [1.131853303s] May 2 13:12:14.959: INFO: Latencies: [52.201623ms 148.483137ms 184.514368ms 258.591995ms 286.719912ms 341.309463ms 424.475417ms 481.171716ms 593.912598ms 641.351703ms 731.032284ms 775.727263ms 800.300604ms 804.081467ms 812.231015ms 852.085307ms 852.753687ms 856.876626ms 868.039012ms 873.393182ms 874.654708ms 876.101252ms 876.266568ms 878.286178ms 879.575402ms 884.534073ms 884.672875ms 887.619006ms 887.98783ms 888.817157ms 888.953259ms 889.548485ms 892.664489ms 893.288987ms 893.610032ms 897.248756ms 897.490838ms 900.10487ms 900.207414ms 900.516417ms 901.508429ms 901.860792ms 905.115465ms 905.423523ms 906.01032ms 906.494763ms 907.987102ms 909.518952ms 910.094401ms 911.034444ms 911.052377ms 911.476069ms 912.972235ms 913.613162ms 915.144312ms 916.28731ms 916.332465ms 916.481909ms 916.911971ms 917.375498ms 922.228865ms 
922.348951ms 922.583862ms 923.875558ms 924.376914ms 924.401855ms 924.497659ms 928.593391ms 928.670164ms 929.084369ms 931.288499ms 932.848405ms 935.410634ms 935.727123ms 939.996446ms 941.235846ms 941.871857ms 942.115127ms 942.409543ms 945.245304ms 947.239976ms 947.304542ms 947.450101ms 948.4005ms 950.903325ms 955.754908ms 956.227229ms 956.499556ms 960.169619ms 960.184429ms 960.405665ms 962.21407ms 962.261586ms 970.085723ms 970.512823ms 971.770049ms 972.151521ms 972.436458ms 973.408731ms 975.063295ms 976.54635ms 976.810221ms 977.183465ms 977.371239ms 978.226214ms 982.908569ms 983.069944ms 984.733728ms 988.320063ms 989.562189ms 992.736721ms 994.023954ms 994.852094ms 995.011196ms 995.030603ms 995.173247ms 995.448137ms 996.472435ms 998.316354ms 998.617923ms 1.000704632s 1.000918229s 1.001087839s 1.001123844s 1.00248614s 1.005651198s 1.005838928s 1.006097476s 1.007118251s 1.007245695s 1.009046475s 1.011017937s 1.012036515s 1.013054732s 1.013381595s 1.015139595s 1.015451498s 1.017347904s 1.019015013s 1.019106646s 1.019269188s 1.020237465s 1.021509804s 1.021545387s 1.023687701s 1.023786034s 1.024097842s 1.02476541s 1.025419506s 1.029039176s 1.02941463s 1.029564297s 1.031450866s 1.031483335s 1.033751247s 1.034892428s 1.0357579s 1.039472646s 1.042457444s 1.044395809s 1.044676359s 1.044973968s 1.045524111s 1.048188383s 1.048812558s 1.053743092s 1.054880202s 1.054886828s 1.054920241s 1.059594109s 1.059738345s 1.060525015s 1.061379448s 1.061403787s 1.06153831s 1.063265346s 1.067826804s 1.067881697s 1.068780234s 1.072265864s 1.072423046s 1.074045662s 1.078042872s 1.078545926s 1.078771676s 1.084751164s 1.086262142s 1.088488278s 1.09545174s 1.096890451s 1.097489416s 1.102727842s 1.102756999s 1.107544378s 1.11383459s 1.116640643s 1.122542385s 1.126791597s 1.128358706s 1.131853303s] May 2 13:12:14.959: INFO: 50 %ile: 976.54635ms May 2 13:12:14.959: INFO: 90 %ile: 1.072423046s May 2 13:12:14.959: INFO: 99 %ile: 1.128358706s May 2 13:12:14.959: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:12:14.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4850" for this suite. 
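For reference, the 50/90/99 %ile figures above come from sorting the 200 endpoint-creation latencies and indexing into the sorted list. A minimal Go sketch of that arithmetic, assuming a nearest-rank style lookup (the sample values are illustrative and the framework's exact rounding may differ):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile of a sorted slice of durations,
// using a nearest-rank lookup consistent with the summary lines above.
func percentile(sorted []time.Duration, p int) time.Duration {
	idx := (len(sorted) * p) / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Illustrative samples only; the real test collects 200 of them.
	samples := []time.Duration{
		52 * time.Millisecond,
		148 * time.Millisecond,
		976 * time.Millisecond,
		1072 * time.Millisecond,
		1131 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}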
May 2 13:13:02.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:13:03.068: INFO: namespace svc-latency-4850 deletion completed in 48.09339224s • [SLOW TEST:64.738 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:13:03.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 2 13:13:07.692: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7447885f-b969-430b-8c21-645f6c54ea6a" May 2 13:13:07.692: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7447885f-b969-430b-8c21-645f6c54ea6a" in namespace "pods-8013" to be "terminated due to deadline exceeded" May 2 13:13:07.696: INFO: Pod "pod-update-activedeadlineseconds-7447885f-b969-430b-8c21-645f6c54ea6a": Phase="Running", Reason="", readiness=true. Elapsed: 3.782933ms May 2 13:13:09.700: INFO: Pod "pod-update-activedeadlineseconds-7447885f-b969-430b-8c21-645f6c54ea6a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.008442812s May 2 13:13:09.700: INFO: Pod "pod-update-activedeadlineseconds-7447885f-b969-430b-8c21-645f6c54ea6a" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:13:09.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8013" for this suite. 
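The pod above reaches Phase="Failed", Reason="DeadlineExceeded" because the test updates spec.activeDeadlineSeconds to a small value after the pod starts. A minimal sketch of a pod spec carrying that field, assuming an illustrative image, names, and deadline (not the test's actual manifest):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	deadline := int64(5) // illustrative: kubelet fails the pod after 5s of execution
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds"},
		Spec: corev1.PodSpec{
			ActiveDeadlineSeconds: &deadline,
			Containers: []corev1.Container{{
				Name:    "main",    // illustrative
				Image:   "busybox", // illustrative
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	fmt.Println(*pod.Spec.ActiveDeadlineSeconds)
}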
May 2 13:13:15.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:13:15.824: INFO: namespace pods-8013 deletion completed in 6.118405963s • [SLOW TEST:12.755 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:13:15.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 13:13:15.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-129eabd3-673d-4c18-a8e3-9793feda43c4" in namespace "downward-api-3516" to be "success or failure" May 2 13:13:15.996: INFO: Pod "downwardapi-volume-129eabd3-673d-4c18-a8e3-9793feda43c4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.787802ms May 2 13:13:18.014: INFO: Pod "downwardapi-volume-129eabd3-673d-4c18-a8e3-9793feda43c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034219952s May 2 13:13:20.018: INFO: Pod "downwardapi-volume-129eabd3-673d-4c18-a8e3-9793feda43c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038477908s STEP: Saw pod success May 2 13:13:20.018: INFO: Pod "downwardapi-volume-129eabd3-673d-4c18-a8e3-9793feda43c4" satisfied condition "success or failure" May 2 13:13:20.022: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-129eabd3-673d-4c18-a8e3-9793feda43c4 container client-container: STEP: delete the pod May 2 13:13:20.052: INFO: Waiting for pod downwardapi-volume-129eabd3-673d-4c18-a8e3-9793feda43c4 to disappear May 2 13:13:20.068: INFO: Pod downwardapi-volume-129eabd3-673d-4c18-a8e3-9793feda43c4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:13:20.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3516" for this suite. 
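The downward API volume read back here exposes metadata.name as a file inside the container. A minimal sketch of such a volume, with an assumed volume name and file path (the test's actual values are not shown in the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo", // illustrative volume name
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname", // file the client container reads back
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	fmt.Println(vol.VolumeSource.DownwardAPI.Items[0].FieldRef.FieldPath)
}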
May 2 13:13:26.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:13:26.162: INFO: namespace downward-api-3516 deletion completed in 6.090326779s • [SLOW TEST:10.338 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:13:26.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-2f9f326c-da09-4b31-897c-9ca4652040a1 STEP: Creating a pod to test consume configMaps May 2 13:13:26.232: INFO: Waiting up to 5m0s for pod "pod-configmaps-fba81b6e-22c8-4e90-86ae-22d7f6e26b2e" in namespace "configmap-2089" to be "success or failure" May 2 13:13:26.236: INFO: Pod "pod-configmaps-fba81b6e-22c8-4e90-86ae-22d7f6e26b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.708468ms May 2 13:13:28.240: INFO: Pod "pod-configmaps-fba81b6e-22c8-4e90-86ae-22d7f6e26b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007498623s May 2 13:13:30.244: INFO: Pod "pod-configmaps-fba81b6e-22c8-4e90-86ae-22d7f6e26b2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012028222s STEP: Saw pod success May 2 13:13:30.244: INFO: Pod "pod-configmaps-fba81b6e-22c8-4e90-86ae-22d7f6e26b2e" satisfied condition "success or failure" May 2 13:13:30.247: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-fba81b6e-22c8-4e90-86ae-22d7f6e26b2e container configmap-volume-test: STEP: delete the pod May 2 13:13:30.274: INFO: Waiting for pod pod-configmaps-fba81b6e-22c8-4e90-86ae-22d7f6e26b2e to disappear May 2 13:13:30.278: INFO: Pod pod-configmaps-fba81b6e-22c8-4e90-86ae-22d7f6e26b2e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:13:30.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2089" for this suite. 
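This test mounts one ConfigMap through two separate volumes of the same pod. A minimal sketch of that wiring, with illustrative volume and ConfigMap names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	cmSource := func() corev1.VolumeSource {
		return corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-volume", // illustrative
				},
			},
		}
	}
	// The same ConfigMap backs two distinct volumes in one pod spec.
	volumes := []corev1.Volume{
		{Name: "configmap-volume-1", VolumeSource: cmSource()},
		{Name: "configmap-volume-2", VolumeSource: cmSource()},
	}
	fmt.Println(len(volumes))
}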
May 2 13:13:36.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:13:36.370: INFO: namespace configmap-2089 deletion completed in 6.089134522s • [SLOW TEST:10.208 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:13:36.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all May 2 13:13:36.495: INFO: Waiting up to 5m0s for pod "client-containers-61321b1d-a21f-4d08-aa7b-c3cc24d11178" in namespace "containers-4375" to be "success or failure" May 2 13:13:36.506: INFO: Pod "client-containers-61321b1d-a21f-4d08-aa7b-c3cc24d11178": Phase="Pending", Reason="", readiness=false. Elapsed: 10.312963ms May 2 13:13:38.510: INFO: Pod "client-containers-61321b1d-a21f-4d08-aa7b-c3cc24d11178": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014403923s May 2 13:13:40.515: INFO: Pod "client-containers-61321b1d-a21f-4d08-aa7b-c3cc24d11178": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019256033s STEP: Saw pod success May 2 13:13:40.515: INFO: Pod "client-containers-61321b1d-a21f-4d08-aa7b-c3cc24d11178" satisfied condition "success or failure" May 2 13:13:40.518: INFO: Trying to get logs from node iruya-worker2 pod client-containers-61321b1d-a21f-4d08-aa7b-c3cc24d11178 container test-container: STEP: delete the pod May 2 13:13:40.554: INFO: Waiting for pod client-containers-61321b1d-a21f-4d08-aa7b-c3cc24d11178 to disappear May 2 13:13:40.572: INFO: Pod client-containers-61321b1d-a21f-4d08-aa7b-c3cc24d11178 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:13:40.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4375" for this suite. 
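The "override all" pod above sets both command and args, which replace the image's ENTRYPOINT and CMD respectively. A minimal sketch, with an assumed image and command line:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "test-container",
		Image:   "busybox",                        // illustrative; any image with defaults
		Command: []string{"/bin/sh"},              // replaces the image ENTRYPOINT
		Args:    []string{"-c", "echo override"},  // replaces the image CMD
	}
	fmt.Println(c.Command, c.Args)
}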
May 2 13:13:46.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:13:46.699: INFO: namespace containers-4375 deletion completed in 6.123537269s • [SLOW TEST:10.328 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:13:46.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-a6c68d3e-4c77-40f5-8fba-6bbf7f8cfeea STEP: Creating a pod to test consume secrets May 2 13:13:46.765: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-548794db-163f-4dbe-8daa-ae5762c4fb9f" in namespace "projected-362" to be "success or failure" May 2 13:13:46.805: INFO: Pod "pod-projected-secrets-548794db-163f-4dbe-8daa-ae5762c4fb9f": Phase="Pending", Reason="", readiness=false. Elapsed: 40.144905ms May 2 13:13:48.841: INFO: Pod "pod-projected-secrets-548794db-163f-4dbe-8daa-ae5762c4fb9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076045458s May 2 13:13:50.845: INFO: Pod "pod-projected-secrets-548794db-163f-4dbe-8daa-ae5762c4fb9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079828674s STEP: Saw pod success May 2 13:13:50.845: INFO: Pod "pod-projected-secrets-548794db-163f-4dbe-8daa-ae5762c4fb9f" satisfied condition "success or failure" May 2 13:13:50.847: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-548794db-163f-4dbe-8daa-ae5762c4fb9f container projected-secret-volume-test: STEP: delete the pod May 2 13:13:50.911: INFO: Waiting for pod pod-projected-secrets-548794db-163f-4dbe-8daa-ae5762c4fb9f to disappear May 2 13:13:50.967: INFO: Pod pod-projected-secrets-548794db-163f-4dbe-8daa-ae5762c4fb9f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:13:50.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-362" for this suite. 
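Projected secret volumes wrap a Secret in a projected volume source rather than mounting it directly. A minimal sketch, with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-secret-volume", // illustrative
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-secret-test", // illustrative
						},
					},
				}},
			},
		},
	}
	fmt.Println(vol.VolumeSource.Projected.Sources[0].Secret.Name)
}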
May 2 13:13:57.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:13:57.078: INFO: namespace projected-362 deletion completed in 6.10762324s • [SLOW TEST:10.379 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:13:57.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-cd12bae6-31aa-47dc-a48e-db0909c1f53c STEP: Creating a pod to test consume secrets May 2 13:13:57.190: INFO: Waiting up to 5m0s for pod "pod-secrets-42db0d7a-41b7-40a7-8d50-0d5ced8db7e9" in namespace "secrets-6415" to be "success or failure" May 2 13:13:57.210: INFO: Pod "pod-secrets-42db0d7a-41b7-40a7-8d50-0d5ced8db7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.022976ms May 2 13:13:59.266: INFO: Pod "pod-secrets-42db0d7a-41b7-40a7-8d50-0d5ced8db7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076034763s May 2 13:14:01.270: INFO: Pod "pod-secrets-42db0d7a-41b7-40a7-8d50-0d5ced8db7e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079326569s STEP: Saw pod success May 2 13:14:01.270: INFO: Pod "pod-secrets-42db0d7a-41b7-40a7-8d50-0d5ced8db7e9" satisfied condition "success or failure" May 2 13:14:01.272: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-42db0d7a-41b7-40a7-8d50-0d5ced8db7e9 container secret-env-test: STEP: delete the pod May 2 13:14:01.328: INFO: Waiting for pod pod-secrets-42db0d7a-41b7-40a7-8d50-0d5ced8db7e9 to disappear May 2 13:14:01.446: INFO: Pod pod-secrets-42db0d7a-41b7-40a7-8d50-0d5ced8db7e9 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:14:01.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6415" for this suite. 
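Here the Secret is consumed through an environment variable rather than a volume. A minimal sketch of that env wiring, with assumed variable and key names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := corev1.EnvVar{
		Name: "SECRET_DATA", // illustrative variable name
		ValueFrom: &corev1.EnvVarSource{
			SecretKeyRef: &corev1.SecretKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"}, // illustrative
				Key:                  "data-1",                                         // illustrative key
			},
		},
	}
	fmt.Println(env.Name, env.ValueFrom.SecretKeyRef.Key)
}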
May 2 13:14:07.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:14:07.606: INFO: namespace secrets-6415 deletion completed in 6.156542196s • [SLOW TEST:10.527 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:14:07.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-d3c70bd5-ae34-49d6-8346-5e4cf6b8ba9a STEP: Creating a pod to test consume configMaps May 2 13:14:07.748: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5d278ebc-87ae-4759-82e7-34b1a6ddb244" in namespace "projected-2219" to be "success or failure" May 2 13:14:07.753: INFO: Pod "pod-projected-configmaps-5d278ebc-87ae-4759-82e7-34b1a6ddb244": Phase="Pending", Reason="", readiness=false. Elapsed: 4.628821ms May 2 13:14:09.757: INFO: Pod "pod-projected-configmaps-5d278ebc-87ae-4759-82e7-34b1a6ddb244": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008499642s May 2 13:14:11.761: INFO: Pod "pod-projected-configmaps-5d278ebc-87ae-4759-82e7-34b1a6ddb244": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012824977s STEP: Saw pod success May 2 13:14:11.761: INFO: Pod "pod-projected-configmaps-5d278ebc-87ae-4759-82e7-34b1a6ddb244" satisfied condition "success or failure" May 2 13:14:11.764: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-5d278ebc-87ae-4759-82e7-34b1a6ddb244 container projected-configmap-volume-test: STEP: delete the pod May 2 13:14:11.904: INFO: Waiting for pod pod-projected-configmaps-5d278ebc-87ae-4759-82e7-34b1a6ddb244 to disappear May 2 13:14:11.938: INFO: Pod pod-projected-configmaps-5d278ebc-87ae-4759-82e7-34b1a6ddb244 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:14:11.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2219" for this suite. 
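The defaultMode variant sets the file permission bits applied to every projected file. A minimal sketch, assuming mode 0400 (the log does not show the mode the test actually asserts):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // assumed mode; owner-readable only
	vol := corev1.Volume{
		Name: "projected-configmap-volume", // illustrative
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test", // illustrative
						},
					},
				}},
			},
		},
	}
	fmt.Printf("%#o\n", *vol.VolumeSource.Projected.DefaultMode)
}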
May 2 13:14:17.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:14:18.062: INFO: namespace projected-2219 deletion completed in 6.121022223s • [SLOW TEST:10.456 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:14:18.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0502 13:14:28.185066 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 2 13:14:28.185: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:14:28.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4165" for this suite. 
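"Not orphaning" means the ReplicationController is deleted with a propagation policy that lets the garbage collector remove its pods too. A minimal sketch of the delete options involved (the e2e test drives this through the API; the exact call site differs):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Background propagation lets the garbage collector delete the RC's pods,
	// unlike metav1.DeletePropagationOrphan, which would leave them running.
	policy := metav1.DeletePropagationBackground
	opts := &metav1.DeleteOptions{PropagationPolicy: &policy}
	fmt.Println(*opts.PropagationPolicy)
}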
May 2 13:14:34.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:14:34.281: INFO: namespace gc-4165 deletion completed in 6.09261331s • [SLOW TEST:16.219 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:14:34.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:14:34.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6957" for this suite. May 2 13:14:40.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:14:40.461: INFO: namespace services-6957 deletion completed in 6.101241001s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.180 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:14:40.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 2 13:14:41.340: INFO: Pod name wrapped-volume-race-3783f7bb-f26f-4325-a9e5-28506fd70976: Found 0 pods out of 5 May 2 13:14:46.348: INFO: Pod name wrapped-volume-race-3783f7bb-f26f-4325-a9e5-28506fd70976: Found 5 pods out of 5 STEP: Ensuring each pod is 
running STEP: deleting ReplicationController wrapped-volume-race-3783f7bb-f26f-4325-a9e5-28506fd70976 in namespace emptydir-wrapper-90, will wait for the garbage collector to delete the pods May 2 13:15:02.456: INFO: Deleting ReplicationController wrapped-volume-race-3783f7bb-f26f-4325-a9e5-28506fd70976 took: 22.435627ms May 2 13:15:02.756: INFO: Terminating ReplicationController wrapped-volume-race-3783f7bb-f26f-4325-a9e5-28506fd70976 pods took: 300.224692ms STEP: Creating RC which spawns configmap-volume pods May 2 13:15:52.765: INFO: Pod name wrapped-volume-race-4dc42807-0edc-493f-9916-a35d40361828: Found 0 pods out of 5 May 2 13:15:57.775: INFO: Pod name wrapped-volume-race-4dc42807-0edc-493f-9916-a35d40361828: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4dc42807-0edc-493f-9916-a35d40361828 in namespace emptydir-wrapper-90, will wait for the garbage collector to delete the pods May 2 13:16:11.883: INFO: Deleting ReplicationController wrapped-volume-race-4dc42807-0edc-493f-9916-a35d40361828 took: 7.804346ms May 2 13:16:12.184: INFO: Terminating ReplicationController wrapped-volume-race-4dc42807-0edc-493f-9916-a35d40361828 pods took: 300.256099ms STEP: Creating RC which spawns configmap-volume pods May 2 13:16:52.432: INFO: Pod name wrapped-volume-race-219e4b8d-c0f7-4add-bf8e-4c2efed291cf: Found 0 pods out of 5 May 2 13:16:57.456: INFO: Pod name wrapped-volume-race-219e4b8d-c0f7-4add-bf8e-4c2efed291cf: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-219e4b8d-c0f7-4add-bf8e-4c2efed291cf in namespace emptydir-wrapper-90, will wait for the garbage collector to delete the pods May 2 13:17:11.536: INFO: Deleting ReplicationController wrapped-volume-race-219e4b8d-c0f7-4add-bf8e-4c2efed291cf took: 7.164058ms May 2 13:17:11.837: INFO: Terminating ReplicationController wrapped-volume-race-219e4b8d-c0f7-4add-bf8e-4c2efed291cf pods took: 300.276853ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:17:53.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-90" for this suite. 
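Each RC iteration above spawns pods that mount all 50 ConfigMaps at once, which is what exercises the wrapper-volume race. A minimal sketch of building such a volume list, with illustrative naming (the test's generated names differ):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One volume per ConfigMap; the real test wires all 50 into each pod
	// spawned by the ReplicationController.
	var volumes []corev1.Volume
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i) // illustrative naming
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
	}
	fmt.Println(len(volumes))
}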
May 2 13:18:01.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:18:01.254: INFO: namespace emptydir-wrapper-90 deletion completed in 8.086951015s • [SLOW TEST:200.791 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:18:01.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy May 2 13:18:01.301: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix108697685/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:18:01.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2349" for this suite. 
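"Retrieving proxy /api/ output" means issuing a plain HTTP request over the unix socket the proxy listens on. A minimal Go sketch of such a client, assuming an illustrative socket path (the test uses a random temp dir, as the log shows):

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	socket := "/tmp/kubectl-proxy-unix/test" // illustrative path
	client := &http.Client{
		Transport: &http.Transport{
			// Route every request over the proxy's unix socket instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return net.Dial("unix", socket)
			},
		},
	}
	resp, err := client.Get("http://localhost/api/") // host is ignored by the dialer
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}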
May 2 13:18:07.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:18:07.510: INFO: namespace kubectl-2349 deletion completed in 6.138649336s • [SLOW TEST:6.256 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:18:07.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 13:18:07.564: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7677fd00-4a70-463d-bf4b-e5c6a1b46aad" in namespace "projected-6859" to be "success or failure" May 2 13:18:07.635: INFO: Pod "downwardapi-volume-7677fd00-4a70-463d-bf4b-e5c6a1b46aad": Phase="Pending", Reason="", readiness=false. Elapsed: 71.682868ms May 2 13:18:09.665: INFO: Pod "downwardapi-volume-7677fd00-4a70-463d-bf4b-e5c6a1b46aad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101512077s May 2 13:18:11.669: INFO: Pod "downwardapi-volume-7677fd00-4a70-463d-bf4b-e5c6a1b46aad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105601211s STEP: Saw pod success May 2 13:18:11.669: INFO: Pod "downwardapi-volume-7677fd00-4a70-463d-bf4b-e5c6a1b46aad" satisfied condition "success or failure" May 2 13:18:11.672: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7677fd00-4a70-463d-bf4b-e5c6a1b46aad container client-container: STEP: delete the pod May 2 13:18:11.801: INFO: Waiting for pod downwardapi-volume-7677fd00-4a70-463d-bf4b-e5c6a1b46aad to disappear May 2 13:18:11.868: INFO: Pod downwardapi-volume-7677fd00-4a70-463d-bf4b-e5c6a1b46aad no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:18:11.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6859" for this suite. 
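The cpu-limit variant of the downward API volume uses a resourceFieldRef with a divisor instead of a fieldRef. A minimal sketch, assuming the file name and a 1m divisor (the log only shows the container name):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	file := corev1.DownwardAPIVolumeFile{
		Path: "cpu_limit", // illustrative file name
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",       // matches the log's container name
			Resource:      "limits.cpu",
			Divisor:       resource.MustParse("1m"), // assumed: report the limit in millicores
		},
	}
	fmt.Println(file.ResourceFieldRef.Resource)
}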
May 2 13:18:17.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:18:18.001: INFO: namespace projected-6859 deletion completed in 6.129018961s • [SLOW TEST:10.491 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:18:18.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4362.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4362.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4362.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4362.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4362.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4362.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 2 13:18:24.151: INFO: DNS probes using dns-4362/dns-test-ef61a8f4-3a2a-43cf-a7d2-d94890a27514 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:18:24.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4362" for this suite. 
May 2 13:18:30.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:18:30.340: INFO: namespace dns-4362 deletion completed in 6.132369025s • [SLOW TEST:12.338 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:18:30.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 2 13:18:30.425: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2740,SelfLink:/api/v1/namespaces/watch-2740/configmaps/e2e-watch-test-configmap-a,UID:5fb03e6b-b6de-439e-a852-f5785079a01c,ResourceVersion:8624173,Generation:0,CreationTimestamp:2020-05-02 13:18:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 2 13:18:30.426: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2740,SelfLink:/api/v1/namespaces/watch-2740/configmaps/e2e-watch-test-configmap-a,UID:5fb03e6b-b6de-439e-a852-f5785079a01c,ResourceVersion:8624173,Generation:0,CreationTimestamp:2020-05-02 13:18:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 2 13:18:40.433: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2740,SelfLink:/api/v1/namespaces/watch-2740/configmaps/e2e-watch-test-configmap-a,UID:5fb03e6b-b6de-439e-a852-f5785079a01c,ResourceVersion:8624192,Generation:0,CreationTimestamp:2020-05-02 13:18:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 2 13:18:40.434: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2740,SelfLink:/api/v1/namespaces/watch-2740/configmaps/e2e-watch-test-configmap-a,UID:5fb03e6b-b6de-439e-a852-f5785079a01c,ResourceVersion:8624192,Generation:0,CreationTimestamp:2020-05-02 13:18:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 2 13:18:50.440: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2740,SelfLink:/api/v1/namespaces/watch-2740/configmaps/e2e-watch-test-configmap-a,UID:5fb03e6b-b6de-439e-a852-f5785079a01c,ResourceVersion:8624213,Generation:0,CreationTimestamp:2020-05-02 13:18:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 2 13:18:50.440: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2740,SelfLink:/api/v1/namespaces/watch-2740/configmaps/e2e-watch-test-configmap-a,UID:5fb03e6b-b6de-439e-a852-f5785079a01c,ResourceVersion:8624213,Generation:0,CreationTimestamp:2020-05-02 13:18:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 2 13:19:00.447: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2740,SelfLink:/api/v1/namespaces/watch-2740/configmaps/e2e-watch-test-configmap-a,UID:5fb03e6b-b6de-439e-a852-f5785079a01c,ResourceVersion:8624233,Generation:0,CreationTimestamp:2020-05-02 13:18:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 2 13:19:00.447: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2740,SelfLink:/api/v1/namespaces/watch-2740/configmaps/e2e-watch-test-configmap-a,UID:5fb03e6b-b6de-439e-a852-f5785079a01c,ResourceVersion:8624233,Generation:0,CreationTimestamp:2020-05-02 13:18:30 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 2 13:19:10.455: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2740,SelfLink:/api/v1/namespaces/watch-2740/configmaps/e2e-watch-test-configmap-b,UID:88f04672-7b31-4fe2-b8fb-81be31f1086f,ResourceVersion:8624254,Generation:0,CreationTimestamp:2020-05-02 13:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 2 13:19:10.455: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2740,SelfLink:/api/v1/namespaces/watch-2740/configmaps/e2e-watch-test-configmap-b,UID:88f04672-7b31-4fe2-b8fb-81be31f1086f,ResourceVersion:8624254,Generation:0,CreationTimestamp:2020-05-02 13:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 2 13:19:20.462: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2740,SelfLink:/api/v1/namespaces/watch-2740/configmaps/e2e-watch-test-configmap-b,UID:88f04672-7b31-4fe2-b8fb-81be31f1086f,ResourceVersion:8624274,Generation:0,CreationTimestamp:2020-05-02 13:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 2 13:19:20.462: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2740,SelfLink:/api/v1/namespaces/watch-2740/configmaps/e2e-watch-test-configmap-b,UID:88f04672-7b31-4fe2-b8fb-81be31f1086f,ResourceVersion:8624274,Generation:0,CreationTimestamp:2020-05-02 13:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:19:30.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2740" for this suite. 
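[For reference, the same ADDED/MODIFIED/DELETED sequence can be driven from the command line; a minimal sketch, assuming a scratch namespace (watch-demo, illustrative) and two terminals:
kubectl create namespace watch-demo
# terminal 1: a label-selector watch; rows appear as events arrive
kubectl get configmaps -n watch-demo -l watch-this-configmap=multiple-watchers-A --watch
# terminal 2: drive the add / update / update / delete sequence
kubectl create configmap e2e-watch-test-configmap-a -n watch-demo
kubectl label configmap e2e-watch-test-configmap-a -n watch-demo watch-this-configmap=multiple-watchers-A   # enters the selector: ADDED
kubectl patch configmap e2e-watch-test-configmap-a -n watch-demo -p '{"data":{"mutation":"1"}}'   # MODIFIED
kubectl patch configmap e2e-watch-test-configmap-a -n watch-demo -p '{"data":{"mutation":"2"}}'   # MODIFIED again
kubectl delete configmap e2e-watch-test-configmap-a -n watch-demo   # DELETED]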
May 2 13:19:36.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:19:36.567: INFO: namespace watch-2740 deletion completed in 6.098919724s • [SLOW TEST:66.227 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:19:36.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 2 13:19:40.707: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:19:40.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9438" for this suite. 
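[For reference, the FallbackToLogsOnError behaviour checked above can be reproduced with a throwaway pod; a sketch with illustrative names, assuming the busybox image is pullable:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # exit non-zero without writing /dev/termination-log, so the kubelet
    # falls back to the tail of the container log for the message
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# after the container terminates, the log tail appears in the status field:
kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}']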
May 2 13:19:46.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:19:46.866: INFO: namespace container-runtime-9438 deletion completed in 6.10402268s • [SLOW TEST:10.299 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:19:46.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 13:19:46.929: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 2 13:19:51.934: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 2 13:19:51.934: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 2 13:19:51.955: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7174,SelfLink:/apis/apps/v1/namespaces/deployment-7174/deployments/test-cleanup-deployment,UID:ae6cb3fe-f178-4699-b4bf-adea0f43b6f8,ResourceVersion:8624378,Generation:1,CreationTimestamp:2020-05-02 13:19:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 2 13:19:51.973: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-7174,SelfLink:/apis/apps/v1/namespaces/deployment-7174/replicasets/test-cleanup-deployment-55bbcbc84c,UID:2aaf59d7-ce9a-48aa-815d-f53a4103bc85,ResourceVersion:8624380,Generation:1,CreationTimestamp:2020-05-02 13:19:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ae6cb3fe-f178-4699-b4bf-adea0f43b6f8 0xc0028fc807 0xc0028fc808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 2 13:19:51.973: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 2 13:19:51.973: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7174,SelfLink:/apis/apps/v1/namespaces/deployment-7174/replicasets/test-cleanup-controller,UID:fba3d7f7-9c95-4381-a91b-a1541b14424f,ResourceVersion:8624379,Generation:1,CreationTimestamp:2020-05-02 13:19:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ae6cb3fe-f178-4699-b4bf-adea0f43b6f8 0xc0028fc737 0xc0028fc738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 2 13:19:52.053: INFO: Pod "test-cleanup-controller-5khn7" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-5khn7,GenerateName:test-cleanup-controller-,Namespace:deployment-7174,SelfLink:/api/v1/namespaces/deployment-7174/pods/test-cleanup-controller-5khn7,UID:892e8661-89c8-4532-b070-a0694cc22add,ResourceVersion:8624374,Generation:0,CreationTimestamp:2020-05-02 13:19:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller fba3d7f7-9c95-4381-a91b-a1541b14424f 0xc0028fd0c7 0xc0028fd0c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-n7xvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n7xvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n7xvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028fd140} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028fd160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 13:19:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 13:19:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 13:19:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 13:19:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.60,StartTime:2020-05-02 13:19:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 13:19:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ba6d27d37eea4851138e4679247377265bd08a8d8921eeff5616f87ec4a47ddc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 13:19:52.054: INFO: Pod "test-cleanup-deployment-55bbcbc84c-h4cpj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-h4cpj,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-7174,SelfLink:/api/v1/namespaces/deployment-7174/pods/test-cleanup-deployment-55bbcbc84c-h4cpj,UID:edf4bc1f-b982-443b-a06d-f86086da3383,ResourceVersion:8624386,Generation:0,CreationTimestamp:2020-05-02 13:19:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 2aaf59d7-ce9a-48aa-815d-f53a4103bc85 0xc0028fd247 0xc0028fd248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-n7xvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n7xvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-n7xvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028fd2c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028fd2e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 13:19:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:19:52.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7174" for this suite. 
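[For reference, the cleanup checked above is controlled by .spec.revisionHistoryLimit; the dump shows RevisionHistoryLimit:*0, so superseded ReplicaSets are deleted as soon as the rollout completes. A sketch with illustrative names:
kubectl create deployment cleanup-demo --image=nginx:1.14-alpine
kubectl patch deployment cleanup-demo -p '{"spec":{"revisionHistoryLimit":0}}'
# trigger a second revision (kubectl create deployment names the container after the image, here "nginx"):
kubectl set image deployment/cleanup-demo nginx=nginx:1.15-alpine
kubectl rollout status deployment/cleanup-demo
# with the limit at 0, only the current ReplicaSet should remain:
kubectl get replicasets -l app=cleanup-demo]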
May 2 13:19:58.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:19:58.187: INFO: namespace deployment-7174 deletion completed in 6.118915756s • [SLOW TEST:11.321 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:19:58.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 2 13:19:58.218: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 2 13:19:58.241: INFO: Waiting for terminating namespaces to be deleted... May 2 13:19:58.243: INFO: Logging pods the kubelet thinks are on node iruya-worker before test May 2 13:19:58.269: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 2 13:19:58.269: INFO: Container kube-proxy ready: true, restart count 0 May 2 13:19:58.269: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 2 13:19:58.269: INFO: Container kindnet-cni ready: true, restart count 0 May 2 13:19:58.269: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test May 2 13:19:58.275: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 2 13:19:58.275: INFO: Container kube-proxy ready: true, restart count 0 May 2 13:19:58.275: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 2 13:19:58.275: INFO: Container kindnet-cni ready: true, restart count 0 May 2 13:19:58.275: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 2 13:19:58.275: INFO: Container coredns ready: true, restart count 0 May 2 13:19:58.275: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 2 13:19:58.275: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5a103b4d-2db6-44e2-9ab0-9b042d5b6d24 42 STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-5a103b4d-2db6-44e2-9ab0-9b042d5b6d24 off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-5a103b4d-2db6-44e2-9ab0-9b042d5b6d24 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:20:06.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3361" for this suite. May 2 13:20:16.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:20:16.576: INFO: namespace sched-pred-3361 deletion completed in 10.102908108s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:18.389 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:20:16.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod May 2 13:20:16.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6434' May 2 13:20:21.203: INFO: stderr: "" May 2 13:20:21.203: INFO: stdout: "pod/pause created\n" May 2 13:20:21.203: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 2 13:20:21.203: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6434" to be "running and ready" May 2 13:20:21.241: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 38.444633ms May 2 13:20:23.259: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056434879s May 2 13:20:25.264: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.060800058s May 2 13:20:25.264: INFO: Pod "pause" satisfied condition "running and ready" May 2 13:20:25.264: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod May 2 13:20:25.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6434' May 2 13:20:25.373: INFO: stderr: "" May 2 13:20:25.373: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 2 13:20:25.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6434' May 2 13:20:25.473: INFO: stderr: "" May 2 13:20:25.474: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 2 13:20:25.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6434' May 2 13:20:25.557: INFO: stderr: "" May 2 13:20:25.557: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 2 13:20:25.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6434' May 2 13:20:25.644: INFO: stderr: "" May 2 13:20:25.644: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources May 2 13:20:25.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6434' May 2 13:20:25.740: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 2 13:20:25.740: INFO: stdout: "pod \"pause\" force deleted\n" May 2 13:20:25.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6434' May 2 13:20:25.849: INFO: stderr: "No resources found.\n" May 2 13:20:25.849: INFO: stdout: "" May 2 13:20:25.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6434 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 2 13:20:25.941: INFO: stderr: "" May 2 13:20:25.942: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:20:25.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6434" for this suite. 
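[For reference, the label round-trip above in plain commands, assuming a running pod named pause in the current namespace:
kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # -L adds a TESTING-LABEL column
kubectl label pods pause testing-label-                      # a trailing '-' removes the key
kubectl get pod pause -L testing-label                       # the column is now empty]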
May 2 13:20:32.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:20:32.083: INFO: namespace kubectl-6434 deletion completed in 6.138722975s • [SLOW TEST:15.506 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:20:32.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-7080 STEP: creating a selector STEP: Creating the service pods in kubernetes May 2 13:20:32.133: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 2 13:20:56.256: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.239:8080/dial?request=hostName&protocol=http&host=10.244.2.238&port=8080&tries=1'] Namespace:pod-network-test-7080 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:20:56.256: INFO: >>> kubeConfig: /root/.kube/config I0502 13:20:56.295847 6 log.go:172] (0xc0008c24d0) (0xc001dbc460) Create stream I0502 13:20:56.295898 6 log.go:172] (0xc0008c24d0) (0xc001dbc460) Stream added, broadcasting: 1 I0502 13:20:56.299401 6 log.go:172] (0xc0008c24d0) Reply frame received for 1 I0502 13:20:56.299428 6 log.go:172] (0xc0008c24d0) (0xc000540000) Create stream I0502 13:20:56.299442 6 log.go:172] (0xc0008c24d0) (0xc000540000) Stream added, broadcasting: 3 I0502 13:20:56.300547 6 log.go:172] (0xc0008c24d0) Reply frame received for 3 I0502 13:20:56.300606 6 log.go:172] (0xc0008c24d0) (0xc0005400a0) Create stream I0502 13:20:56.300625 6 log.go:172] (0xc0008c24d0) (0xc0005400a0) Stream added, broadcasting: 5 I0502 13:20:56.301854 6 log.go:172] (0xc0008c24d0) Reply frame received for 5 I0502 13:20:56.397618 6 log.go:172] (0xc0008c24d0) Data frame received for 3 I0502 13:20:56.397650 6 log.go:172] (0xc000540000) (3) Data frame handling I0502 13:20:56.397800 6 log.go:172] (0xc000540000) (3) Data frame sent I0502 13:20:56.398442 6 log.go:172] (0xc0008c24d0) Data frame received for 5 I0502 13:20:56.398460 6 log.go:172] (0xc0005400a0) (5) Data frame handling I0502 13:20:56.398478 6 log.go:172] (0xc0008c24d0) Data frame received for 3 I0502 13:20:56.398487 6 log.go:172] (0xc000540000) (3) Data frame handling I0502 13:20:56.406903 6 log.go:172] (0xc0008c24d0) Data frame received for 1 I0502 
13:20:56.406930 6 log.go:172] (0xc001dbc460) (1) Data frame handling I0502 13:20:56.406950 6 log.go:172] (0xc001dbc460) (1) Data frame sent I0502 13:20:56.406969 6 log.go:172] (0xc0008c24d0) (0xc001dbc460) Stream removed, broadcasting: 1 I0502 13:20:56.406982 6 log.go:172] (0xc0008c24d0) Go away received I0502 13:20:56.407100 6 log.go:172] (0xc0008c24d0) (0xc001dbc460) Stream removed, broadcasting: 1 I0502 13:20:56.407139 6 log.go:172] (0xc0008c24d0) (0xc000540000) Stream removed, broadcasting: 3 I0502 13:20:56.407165 6 log.go:172] (0xc0008c24d0) (0xc0005400a0) Stream removed, broadcasting: 5 May 2 13:20:56.407: INFO: Waiting for endpoints: map[] May 2 13:20:56.410: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.239:8080/dial?request=hostName&protocol=http&host=10.244.1.63&port=8080&tries=1'] Namespace:pod-network-test-7080 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:20:56.410: INFO: >>> kubeConfig: /root/.kube/config I0502 13:20:56.431552 6 log.go:172] (0xc000ac09a0) (0xc001b54640) Create stream I0502 13:20:56.431579 6 log.go:172] (0xc000ac09a0) (0xc001b54640) Stream added, broadcasting: 1 I0502 13:20:56.434012 6 log.go:172] (0xc000ac09a0) Reply frame received for 1 I0502 13:20:56.434051 6 log.go:172] (0xc000ac09a0) (0xc002c72500) Create stream I0502 13:20:56.434060 6 log.go:172] (0xc000ac09a0) (0xc002c72500) Stream added, broadcasting: 3 I0502 13:20:56.434969 6 log.go:172] (0xc000ac09a0) Reply frame received for 3 I0502 13:20:56.435009 6 log.go:172] (0xc000ac09a0) (0xc002c725a0) Create stream I0502 13:20:56.435023 6 log.go:172] (0xc000ac09a0) (0xc002c725a0) Stream added, broadcasting: 5 I0502 13:20:56.435800 6 log.go:172] (0xc000ac09a0) Reply frame received for 5 I0502 13:20:56.511357 6 log.go:172] (0xc000ac09a0) Data frame received for 3 I0502 13:20:56.511388 6 log.go:172] (0xc002c72500) (3) Data frame handling I0502 13:20:56.511403 6 log.go:172] (0xc002c72500) (3) Data frame sent I0502 13:20:56.512006 6 log.go:172] (0xc000ac09a0) Data frame received for 5 I0502 13:20:56.512028 6 log.go:172] (0xc002c725a0) (5) Data frame handling I0502 13:20:56.512103 6 log.go:172] (0xc000ac09a0) Data frame received for 3 I0502 13:20:56.512117 6 log.go:172] (0xc002c72500) (3) Data frame handling I0502 13:20:56.514031 6 log.go:172] (0xc000ac09a0) Data frame received for 1 I0502 13:20:56.514079 6 log.go:172] (0xc001b54640) (1) Data frame handling I0502 13:20:56.514132 6 log.go:172] (0xc001b54640) (1) Data frame sent I0502 13:20:56.514173 6 log.go:172] (0xc000ac09a0) (0xc001b54640) Stream removed, broadcasting: 1 I0502 13:20:56.514218 6 log.go:172] (0xc000ac09a0) Go away received I0502 13:20:56.514298 6 log.go:172] (0xc000ac09a0) (0xc001b54640) Stream removed, broadcasting: 1 I0502 13:20:56.514328 6 log.go:172] (0xc000ac09a0) (0xc002c72500) Stream removed, broadcasting: 3 I0502 13:20:56.514351 6 log.go:172] (0xc000ac09a0) (0xc002c725a0) Stream removed, broadcasting: 5 May 2 13:20:56.514: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:20:56.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7080" for this suite. 
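[For reference, the connectivity check above can be issued by hand through the host-side test pod; the pod names and IPs below are the ones this particular run happened to get:
kubectl exec -n pod-network-test-7080 host-test-container-pod -c hostexec -- \
  curl -g -q -s 'http://10.244.2.239:8080/dial?request=hostName&protocol=http&host=10.244.2.238&port=8080&tries=1'
# the test webserver's /dial handler proxies the request to the target pod and
# answers with a JSON list of responses; a non-empty list means pod-to-pod traffic worked.]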
May 2 13:21:18.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:21:18.627: INFO: namespace pod-network-test-7080 deletion completed in 22.108394544s • [SLOW TEST:46.543 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:21:18.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-486 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-486 to expose endpoints map[] May 2 13:21:18.746: INFO: Get endpoints failed (11.210503ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 2 13:21:19.750: INFO: successfully validated that service endpoint-test2 in namespace services-486 exposes endpoints map[] (1.015328375s elapsed) STEP: Creating pod pod1 in namespace services-486 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-486 to expose endpoints map[pod1:[80]] May 2 13:21:23.887: INFO: successfully validated that service endpoint-test2 in namespace services-486 exposes endpoints map[pod1:[80]] (4.12970769s elapsed) STEP: Creating pod pod2 in namespace services-486 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-486 to expose endpoints map[pod1:[80] pod2:[80]] May 2 13:21:27.028: INFO: successfully validated that service endpoint-test2 in namespace services-486 exposes endpoints map[pod1:[80] pod2:[80]] (3.135594059s elapsed) STEP: Deleting pod pod1 in namespace services-486 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-486 to expose endpoints map[pod2:[80]] May 2 13:21:28.083: INFO: successfully validated that service endpoint-test2 in namespace services-486 exposes endpoints map[pod2:[80]] (1.050607618s elapsed) STEP: Deleting pod pod2 in namespace services-486 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-486 to expose endpoints map[] May 2 13:21:29.133: INFO: successfully validated that service endpoint-test2 in namespace services-486 exposes endpoints map[] (1.04549677s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:21:29.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-486" 
for this suite. May 2 13:21:51.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:21:52.047: INFO: namespace services-486 deletion completed in 22.195004557s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:33.420 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:21:52.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8584 STEP: creating a selector STEP: Creating the service pods in kubernetes May 2 13:21:52.148: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 2 13:22:14.300: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.241:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8584 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:22:14.300: INFO: >>> kubeConfig: /root/.kube/config I0502 13:22:14.340250 6 log.go:172] (0xc000ac1d90) (0xc001a73d60) Create stream I0502 13:22:14.340277 6 log.go:172] (0xc000ac1d90) (0xc001a73d60) Stream added, broadcasting: 1 I0502 13:22:14.343403 6 log.go:172] (0xc000ac1d90) Reply frame received for 1 I0502 13:22:14.343432 6 log.go:172] (0xc000ac1d90) (0xc003260500) Create stream I0502 13:22:14.343438 6 log.go:172] (0xc000ac1d90) (0xc003260500) Stream added, broadcasting: 3 I0502 13:22:14.344522 6 log.go:172] (0xc000ac1d90) Reply frame received for 3 I0502 13:22:14.344561 6 log.go:172] (0xc000ac1d90) (0xc0032605a0) Create stream I0502 13:22:14.344575 6 log.go:172] (0xc000ac1d90) (0xc0032605a0) Stream added, broadcasting: 5 I0502 13:22:14.345672 6 log.go:172] (0xc000ac1d90) Reply frame received for 5 I0502 13:22:14.438420 6 log.go:172] (0xc000ac1d90) Data frame received for 5 I0502 13:22:14.438452 6 log.go:172] (0xc0032605a0) (5) Data frame handling I0502 13:22:14.438489 6 log.go:172] (0xc000ac1d90) Data frame received for 3 I0502 13:22:14.438505 6 log.go:172] (0xc003260500) (3) Data frame handling I0502 13:22:14.438516 6 log.go:172] (0xc003260500) (3) Data frame sent I0502 13:22:14.438559 6 log.go:172] (0xc000ac1d90) Data frame received for 3 I0502 13:22:14.438572 6 log.go:172] (0xc003260500) (3) Data frame handling I0502 13:22:14.440800 6 log.go:172] (0xc000ac1d90) Data frame received for 1 I0502 
13:22:14.440823 6 log.go:172] (0xc001a73d60) (1) Data frame handling I0502 13:22:14.440834 6 log.go:172] (0xc001a73d60) (1) Data frame sent I0502 13:22:14.440844 6 log.go:172] (0xc000ac1d90) (0xc001a73d60) Stream removed, broadcasting: 1 I0502 13:22:14.440856 6 log.go:172] (0xc000ac1d90) Go away received I0502 13:22:14.440975 6 log.go:172] (0xc000ac1d90) (0xc001a73d60) Stream removed, broadcasting: 1 I0502 13:22:14.440997 6 log.go:172] (0xc000ac1d90) (0xc003260500) Stream removed, broadcasting: 3 I0502 13:22:14.441007 6 log.go:172] (0xc000ac1d90) (0xc0032605a0) Stream removed, broadcasting: 5 May 2 13:22:14.441: INFO: Found all expected endpoints: [netserver-0] May 2 13:22:14.444: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.65:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8584 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:22:14.444: INFO: >>> kubeConfig: /root/.kube/config I0502 13:22:14.469410 6 log.go:172] (0xc00302e9a0) (0xc003260780) Create stream I0502 13:22:14.469473 6 log.go:172] (0xc00302e9a0) (0xc003260780) Stream added, broadcasting: 1 I0502 13:22:14.471620 6 log.go:172] (0xc00302e9a0) Reply frame received for 1 I0502 13:22:14.471654 6 log.go:172] (0xc00302e9a0) (0xc003260820) Create stream I0502 13:22:14.471665 6 log.go:172] (0xc00302e9a0) (0xc003260820) Stream added, broadcasting: 3 I0502 13:22:14.472642 6 log.go:172] (0xc00302e9a0) Reply frame received for 3 I0502 13:22:14.472699 6 log.go:172] (0xc00302e9a0) (0xc001a73e00) Create stream I0502 13:22:14.472727 6 log.go:172] (0xc00302e9a0) (0xc001a73e00) Stream added, broadcasting: 5 I0502 13:22:14.474010 6 log.go:172] (0xc00302e9a0) Reply frame received for 5 I0502 13:22:14.525535 6 log.go:172] (0xc00302e9a0) Data frame received for 3 I0502 13:22:14.525620 6 log.go:172] (0xc003260820) (3) Data frame handling I0502 13:22:14.525647 6 log.go:172] (0xc003260820) (3) Data frame sent I0502 13:22:14.525659 6 log.go:172] (0xc00302e9a0) Data frame received for 3 I0502 13:22:14.525682 6 log.go:172] (0xc003260820) (3) Data frame handling I0502 13:22:14.526280 6 log.go:172] (0xc00302e9a0) Data frame received for 5 I0502 13:22:14.526300 6 log.go:172] (0xc001a73e00) (5) Data frame handling I0502 13:22:14.527255 6 log.go:172] (0xc00302e9a0) Data frame received for 1 I0502 13:22:14.527274 6 log.go:172] (0xc003260780) (1) Data frame handling I0502 13:22:14.527285 6 log.go:172] (0xc003260780) (1) Data frame sent I0502 13:22:14.527308 6 log.go:172] (0xc00302e9a0) (0xc003260780) Stream removed, broadcasting: 1 I0502 13:22:14.527398 6 log.go:172] (0xc00302e9a0) (0xc003260780) Stream removed, broadcasting: 1 I0502 13:22:14.527411 6 log.go:172] (0xc00302e9a0) (0xc003260820) Stream removed, broadcasting: 3 I0502 13:22:14.527534 6 log.go:172] (0xc00302e9a0) (0xc001a73e00) Stream removed, broadcasting: 5 May 2 13:22:14.527: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:22:14.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8584" for this suite. 
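[For reference, the endpoint bookkeeping exercised by the endpoint-test2 Service earlier can be watched directly; a sketch with illustrative names, assuming kubectl create service clusterip selects on app=<name>:
kubectl create service clusterip endpoint-demo --tcp=80:80
# terminal 1: endpoint addresses appear and disappear as matching pods come and go
kubectl get endpoints endpoint-demo --watch
# terminal 2:
kubectl run pod1 --image=nginx:1.14-alpine --restart=Never --labels=app=endpoint-demo
kubectl delete pod pod1]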
May 2 13:22:38.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:22:38.682: INFO: namespace pod-network-test-8584 deletion completed in 24.150552662s • [SLOW TEST:46.634 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:22:38.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command May 2 13:22:38.761: INFO: Waiting up to 5m0s for pod "client-containers-7af2f9c3-2d63-411d-8da2-0db906470bdc" in namespace "containers-7650" to be "success or failure" May 2 13:22:38.764: INFO: Pod "client-containers-7af2f9c3-2d63-411d-8da2-0db906470bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.551094ms May 2 13:22:40.843: INFO: Pod "client-containers-7af2f9c3-2d63-411d-8da2-0db906470bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082321207s May 2 13:22:42.848: INFO: Pod "client-containers-7af2f9c3-2d63-411d-8da2-0db906470bdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086603317s STEP: Saw pod success May 2 13:22:42.848: INFO: Pod "client-containers-7af2f9c3-2d63-411d-8da2-0db906470bdc" satisfied condition "success or failure" May 2 13:22:42.851: INFO: Trying to get logs from node iruya-worker pod client-containers-7af2f9c3-2d63-411d-8da2-0db906470bdc container test-container: STEP: delete the pod May 2 13:22:42.994: INFO: Waiting for pod client-containers-7af2f9c3-2d63-411d-8da2-0db906470bdc to disappear May 2 13:22:43.004: INFO: Pod client-containers-7af2f9c3-2d63-411d-8da2-0db906470bdc no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:22:43.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7650" for this suite. 
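[For reference, the override exercised above maps to two pod-spec fields: command replaces the image ENTRYPOINT and args replaces its CMD. A sketch with an illustrative image:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo"]                 # replaces the image ENTRYPOINT
    args: ["overridden", "command"]   # replaces the image CMD
EOF
kubectl logs entrypoint-demo   # prints: overridden command]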
May 2 13:22:49.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:22:49.131: INFO: namespace containers-7650 deletion completed in 6.123439139s • [SLOW TEST:10.449 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:22:49.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-0b1ae1f2-7a54-48b7-9c3e-4905e8f37376 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:22:49.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8980" for this suite. May 2 13:22:55.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:22:55.339: INFO: namespace configmap-8980 deletion completed in 6.112485617s • [SLOW TEST:6.207 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:22:55.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 13:22:55.431: INFO: Creating deployment "test-recreate-deployment" May 2 13:22:55.448: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 2 13:22:55.496: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 2 13:22:57.503: INFO: Waiting deployment "test-recreate-deployment" to complete May 2 
13:22:57.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724022575, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724022575, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724022575, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724022575, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 13:22:59.509: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 2 13:22:59.515: INFO: Updating deployment test-recreate-deployment May 2 13:22:59.515: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 2 13:22:59.747: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4377,SelfLink:/apis/apps/v1/namespaces/deployment-4377/deployments/test-recreate-deployment,UID:c6b69036-5626-45c3-ac7a-abc9c9e44f54,ResourceVersion:8625113,Generation:2,CreationTimestamp:2020-05-02 13:22:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-02 13:22:59 +0000 UTC 2020-05-02 13:22:59 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-02 13:22:59 +0000 UTC 2020-05-02 13:22:55 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 2 13:22:59.751: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4377,SelfLink:/apis/apps/v1/namespaces/deployment-4377/replicasets/test-recreate-deployment-5c8c9cc69d,UID:0d3e889a-d07c-44fb-894e-c3fae810c03a,ResourceVersion:8625111,Generation:1,CreationTimestamp:2020-05-02 13:22:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c6b69036-5626-45c3-ac7a-abc9c9e44f54 0xc001d6d507 0xc001d6d508}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 2 13:22:59.751: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 2 13:22:59.751: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4377,SelfLink:/apis/apps/v1/namespaces/deployment-4377/replicasets/test-recreate-deployment-6df85df6b9,UID:7d1d112f-19d5-4b1f-a16c-4384165b4481,ResourceVersion:8625102,Generation:2,CreationTimestamp:2020-05-02 13:22:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c6b69036-5626-45c3-ac7a-abc9c9e44f54 0xc001d6d5d7 0xc001d6d5d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 2 13:22:59.764: INFO: Pod "test-recreate-deployment-5c8c9cc69d-9k747" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-9k747,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4377,SelfLink:/api/v1/namespaces/deployment-4377/pods/test-recreate-deployment-5c8c9cc69d-9k747,UID:9342c3ee-b83c-4b49-9dc7-c8dd25500607,ResourceVersion:8625114,Generation:0,CreationTimestamp:2020-05-02 13:22:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 0d3e889a-d07c-44fb-894e-c3fae810c03a 0xc001d6deb7 0xc001d6deb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rpzvf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpzvf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rpzvf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6df30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6df50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 13:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 13:22:59 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 13:22:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 13:22:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-02 13:22:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:22:59.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4377" for this suite. May 2 13:23:05.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:23:06.021: INFO: namespace deployment-4377 deletion completed in 6.253710756s • [SLOW TEST:10.682 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:23:06.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-a7056fc0-90c8-492c-ad84-a24ea873406c in namespace container-probe-3417 May 2 13:23:10.129: INFO: Started pod liveness-a7056fc0-90c8-492c-ad84-a24ea873406c in namespace container-probe-3417 STEP: checking the pod's current state and verifying that restartCount is present May 2 13:23:10.132: INFO: Initial restart count of pod liveness-a7056fc0-90c8-492c-ad84-a24ea873406c is 0 May 2 13:23:26.219: INFO: Restart count of pod container-probe-3417/liveness-a7056fc0-90c8-492c-ad84-a24ea873406c is now 1 (16.08687171s elapsed) May 2 13:23:46.258: INFO: Restart count of pod container-probe-3417/liveness-a7056fc0-90c8-492c-ad84-a24ea873406c is now 2 (36.126656338s elapsed) May 2 13:24:06.315: INFO: Restart count of pod container-probe-3417/liveness-a7056fc0-90c8-492c-ad84-a24ea873406c is now 3 (56.18370013s elapsed) May 2 13:24:27.162: INFO: Restart count of pod container-probe-3417/liveness-a7056fc0-90c8-492c-ad84-a24ea873406c is now 4 (1m17.030242857s elapsed) May 2 13:25:35.418: INFO: Restart count of pod container-probe-3417/liveness-a7056fc0-90c8-492c-ad84-a24ea873406c is now 5 (2m25.286124274s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:25:35.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3417" for this suite. May 2 13:25:42.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:25:42.286: INFO: namespace container-probe-3417 deletion completed in 6.349551761s • [SLOW TEST:156.265 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:25:42.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 13:25:42.494: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:25:51.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6069" for this suite. 
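The websocket variant above drives the same exec subresource kubectl uses: a POST to /api/v1/namespaces/<namespace>/pods/<pod>/exec with command and stream parameters, upgraded to a streaming connection (SPDY from kubectl, a websocket in this test). The everyday equivalent, with placeholder names:

kubectl exec <pod-name> -- /bin/sh -c 'echo remote hello'   # one-shot command, stdout/stderr streamed back
kubectl exec -it <pod-name> -- /bin/sh                      # interactive variant adds stdin and tty streams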
May 2 13:26:33.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:26:33.138: INFO: namespace pods-6069 deletion completed in 42.118958415s • [SLOW TEST:50.851 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:26:33.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 13:26:33.236: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de37a6c3-4557-4f51-a89a-bd71b18357d4" in namespace "downward-api-9991" to be "success or failure" May 2 13:26:33.259: INFO: Pod "downwardapi-volume-de37a6c3-4557-4f51-a89a-bd71b18357d4": Phase="Pending", Reason="", readiness=false. Elapsed: 23.74246ms May 2 13:26:35.263: INFO: Pod "downwardapi-volume-de37a6c3-4557-4f51-a89a-bd71b18357d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027448734s May 2 13:26:37.284: INFO: Pod "downwardapi-volume-de37a6c3-4557-4f51-a89a-bd71b18357d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048282895s STEP: Saw pod success May 2 13:26:37.284: INFO: Pod "downwardapi-volume-de37a6c3-4557-4f51-a89a-bd71b18357d4" satisfied condition "success or failure" May 2 13:26:37.295: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-de37a6c3-4557-4f51-a89a-bd71b18357d4 container client-container: STEP: delete the pod May 2 13:26:37.374: INFO: Waiting for pod downwardapi-volume-de37a6c3-4557-4f51-a89a-bd71b18357d4 to disappear May 2 13:26:37.391: INFO: Pod downwardapi-volume-de37a6c3-4557-4f51-a89a-bd71b18357d4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:26:37.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9991" for this suite. 
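What this test verifies: when a container declares no CPU limit, a downwardAPI volume file bound to limits.cpu reports the node's allocatable CPU instead of being empty. A sketch of such a pod (hypothetical names; busybox chosen arbitrarily):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]  # no resources.limits.cpu declared
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF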
May 2 13:26:43.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:26:43.516: INFO: namespace downward-api-9991 deletion completed in 6.121882293s • [SLOW TEST:10.378 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:26:43.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 2 13:26:43.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8637' May 2 13:26:43.901: INFO: stderr: "" May 2 13:26:43.901: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 2 13:26:43.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8637' May 2 13:26:44.010: INFO: stderr: "" May 2 13:26:44.010: INFO: stdout: "update-demo-nautilus-2zmwk update-demo-nautilus-rkxrx " May 2 13:26:44.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2zmwk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8637' May 2 13:26:44.089: INFO: stderr: "" May 2 13:26:44.089: INFO: stdout: "" May 2 13:26:44.089: INFO: update-demo-nautilus-2zmwk is created but not running May 2 13:26:49.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8637' May 2 13:26:49.198: INFO: stderr: "" May 2 13:26:49.198: INFO: stdout: "update-demo-nautilus-2zmwk update-demo-nautilus-rkxrx " May 2 13:26:49.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2zmwk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8637' May 2 13:26:49.284: INFO: stderr: "" May 2 13:26:49.284: INFO: stdout: "true" May 2 13:26:49.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2zmwk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8637' May 2 13:26:49.369: INFO: stderr: "" May 2 13:26:49.369: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 13:26:49.369: INFO: validating pod update-demo-nautilus-2zmwk May 2 13:26:49.372: INFO: got data: { "image": "nautilus.jpg" } May 2 13:26:49.372: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 13:26:49.372: INFO: update-demo-nautilus-2zmwk is verified up and running May 2 13:26:49.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rkxrx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8637' May 2 13:26:49.461: INFO: stderr: "" May 2 13:26:49.461: INFO: stdout: "true" May 2 13:26:49.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rkxrx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8637' May 2 13:26:49.551: INFO: stderr: "" May 2 13:26:49.551: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 13:26:49.551: INFO: validating pod update-demo-nautilus-rkxrx May 2 13:26:49.555: INFO: got data: { "image": "nautilus.jpg" } May 2 13:26:49.555: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 13:26:49.555: INFO: update-demo-nautilus-rkxrx is verified up and running STEP: using delete to clean up resources May 2 13:26:49.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8637' May 2 13:26:49.653: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 2 13:26:49.653: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 2 13:26:49.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8637' May 2 13:26:49.758: INFO: stderr: "No resources found.\n" May 2 13:26:49.758: INFO: stdout: "" May 2 13:26:49.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8637 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 2 13:26:49.893: INFO: stderr: "" May 2 13:26:49.893: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:26:49.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8637" for this suite. 
May 2 13:27:11.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:27:12.001: INFO: namespace kubectl-8637 deletion completed in 22.103136377s • [SLOW TEST:28.485 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:27:12.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 2 13:27:12.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5575' May 2 13:27:12.168: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 2 13:27:12.169: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 May 2 13:27:14.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5575' May 2 13:27:14.361: INFO: stderr: "" May 2 13:27:14.361: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:27:14.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5575" for this suite. 
May 2 13:27:20.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:27:20.804: INFO: namespace kubectl-5575 deletion completed in 6.428642981s • [SLOW TEST:8.802 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:27:20.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-91be4d24-7c68-4047-8ece-70fdd069a430 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-91be4d24-7c68-4047-8ece-70fdd069a430 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:27:28.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6070" for this suite. 
May 2 13:27:51.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:27:51.092: INFO: namespace configmap-6070 deletion completed in 22.115060763s • [SLOW TEST:30.287 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:27:51.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 13:27:51.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ee5bb43-4613-415d-966f-ea503caf9bf8" in namespace "projected-6636" to be "success or failure" May 2 13:27:51.175: INFO: Pod "downwardapi-volume-4ee5bb43-4613-415d-966f-ea503caf9bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.872987ms May 2 13:27:53.179: INFO: Pod "downwardapi-volume-4ee5bb43-4613-415d-966f-ea503caf9bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017947375s May 2 13:27:55.183: INFO: Pod "downwardapi-volume-4ee5bb43-4613-415d-966f-ea503caf9bf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022411807s STEP: Saw pod success May 2 13:27:55.183: INFO: Pod "downwardapi-volume-4ee5bb43-4613-415d-966f-ea503caf9bf8" satisfied condition "success or failure" May 2 13:27:55.187: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4ee5bb43-4613-415d-966f-ea503caf9bf8 container client-container: STEP: delete the pod May 2 13:27:55.232: INFO: Waiting for pod downwardapi-volume-4ee5bb43-4613-415d-966f-ea503caf9bf8 to disappear May 2 13:27:55.246: INFO: Pod downwardapi-volume-4ee5bb43-4613-415d-966f-ea503caf9bf8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:27:55.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6636" for this suite. 
May 2 13:28:01.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:28:01.357: INFO: namespace projected-6636 deletion completed in 6.108650763s • [SLOW TEST:10.265 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:28:01.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 13:28:01.461: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.020607ms)
May 2 13:28:01.464: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.515665ms)
May 2 13:28:01.466: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.500714ms)
May 2 13:28:01.468: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.089315ms)
May 2 13:28:01.470: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.263397ms)
May 2 13:28:01.473: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.416909ms)
May 2 13:28:01.476: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.606282ms)
May 2 13:28:01.479: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.090113ms)
May 2 13:28:01.482: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.856958ms)
May 2 13:28:01.485: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.964023ms)
May 2 13:28:01.488: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.081817ms)
May 2 13:28:01.491: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.495625ms)
May 2 13:28:01.495: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.368482ms)
May 2 13:28:01.498: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.50035ms)
May 2 13:28:01.502: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.576579ms)
May 2 13:28:01.505: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.271871ms)
May 2 13:28:01.508: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.163877ms)
May 2 13:28:01.512: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.864149ms)
May 2 13:28:01.516: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.683177ms)
May 2 13:28:01.520: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.687424ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:28:01.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3066" for this suite. May 2 13:28:07.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:28:07.649: INFO: namespace proxy-3066 deletion completed in 6.125928866s • [SLOW TEST:6.291 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:28:07.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0502 13:28:38.250644 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 2 13:28:38.250: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:28:38.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6422" for this suite. 
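The surviving object in this test is the ReplicaSet: with deleteOptions.PropagationPolicy=Orphan the garbage collector strips owner references instead of cascading the delete. The kubectl equivalent on this kubectl generation is --cascade=false (later releases spell it --cascade=orphan), with a placeholder name:

kubectl delete deployment <deployment-name> --cascade=false
kubectl get rs   # the deployment's ReplicaSet is still there, now ownerless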
May 2 13:28:44.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:28:44.349: INFO: namespace gc-6422 deletion completed in 6.0950308s • [SLOW TEST:36.699 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:28:44.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc May 2 13:28:44.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8934' May 2 13:28:44.788: INFO: stderr: "" May 2 13:28:44.788: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. May 2 13:28:45.798: INFO: Selector matched 1 pods for map[app:redis] May 2 13:28:45.798: INFO: Found 0 / 1 May 2 13:28:46.792: INFO: Selector matched 1 pods for map[app:redis] May 2 13:28:46.792: INFO: Found 0 / 1 May 2 13:28:47.808: INFO: Selector matched 1 pods for map[app:redis] May 2 13:28:47.808: INFO: Found 1 / 1 May 2 13:28:47.808: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 2 13:28:47.811: INFO: Selector matched 1 pods for map[app:redis] May 2 13:28:47.811: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for matching strings May 2 13:28:47.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-blqln redis-master --namespace=kubectl-8934' May 2 13:28:47.930: INFO: stderr: "" May 2 13:28:47.930: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 02 May 13:28:47.489 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 May 13:28:47.489 # Server started, Redis version 3.2.12\n1:M 02 May 13:28:47.489 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 May 13:28:47.489 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 2 13:28:47.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-blqln redis-master --namespace=kubectl-8934 --tail=1' May 2 13:28:48.031: INFO: stderr: "" May 2 13:28:48.031: INFO: stdout: "1:M 02 May 13:28:47.489 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 2 13:28:48.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-blqln redis-master --namespace=kubectl-8934 --limit-bytes=1' May 2 13:28:48.127: INFO: stderr: "" May 2 13:28:48.127: INFO: stdout: " " STEP: exposing timestamps May 2 13:28:48.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-blqln redis-master --namespace=kubectl-8934 --tail=1 --timestamps' May 2 13:28:48.233: INFO: stderr: "" May 2 13:28:48.234: INFO: stdout: "2020-05-02T13:28:47.489905472Z 1:M 02 May 13:28:47.489 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 2 13:28:50.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-blqln redis-master --namespace=kubectl-8934 --since=1s' May 2 13:28:50.849: INFO: stderr: "" May 2 13:28:50.849: INFO: stdout: "" May 2 13:28:50.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-blqln redis-master --namespace=kubectl-8934 --since=24h' May 2 13:28:50.976: INFO: stderr: "" May 2 13:28:50.976: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 02 May 13:28:47.489 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 May 13:28:47.489 # Server started, Redis version 3.2.12\n1:M 02 May 13:28:47.489 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 May 13:28:47.489 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources May 2 13:28:50.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8934' May 2 13:28:51.074: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 2 13:28:51.074: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 2 13:28:51.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-8934' May 2 13:28:51.168: INFO: stderr: "No resources found.\n" May 2 13:28:51.168: INFO: stdout: "" May 2 13:28:51.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-8934 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 2 13:28:51.262: INFO: stderr: "" May 2 13:28:51.262: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:28:51.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8934" for this suite. 
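Summarizing the filters the test just exercised, usable against any pod/container pair:

kubectl logs <pod> <container> --tail=1                # last line only
kubectl logs <pod> <container> --limit-bytes=1         # truncate output by size
kubectl logs <pod> <container> --tail=1 --timestamps   # prefix lines with RFC3339 timestamps
kubectl logs <pod> <container> --since=1s              # only entries from the last second
kubectl logs <pod> <container> --since=24h             # or from a wider window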
May 2 13:29:13.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:29:13.361: INFO: namespace kubectl-8934 deletion completed in 22.095247603s • [SLOW TEST:29.012 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:29:13.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 2 13:29:13.524: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9826,SelfLink:/api/v1/namespaces/watch-9826/configmaps/e2e-watch-test-label-changed,UID:73a6b49b-e4ee-4dac-aa90-2ef0c1fde259,ResourceVersion:8626202,Generation:0,CreationTimestamp:2020-05-02 13:29:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 2 13:29:13.524: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9826,SelfLink:/api/v1/namespaces/watch-9826/configmaps/e2e-watch-test-label-changed,UID:73a6b49b-e4ee-4dac-aa90-2ef0c1fde259,ResourceVersion:8626203,Generation:0,CreationTimestamp:2020-05-02 13:29:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 2 13:29:13.524: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9826,SelfLink:/api/v1/namespaces/watch-9826/configmaps/e2e-watch-test-label-changed,UID:73a6b49b-e4ee-4dac-aa90-2ef0c1fde259,ResourceVersion:8626204,Generation:0,CreationTimestamp:2020-05-02 13:29:13 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 2 13:29:23.610: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9826,SelfLink:/api/v1/namespaces/watch-9826/configmaps/e2e-watch-test-label-changed,UID:73a6b49b-e4ee-4dac-aa90-2ef0c1fde259,ResourceVersion:8626225,Generation:0,CreationTimestamp:2020-05-02 13:29:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 2 13:29:23.611: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9826,SelfLink:/api/v1/namespaces/watch-9826/configmaps/e2e-watch-test-label-changed,UID:73a6b49b-e4ee-4dac-aa90-2ef0c1fde259,ResourceVersion:8626226,Generation:0,CreationTimestamp:2020-05-02 13:29:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 2 13:29:23.611: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9826,SelfLink:/api/v1/namespaces/watch-9826/configmaps/e2e-watch-test-label-changed,UID:73a6b49b-e4ee-4dac-aa90-2ef0c1fde259,ResourceVersion:8626228,Generation:0,CreationTimestamp:2020-05-02 13:29:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:29:23.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9826" for this suite. 
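The notable behavior here is that a label-selected watch synthesizes events on selector transitions: relabeling an object out of the selector produces a DELETED event, and relabeling it back produces ADDED, even though the object was only modified. The same can be observed by hand (hypothetical names; kubectl prints one row per watch event):

kubectl get configmaps -w -l watch-this-configmap=label-changed-and-restored &
kubectl create configmap watch-demo
kubectl label configmap watch-demo watch-this-configmap=label-changed-and-restored               # row printed: ADDED
kubectl label --overwrite configmap watch-demo watch-this-configmap=something-else               # row printed again: a DELETED event, as the object leaves the selector
kubectl label --overwrite configmap watch-demo watch-this-configmap=label-changed-and-restored   # ADDED again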
May 2 13:29:29.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:29:29.719: INFO: namespace watch-9826 deletion completed in 6.095647183s • [SLOW TEST:16.358 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:29:29.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 13:29:29.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56d7ea1b-a573-4266-815c-9771e204ba35" in namespace "downward-api-8934" to be "success or failure" May 2 13:29:29.920: INFO: Pod "downwardapi-volume-56d7ea1b-a573-4266-815c-9771e204ba35": Phase="Pending", Reason="", readiness=false. Elapsed: 91.657639ms May 2 13:29:31.924: INFO: Pod "downwardapi-volume-56d7ea1b-a573-4266-815c-9771e204ba35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0959689s May 2 13:29:33.929: INFO: Pod "downwardapi-volume-56d7ea1b-a573-4266-815c-9771e204ba35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100280222s STEP: Saw pod success May 2 13:29:33.929: INFO: Pod "downwardapi-volume-56d7ea1b-a573-4266-815c-9771e204ba35" satisfied condition "success or failure" May 2 13:29:33.932: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-56d7ea1b-a573-4266-815c-9771e204ba35 container client-container: STEP: delete the pod May 2 13:29:33.975: INFO: Waiting for pod downwardapi-volume-56d7ea1b-a573-4266-815c-9771e204ba35 to disappear May 2 13:29:33.982: INFO: Pod downwardapi-volume-56d7ea1b-a573-4266-815c-9771e204ba35 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:29:33.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8934" for this suite. 
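The Downward API volume spec above checks that a container's own memory request can be projected into a file. A rough sketch of the pod shape involved, assuming a clientset built as in the watch example; the busybox image and all names are stand-ins, not the suite's actual mounttest fixture.

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithMemoryRequestFile projects the container's requests.memory into a
// downwardAPI volume file, which the container prints so logs can be checked.
// Usage: cs.CoreV1().Pods(ns).Create(context.TODO(), podWithMemoryRequestFile(), metav1.CreateOptions{})
func podWithMemoryRequestFile() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/mem_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "mem_request",
							// ResourceFieldRef needs the container name because
							// requests are per-container, not per-pod.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
}
```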
May 2 13:29:39.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:29:40.058: INFO: namespace downward-api-8934 deletion completed in 6.073424247s • [SLOW TEST:10.339 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:29:40.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 2 13:29:40.162: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:40.168: INFO: Number of nodes with available pods: 0 May 2 13:29:40.168: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:41.206: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:41.210: INFO: Number of nodes with available pods: 0 May 2 13:29:41.210: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:42.226: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:42.288: INFO: Number of nodes with available pods: 0 May 2 13:29:42.288: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:43.173: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:43.177: INFO: Number of nodes with available pods: 0 May 2 13:29:43.177: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:44.174: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:44.177: INFO: Number of nodes with available pods: 1 May 2 13:29:44.178: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:45.174: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:45.178: INFO: Number of nodes with available pods: 2 May 2 13:29:45.178: INFO: 
Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 2 13:29:45.267: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:45.271: INFO: Number of nodes with available pods: 1 May 2 13:29:45.271: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:46.275: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:46.278: INFO: Number of nodes with available pods: 1 May 2 13:29:46.278: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:47.335: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:47.339: INFO: Number of nodes with available pods: 1 May 2 13:29:47.339: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:48.274: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:48.277: INFO: Number of nodes with available pods: 1 May 2 13:29:48.277: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:49.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:49.307: INFO: Number of nodes with available pods: 1 May 2 13:29:49.307: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:50.276: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:50.279: INFO: Number of nodes with available pods: 1 May 2 13:29:50.279: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:51.276: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:51.279: INFO: Number of nodes with available pods: 1 May 2 13:29:51.279: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:52.279: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:52.320: INFO: Number of nodes with available pods: 1 May 2 13:29:52.320: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:53.280: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:53.283: INFO: Number of nodes with available pods: 1 May 2 13:29:53.283: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:54.276: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:54.279: INFO: Number of nodes with available pods: 1 May 2 13:29:54.279: INFO: Node iruya-worker is running more than one daemon pod May 2 13:29:55.276: INFO: DaemonSet pods can't tolerate 
node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 13:29:55.279: INFO: Number of nodes with available pods: 2 May 2 13:29:55.279: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6186, will wait for the garbage collector to delete the pods May 2 13:29:55.342: INFO: Deleting DaemonSet.extensions daemon-set took: 6.330758ms May 2 13:29:55.642: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.231399ms May 2 13:30:02.246: INFO: Number of nodes with available pods: 0 May 2 13:30:02.246: INFO: Number of running nodes: 0, number of available pods: 0 May 2 13:30:02.249: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6186/daemonsets","resourceVersion":"8626386"},"items":null} May 2 13:30:02.251: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6186/pods","resourceVersion":"8626386"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:30:02.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6186" for this suite. May 2 13:30:08.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:30:08.411: INFO: namespace daemonsets-6186 deletion completed in 6.09761992s • [SLOW TEST:28.353 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:30:08.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 2 13:30:08.535: INFO: Waiting up to 5m0s for pod "pod-a1652bc4-e8e9-4fc8-8ae3-9f477528c941" in namespace "emptydir-8943" to be "success or failure" May 2 13:30:08.552: INFO: Pod "pod-a1652bc4-e8e9-4fc8-8ae3-9f477528c941": Phase="Pending", Reason="", readiness=false. Elapsed: 17.087541ms May 2 13:30:10.557: INFO: Pod "pod-a1652bc4-e8e9-4fc8-8ae3-9f477528c941": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021586249s May 2 13:30:12.561: INFO: Pod "pod-a1652bc4-e8e9-4fc8-8ae3-9f477528c941": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026267013s STEP: Saw pod success May 2 13:30:12.561: INFO: Pod "pod-a1652bc4-e8e9-4fc8-8ae3-9f477528c941" satisfied condition "success or failure" May 2 13:30:12.564: INFO: Trying to get logs from node iruya-worker2 pod pod-a1652bc4-e8e9-4fc8-8ae3-9f477528c941 container test-container: STEP: delete the pod May 2 13:30:12.598: INFO: Waiting for pod pod-a1652bc4-e8e9-4fc8-8ae3-9f477528c941 to disappear May 2 13:30:12.612: INFO: Pod pod-a1652bc4-e8e9-4fc8-8ae3-9f477528c941 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:30:12.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8943" for this suite. May 2 13:30:18.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:30:18.707: INFO: namespace emptydir-8943 deletion completed in 6.091902793s • [SLOW TEST:10.295 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:30:18.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-659779e9-f4be-4f30-af2a-6de1f9d123ca STEP: Creating a pod to test consume configMaps May 2 13:30:18.851: INFO: Waiting up to 5m0s for pod "pod-configmaps-6800e3a3-2f8c-4c09-ab3a-bb009bb885b4" in namespace "configmap-6771" to be "success or failure" May 2 13:30:18.864: INFO: Pod "pod-configmaps-6800e3a3-2f8c-4c09-ab3a-bb009bb885b4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.640899ms May 2 13:30:20.868: INFO: Pod "pod-configmaps-6800e3a3-2f8c-4c09-ab3a-bb009bb885b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016503159s May 2 13:30:22.872: INFO: Pod "pod-configmaps-6800e3a3-2f8c-4c09-ab3a-bb009bb885b4": Phase="Running", Reason="", readiness=true. Elapsed: 4.020439695s May 2 13:30:24.876: INFO: Pod "pod-configmaps-6800e3a3-2f8c-4c09-ab3a-bb009bb885b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.024510232s STEP: Saw pod success May 2 13:30:24.876: INFO: Pod "pod-configmaps-6800e3a3-2f8c-4c09-ab3a-bb009bb885b4" satisfied condition "success or failure" May 2 13:30:24.879: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-6800e3a3-2f8c-4c09-ab3a-bb009bb885b4 container configmap-volume-test: STEP: delete the pod May 2 13:30:24.931: INFO: Waiting for pod pod-configmaps-6800e3a3-2f8c-4c09-ab3a-bb009bb885b4 to disappear May 2 13:30:24.958: INFO: Pod pod-configmaps-6800e3a3-2f8c-4c09-ab3a-bb009bb885b4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:30:24.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6771" for this suite. May 2 13:30:30.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:30:31.069: INFO: namespace configmap-6771 deletion completed in 6.107405776s • [SLOW TEST:12.362 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:30:31.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 2 13:30:31.168: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:30:36.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7004" for this suite. 
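The InitContainer spec just above relies on the interaction between init containers and restartPolicy: with Never, a failing init container moves the pod to Failed and the app containers never start. A minimal sketch of that pod shape, under the same stand-in assumptions as the earlier builders (busybox image, illustrative names):

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod: the first init container exits non-zero, and because the
// restart policy is Never, the second init container and the app container
// never run; the pod ends up Failed.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}}, // never reached
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"/bin/true"}},
			},
		},
	}
}
```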
May 2 13:30:43.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:30:43.109: INFO: namespace init-container-7004 deletion completed in 6.107482887s • [SLOW TEST:12.039 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:30:43.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 2 13:30:43.220: INFO: Waiting up to 5m0s for pod "downward-api-fd1b4960-7ff2-45a6-a542-f9827ff005bc" in namespace "downward-api-4560" to be "success or failure" May 2 13:30:43.238: INFO: Pod "downward-api-fd1b4960-7ff2-45a6-a542-f9827ff005bc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.434859ms May 2 13:30:45.242: INFO: Pod "downward-api-fd1b4960-7ff2-45a6-a542-f9827ff005bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021767165s May 2 13:30:47.250: INFO: Pod "downward-api-fd1b4960-7ff2-45a6-a542-f9827ff005bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029489916s STEP: Saw pod success May 2 13:30:47.250: INFO: Pod "downward-api-fd1b4960-7ff2-45a6-a542-f9827ff005bc" satisfied condition "success or failure" May 2 13:30:47.252: INFO: Trying to get logs from node iruya-worker pod downward-api-fd1b4960-7ff2-45a6-a542-f9827ff005bc container dapi-container: STEP: delete the pod May 2 13:30:47.339: INFO: Waiting for pod downward-api-fd1b4960-7ff2-45a6-a542-f9827ff005bc to disappear May 2 13:30:47.355: INFO: Pod downward-api-fd1b4960-7ff2-45a6-a542-f9827ff005bc no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:30:47.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4560" for this suite. 
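The Downward API env-var spec above exercises fieldRef injection: the kubelet resolves status.hostIP at container start and exposes it as an ordinary environment variable. A sketch of the pattern, with illustrative names:

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostIPEnvPod injects the node IP the pod landed on via the downward API;
// the test then greps the container's output for the expected value.
func hostIPEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-hostip-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
}
```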
May 2 13:30:53.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:30:53.443: INFO: namespace downward-api-4560 deletion completed in 6.084563721s • [SLOW TEST:10.334 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:30:53.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9537 STEP: creating a selector STEP: Creating the service pods in kubernetes May 2 13:30:53.520: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 2 13:31:17.714: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.77:8080/dial?request=hostName&protocol=udp&host=10.244.2.2&port=8081&tries=1'] Namespace:pod-network-test-9537 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:31:17.714: INFO: >>> kubeConfig: /root/.kube/config I0502 13:31:17.750618 6 log.go:172] (0xc0013c6b00) (0xc000211220) Create stream I0502 13:31:17.750650 6 log.go:172] (0xc0013c6b00) (0xc000211220) Stream added, broadcasting: 1 I0502 13:31:17.752598 6 log.go:172] (0xc0013c6b00) Reply frame received for 1 I0502 13:31:17.752672 6 log.go:172] (0xc0013c6b00) (0xc0006680a0) Create stream I0502 13:31:17.752699 6 log.go:172] (0xc0013c6b00) (0xc0006680a0) Stream added, broadcasting: 3 I0502 13:31:17.754177 6 log.go:172] (0xc0013c6b00) Reply frame received for 3 I0502 13:31:17.754226 6 log.go:172] (0xc0013c6b00) (0xc0002112c0) Create stream I0502 13:31:17.754246 6 log.go:172] (0xc0013c6b00) (0xc0002112c0) Stream added, broadcasting: 5 I0502 13:31:17.755221 6 log.go:172] (0xc0013c6b00) Reply frame received for 5 I0502 13:31:17.822614 6 log.go:172] (0xc0013c6b00) Data frame received for 3 I0502 13:31:17.822662 6 log.go:172] (0xc0006680a0) (3) Data frame handling I0502 13:31:17.822684 6 log.go:172] (0xc0006680a0) (3) Data frame sent I0502 13:31:17.823160 6 log.go:172] (0xc0013c6b00) Data frame received for 5 I0502 13:31:17.823240 6 log.go:172] (0xc0002112c0) (5) Data frame handling I0502 13:31:17.823363 6 log.go:172] (0xc0013c6b00) Data frame received for 3 I0502 13:31:17.823388 6 log.go:172] (0xc0006680a0) (3) Data frame handling I0502 13:31:17.825456 6 log.go:172] (0xc0013c6b00) Data frame received for 1 I0502 13:31:17.825489 6 log.go:172] (0xc000211220) (1) Data frame handling I0502 13:31:17.825513 6 log.go:172] (0xc000211220) (1) Data 
frame sent I0502 13:31:17.825537 6 log.go:172] (0xc0013c6b00) (0xc000211220) Stream removed, broadcasting: 1 I0502 13:31:17.825691 6 log.go:172] (0xc0013c6b00) (0xc000211220) Stream removed, broadcasting: 1 I0502 13:31:17.825722 6 log.go:172] (0xc0013c6b00) (0xc0006680a0) Stream removed, broadcasting: 3 I0502 13:31:17.825859 6 log.go:172] (0xc0013c6b00) Go away received I0502 13:31:17.825922 6 log.go:172] (0xc0013c6b00) (0xc0002112c0) Stream removed, broadcasting: 5 May 2 13:31:17.825: INFO: Waiting for endpoints: map[] May 2 13:31:17.837: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.77:8080/dial?request=hostName&protocol=udp&host=10.244.1.76&port=8081&tries=1'] Namespace:pod-network-test-9537 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:31:17.837: INFO: >>> kubeConfig: /root/.kube/config I0502 13:31:17.873476 6 log.go:172] (0xc0012aed10) (0xc000668d20) Create stream I0502 13:31:17.873504 6 log.go:172] (0xc0012aed10) (0xc000668d20) Stream added, broadcasting: 1 I0502 13:31:17.875176 6 log.go:172] (0xc0012aed10) Reply frame received for 1 I0502 13:31:17.875219 6 log.go:172] (0xc0012aed10) (0xc003260000) Create stream I0502 13:31:17.875241 6 log.go:172] (0xc0012aed10) (0xc003260000) Stream added, broadcasting: 3 I0502 13:31:17.876201 6 log.go:172] (0xc0012aed10) Reply frame received for 3 I0502 13:31:17.876236 6 log.go:172] (0xc0012aed10) (0xc0002115e0) Create stream I0502 13:31:17.876251 6 log.go:172] (0xc0012aed10) (0xc0002115e0) Stream added, broadcasting: 5 I0502 13:31:17.877239 6 log.go:172] (0xc0012aed10) Reply frame received for 5 I0502 13:31:17.947005 6 log.go:172] (0xc0012aed10) Data frame received for 3 I0502 13:31:17.947090 6 log.go:172] (0xc003260000) (3) Data frame handling I0502 13:31:17.947114 6 log.go:172] (0xc003260000) (3) Data frame sent I0502 13:31:17.947122 6 log.go:172] (0xc0012aed10) Data frame received for 3 I0502 13:31:17.947129 6 log.go:172] (0xc003260000) (3) Data frame handling I0502 13:31:17.947340 6 log.go:172] (0xc0012aed10) Data frame received for 5 I0502 13:31:17.947368 6 log.go:172] (0xc0002115e0) (5) Data frame handling I0502 13:31:17.948800 6 log.go:172] (0xc0012aed10) Data frame received for 1 I0502 13:31:17.948825 6 log.go:172] (0xc000668d20) (1) Data frame handling I0502 13:31:17.948843 6 log.go:172] (0xc000668d20) (1) Data frame sent I0502 13:31:17.948861 6 log.go:172] (0xc0012aed10) (0xc000668d20) Stream removed, broadcasting: 1 I0502 13:31:17.948899 6 log.go:172] (0xc0012aed10) Go away received I0502 13:31:17.948973 6 log.go:172] (0xc0012aed10) (0xc000668d20) Stream removed, broadcasting: 1 I0502 13:31:17.948987 6 log.go:172] (0xc0012aed10) (0xc003260000) Stream removed, broadcasting: 3 I0502 13:31:17.948997 6 log.go:172] (0xc0012aed10) (0xc0002115e0) Stream removed, broadcasting: 5 May 2 13:31:17.949: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:31:17.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9537" for this suite. 
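The ExecWithOptions lines and the stream chatter above are the framework shelling into the host-test pod and curling the test webserver's /dial endpoint over the pod network. The plumbing underneath is the pod exec subresource streamed over SPDY; a sketch of that mechanism with client-go follows. The Stream call matches this era's client-go (newer releases also offer StreamWithContext), and the command string would be the curl invocation copied from the log.

```go
package sketches

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod mirrors the ExecWithOptions plumbing visible in the log: POST to
// the pod's exec subresource, then stream stdout/stderr back over SPDY.
func execInPod(cfg *restclient.Config, cs *kubernetes.Clientset, ns, pod, container, cmd string) (string, error) {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   []string{"/bin/sh", "-c", cmd},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		return "", fmt.Errorf("exec failed: %v (stderr: %s)", err, stderr.String())
	}
	return stdout.String(), nil
}
```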
May 2 13:31:43.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:31:44.096: INFO: namespace pod-network-test-9537 deletion completed in 26.143407399s • [SLOW TEST:50.652 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:31:44.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-5rs5 STEP: Creating a pod to test atomic-volume-subpath May 2 13:31:44.246: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5rs5" in namespace "subpath-6885" to be "success or failure" May 2 13:31:44.261: INFO: Pod "pod-subpath-test-secret-5rs5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.972684ms May 2 13:31:46.264: INFO: Pod "pod-subpath-test-secret-5rs5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018170384s May 2 13:31:48.268: INFO: Pod "pod-subpath-test-secret-5rs5": Phase="Running", Reason="", readiness=true. Elapsed: 4.022538066s May 2 13:31:50.273: INFO: Pod "pod-subpath-test-secret-5rs5": Phase="Running", Reason="", readiness=true. Elapsed: 6.026702068s May 2 13:31:52.288: INFO: Pod "pod-subpath-test-secret-5rs5": Phase="Running", Reason="", readiness=true. Elapsed: 8.042114704s May 2 13:31:54.311: INFO: Pod "pod-subpath-test-secret-5rs5": Phase="Running", Reason="", readiness=true. Elapsed: 10.06508872s May 2 13:31:56.315: INFO: Pod "pod-subpath-test-secret-5rs5": Phase="Running", Reason="", readiness=true. Elapsed: 12.068868203s May 2 13:31:58.318: INFO: Pod "pod-subpath-test-secret-5rs5": Phase="Running", Reason="", readiness=true. Elapsed: 14.07257029s May 2 13:32:00.323: INFO: Pod "pod-subpath-test-secret-5rs5": Phase="Running", Reason="", readiness=true. Elapsed: 16.076793093s May 2 13:32:02.335: INFO: Pod "pod-subpath-test-secret-5rs5": Phase="Running", Reason="", readiness=true. Elapsed: 18.089156783s May 2 13:32:04.340: INFO: Pod "pod-subpath-test-secret-5rs5": Phase="Running", Reason="", readiness=true. Elapsed: 20.094183067s May 2 13:32:06.344: INFO: Pod "pod-subpath-test-secret-5rs5": Phase="Running", Reason="", readiness=true. Elapsed: 22.098491165s May 2 13:32:08.383: INFO: Pod "pod-subpath-test-secret-5rs5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.137187441s STEP: Saw pod success May 2 13:32:08.383: INFO: Pod "pod-subpath-test-secret-5rs5" satisfied condition "success or failure" May 2 13:32:08.386: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-5rs5 container test-container-subpath-secret-5rs5: STEP: delete the pod May 2 13:32:08.476: INFO: Waiting for pod pod-subpath-test-secret-5rs5 to disappear May 2 13:32:08.557: INFO: Pod pod-subpath-test-secret-5rs5 no longer exists STEP: Deleting pod pod-subpath-test-secret-5rs5 May 2 13:32:08.557: INFO: Deleting pod "pod-subpath-test-secret-5rs5" in namespace "subpath-6885" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:32:08.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6885" for this suite. May 2 13:32:14.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:32:14.658: INFO: namespace subpath-6885 deletion completed in 6.094531424s • [SLOW TEST:30.562 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:32:14.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token May 2 13:32:15.284: INFO: created pod pod-service-account-defaultsa May 2 13:32:15.284: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 2 13:32:15.290: INFO: created pod pod-service-account-mountsa May 2 13:32:15.290: INFO: pod pod-service-account-mountsa service account token volume mount: true May 2 13:32:15.296: INFO: created pod pod-service-account-nomountsa May 2 13:32:15.296: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 2 13:32:15.315: INFO: created pod pod-service-account-defaultsa-mountspec May 2 13:32:15.315: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 2 13:32:15.333: INFO: created pod pod-service-account-mountsa-mountspec May 2 13:32:15.333: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 2 13:32:15.376: INFO: created pod pod-service-account-nomountsa-mountspec May 2 13:32:15.376: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 2 13:32:15.418: INFO: created pod pod-service-account-defaultsa-nomountspec May 2 13:32:15.418: INFO: pod 
pod-service-account-defaultsa-nomountspec service account token volume mount: false May 2 13:32:15.442: INFO: created pod pod-service-account-mountsa-nomountspec May 2 13:32:15.442: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 2 13:32:15.478: INFO: created pod pod-service-account-nomountsa-nomountspec May 2 13:32:15.478: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:32:15.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-741" for this suite. May 2 13:32:43.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:32:43.718: INFO: namespace svcaccounts-741 deletion completed in 28.164081352s • [SLOW TEST:29.059 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:32:43.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:32:48.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1541" for this suite. 
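The ReplicationController spec just above checks adoption rather than creation: the orphan pod already carries the label the RC selects on, so the controller sets an ownerReference on it instead of spawning a replica. A sketch of the controller shape involved; the nginx image is borrowed from elsewhere in this run and all names are illustrative.

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// adoptingRC selects on the same 'name' label the pre-existing orphan pod
// carries, so reconciliation adopts that pod instead of creating a new one.
func adoptingRC() *corev1.ReplicationController {
	one := int32(1)
	labels := map[string]string{"name": "pod-adoption"}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "pod-adoption", Image: "docker.io/library/nginx:1.14-alpine"},
					},
				},
			},
		},
	}
}
```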
May 2 13:33:10.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:33:10.912: INFO: namespace replication-controller-1541 deletion completed in 22.091545533s • [SLOW TEST:27.193 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:33:10.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-4485/secret-test-963313d6-848a-4e13-812c-6fba1ab932da STEP: Creating a pod to test consume secrets May 2 13:33:12.064: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb9e28a1-ccb6-43e8-87cc-53ca1e0872b3" in namespace "secrets-4485" to be "success or failure" May 2 13:33:12.079: INFO: Pod "pod-configmaps-fb9e28a1-ccb6-43e8-87cc-53ca1e0872b3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.503068ms May 2 13:33:14.129: INFO: Pod "pod-configmaps-fb9e28a1-ccb6-43e8-87cc-53ca1e0872b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064500405s May 2 13:33:16.135: INFO: Pod "pod-configmaps-fb9e28a1-ccb6-43e8-87cc-53ca1e0872b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070696491s STEP: Saw pod success May 2 13:33:16.135: INFO: Pod "pod-configmaps-fb9e28a1-ccb6-43e8-87cc-53ca1e0872b3" satisfied condition "success or failure" May 2 13:33:16.137: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-fb9e28a1-ccb6-43e8-87cc-53ca1e0872b3 container env-test: STEP: delete the pod May 2 13:33:16.184: INFO: Waiting for pod pod-configmaps-fb9e28a1-ccb6-43e8-87cc-53ca1e0872b3 to disappear May 2 13:33:16.201: INFO: Pod pod-configmaps-fb9e28a1-ccb6-43e8-87cc-53ca1e0872b3 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:33:16.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4485" for this suite. 
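The Secrets spec above consumes a secret through the environment rather than a volume, via secretKeyRef. A sketch of the pod side, with illustrative secret and key names (the suite's own fixtures differ):

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretEnvPod surfaces one key of a secret as an environment variable; the
// test then checks the container's output for the decoded value.
func secretEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo SECRET_DATA=$SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-demo"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
}
```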
May 2 13:33:22.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:33:22.361: INFO: namespace secrets-4485 deletion completed in 6.156009929s • [SLOW TEST:11.449 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:33:22.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition May 2 13:33:22.463: INFO: Waiting up to 5m0s for pod "var-expansion-c7ad8f67-ca3f-4c08-924b-868e1aa4b331" in namespace "var-expansion-7143" to be "success or failure" May 2 13:33:22.488: INFO: Pod "var-expansion-c7ad8f67-ca3f-4c08-924b-868e1aa4b331": Phase="Pending", Reason="", readiness=false. Elapsed: 25.212816ms May 2 13:33:24.492: INFO: Pod "var-expansion-c7ad8f67-ca3f-4c08-924b-868e1aa4b331": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028928071s May 2 13:33:26.510: INFO: Pod "var-expansion-c7ad8f67-ca3f-4c08-924b-868e1aa4b331": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046689591s STEP: Saw pod success May 2 13:33:26.510: INFO: Pod "var-expansion-c7ad8f67-ca3f-4c08-924b-868e1aa4b331" satisfied condition "success or failure" May 2 13:33:26.513: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-c7ad8f67-ca3f-4c08-924b-868e1aa4b331 container dapi-container: STEP: delete the pod May 2 13:33:26.531: INFO: Waiting for pod var-expansion-c7ad8f67-ca3f-4c08-924b-868e1aa4b331 to disappear May 2 13:33:26.536: INFO: Pod var-expansion-c7ad8f67-ca3f-4c08-924b-868e1aa4b331 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:33:26.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7143" for this suite. 
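The Variable Expansion spec above tests composition: a $(VAR) reference in an env value is substituted with the value of an entry defined earlier in the same list. A sketch of that container shape, names illustrative:

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// composedEnvPod: FOOBAR is built out of FOO and BAR, which must appear
// earlier in the Env list for the references to resolve.
func composedEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// $(...) resolves against entries defined above this one.
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
}
```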
May 2 13:33:32.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:33:32.641: INFO: namespace var-expansion-7143 deletion completed in 6.10250991s • [SLOW TEST:10.280 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:33:32.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 2 13:33:32.745: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:33:42.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2870" for this suite. 
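The RestartAlways variant of the InitContainer spec above checks the ordering guarantee: init containers run one at a time, each must exit 0 before the next starts, and only then do the app containers come up. A sketch under the same stand-in assumptions as before:

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// orderedInitPod: init1 then init2 run to completion in order before run1
// starts. RestartAlways governs the app container; successful init
// containers are not re-run while the pod stays scheduled.
func orderedInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-ordered-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sh", "-c", "sleep 3600"}},
			},
		},
	}
}
```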
May 2 13:34:04.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:34:04.204: INFO: namespace init-container-2870 deletion completed in 22.130676517s • [SLOW TEST:31.562 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:34:04.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 2 13:34:04.270: INFO: Waiting up to 5m0s for pod "pod-7bef6986-f1de-4871-9978-92f3bbaffd23" in namespace "emptydir-1700" to be "success or failure" May 2 13:34:04.274: INFO: Pod "pod-7bef6986-f1de-4871-9978-92f3bbaffd23": Phase="Pending", Reason="", readiness=false. Elapsed: 3.54736ms May 2 13:34:06.278: INFO: Pod "pod-7bef6986-f1de-4871-9978-92f3bbaffd23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007909875s May 2 13:34:08.282: INFO: Pod "pod-7bef6986-f1de-4871-9978-92f3bbaffd23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0116974s STEP: Saw pod success May 2 13:34:08.282: INFO: Pod "pod-7bef6986-f1de-4871-9978-92f3bbaffd23" satisfied condition "success or failure" May 2 13:34:08.284: INFO: Trying to get logs from node iruya-worker2 pod pod-7bef6986-f1de-4871-9978-92f3bbaffd23 container test-container: STEP: delete the pod May 2 13:34:08.322: INFO: Waiting for pod pod-7bef6986-f1de-4871-9978-92f3bbaffd23 to disappear May 2 13:34:08.339: INFO: Pod pod-7bef6986-f1de-4871-9978-92f3bbaffd23 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:34:08.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1700" for this suite. 
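The EmptyDir spec above ("root,0644,default") writes a file into an emptyDir on the node's default medium and verifies its mode back. A sketch of the volume wiring, with busybox standing in for the suite's mounttest image:

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod mounts a default-medium emptyDir and has the container create
// a file with the requested 0644 mode, then print it for verification.
func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
}
```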
May 2 13:34:14.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:34:14.433: INFO: namespace emptydir-1700 deletion completed in 6.090638639s • [SLOW TEST:10.229 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:34:14.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 13:34:18.607: INFO: Waiting up to 5m0s for pod "client-envvars-70f98790-bd93-46bb-b48d-9e3cfd5f81da" in namespace "pods-1577" to be "success or failure" May 2 13:34:18.612: INFO: Pod "client-envvars-70f98790-bd93-46bb-b48d-9e3cfd5f81da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.746061ms May 2 13:34:20.616: INFO: Pod "client-envvars-70f98790-bd93-46bb-b48d-9e3cfd5f81da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008951174s May 2 13:34:22.620: INFO: Pod "client-envvars-70f98790-bd93-46bb-b48d-9e3cfd5f81da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013221694s STEP: Saw pod success May 2 13:34:22.620: INFO: Pod "client-envvars-70f98790-bd93-46bb-b48d-9e3cfd5f81da" satisfied condition "success or failure" May 2 13:34:22.624: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-70f98790-bd93-46bb-b48d-9e3cfd5f81da container env3cont: STEP: delete the pod May 2 13:34:22.677: INFO: Waiting for pod client-envvars-70f98790-bd93-46bb-b48d-9e3cfd5f81da to disappear May 2 13:34:22.690: INFO: Pod client-envvars-70f98790-bd93-46bb-b48d-9e3cfd5f81da no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:34:22.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1577" for this suite. 
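The Pods spec above depends on the kubelet injecting docker-link-style variables (NAME_SERVICE_HOST, NAME_SERVICE_PORT) for every service that already exists when a container starts, which is why the test creates its server pod and service first and the client pod only afterwards. A sketch of the service side, with illustrative names and ports:

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// demoService: any container started after this service exists sees env vars
// such as FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT (the service
// name is upper-cased and hyphens become underscores).
func demoService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "fooservice"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "server"},
			Ports: []corev1.ServicePort{{
				Port:       8765,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
}
```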
May 2 13:35:00.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:35:00.781: INFO: namespace pods-1577 deletion completed in 38.086013821s • [SLOW TEST:46.348 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:35:00.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-098935a4-56a2-4f94-8472-d50e80556515 STEP: Creating secret with name s-test-opt-upd-3279ceb6-f7bc-4dbd-85df-ca0e23c4f66a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-098935a4-56a2-4f94-8472-d50e80556515 STEP: Updating secret s-test-opt-upd-3279ceb6-f7bc-4dbd-85df-ca0e23c4f66a STEP: Creating secret with name s-test-opt-create-003adc29-4163-4b62-b8af-86a2dd595106 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:35:08.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6898" for this suite. 
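The Secrets volume spec above marks its secret references optional, so the pod can start before a referenced secret exists, and the kubelet later projects created or updated data into the mounted files; that propagation is what the "waiting to observe update in volume" step polls for. A sketch of the optional-reference shape, names illustrative:

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// optionalSecretPod references a secret that may not exist yet; Optional
// lets the pod start anyway, and the volume contents track later creates,
// updates, and deletes of the secret.
func optionalSecretPod() *corev1.Pod {
	optional := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-optional-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "creds", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "creds",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-create-demo",
						Optional:   &optional,
					},
				},
			}},
		},
	}
}
```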
May 2 13:35:33.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:35:33.083: INFO: namespace secrets-6898 deletion completed in 24.094818657s • [SLOW TEST:32.301 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:35:33.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command May 2 13:35:33.203: INFO: Waiting up to 5m0s for pod "var-expansion-0b25cda5-3c4c-45ca-85de-c2301e537e30" in namespace "var-expansion-505" to be "success or failure" May 2 13:35:33.285: INFO: Pod "var-expansion-0b25cda5-3c4c-45ca-85de-c2301e537e30": Phase="Pending", Reason="", readiness=false. Elapsed: 81.74489ms May 2 13:35:35.289: INFO: Pod "var-expansion-0b25cda5-3c4c-45ca-85de-c2301e537e30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086307007s May 2 13:35:37.293: INFO: Pod "var-expansion-0b25cda5-3c4c-45ca-85de-c2301e537e30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089981553s STEP: Saw pod success May 2 13:35:37.293: INFO: Pod "var-expansion-0b25cda5-3c4c-45ca-85de-c2301e537e30" satisfied condition "success or failure" May 2 13:35:37.295: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-0b25cda5-3c4c-45ca-85de-c2301e537e30 container dapi-container: STEP: delete the pod May 2 13:35:37.335: INFO: Waiting for pod var-expansion-0b25cda5-3c4c-45ca-85de-c2301e537e30 to disappear May 2 13:35:37.340: INFO: Pod var-expansion-0b25cda5-3c4c-45ca-85de-c2301e537e30 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:35:37.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-505" for this suite. 
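The second Variable Expansion spec above exercises the other substitution site: $(VAR) references are resolved in a container's command and args, against env vars defined on that container, before the process is started. A sketch, names illustrative:

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// commandSubstitutionPod: the kubelet replaces $(TEST_VAR) in the command
// with "test-value" before the shell ever sees the string.
func commandSubstitutionPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-cmd-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
				Command: []string{"sh", "-c", "echo $(TEST_VAR)"},
			}},
		},
	}
}
```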
May 2 13:35:43.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:35:43.851: INFO: namespace var-expansion-505 deletion completed in 6.507236885s • [SLOW TEST:10.768 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:35:43.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-6316d806-7282-4ad9-b540-88622c81f288 STEP: Creating a pod to test consume secrets May 2 13:35:43.905: INFO: Waiting up to 5m0s for pod "pod-secrets-7f6d1aa8-fa7c-4d82-b84d-004045d1aed1" in namespace "secrets-6712" to be "success or failure" May 2 13:35:43.910: INFO: Pod "pod-secrets-7f6d1aa8-fa7c-4d82-b84d-004045d1aed1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237716ms May 2 13:35:45.913: INFO: Pod "pod-secrets-7f6d1aa8-fa7c-4d82-b84d-004045d1aed1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007538174s May 2 13:35:47.917: INFO: Pod "pod-secrets-7f6d1aa8-fa7c-4d82-b84d-004045d1aed1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011404478s STEP: Saw pod success May 2 13:35:47.917: INFO: Pod "pod-secrets-7f6d1aa8-fa7c-4d82-b84d-004045d1aed1" satisfied condition "success or failure" May 2 13:35:47.920: INFO: Trying to get logs from node iruya-worker pod pod-secrets-7f6d1aa8-fa7c-4d82-b84d-004045d1aed1 container secret-volume-test: STEP: delete the pod May 2 13:35:47.948: INFO: Waiting for pod pod-secrets-7f6d1aa8-fa7c-4d82-b84d-004045d1aed1 to disappear May 2 13:35:47.958: INFO: Pod pod-secrets-7f6d1aa8-fa7c-4d82-b84d-004045d1aed1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:35:47.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6712" for this suite. 
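In the secrets-6712 test, "with mappings and Item Mode set" refers to the Items field of the secret volume: individual keys are remapped to custom paths, and each item carries its own file mode instead of the volume-wide default. A sketch, with the key, path, and mode assumed for illustration:

package main

import (
    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Remap a single key of the secret to a custom path and give that
    // file an explicit mode (0400 here), instead of projecting every key
    // with the volume-wide default.
    mode := int32(0400)
    vol := corev1.Volume{
        Name: "secret-volume",
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{
                SecretName: "secret-test-map",
                Items: []corev1.KeyToPath{
                    {Key: "data-1", Path: "new-path-data-1", Mode: &mode},
                },
            },
        },
    }
    _ = vol
}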
May 2 13:35:53.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:35:54.061: INFO: namespace secrets-6712 deletion completed in 6.099814853s • [SLOW TEST:10.209 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:35:54.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 2 13:35:54.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8152' May 2 13:35:56.933: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 2 13:35:56.933: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 2 13:35:56.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8152' May 2 13:35:57.050: INFO: stderr: "" May 2 13:35:57.050: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:35:57.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8152" for this suite. 
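The deprecation warning in the kubectl output above is expected: the job/v1 generator was already on its way out in this release. Constructed directly against the API, the object that generator submits is roughly the following (image name and job name from the log; the container layout is illustrative):

package main

import (
    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A Job whose pod template restarts failed containers in place
    // (OnFailure) rather than creating replacement pods, which is what
    // --restart=OnFailure selects.
    job := &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
        Spec: batchv1.JobSpec{
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    RestartPolicy: corev1.RestartPolicyOnFailure,
                    Containers: []corev1.Container{
                        {Name: "e2e-test-nginx-job", Image: "docker.io/library/nginx:1.14-alpine"},
                    },
                },
            },
        },
    }
    _ = job
}

This matches the warning's own suggestion to use kubectl create (here, kubectl create job) instead of the generator.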
May 2 13:36:03.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:36:03.168: INFO: namespace kubectl-8152 deletion completed in 6.092878631s • [SLOW TEST:9.107 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:36:03.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-wpmq STEP: Creating a pod to test atomic-volume-subpath May 2 13:36:03.275: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-wpmq" in namespace "subpath-3657" to be "success or failure" May 2 13:36:03.279: INFO: Pod "pod-subpath-test-downwardapi-wpmq": Phase="Pending", Reason="", readiness=false. Elapsed: 3.887712ms May 2 13:36:05.284: INFO: Pod "pod-subpath-test-downwardapi-wpmq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008716344s May 2 13:36:07.286: INFO: Pod "pod-subpath-test-downwardapi-wpmq": Phase="Running", Reason="", readiness=true. Elapsed: 4.011342182s May 2 13:36:09.291: INFO: Pod "pod-subpath-test-downwardapi-wpmq": Phase="Running", Reason="", readiness=true. Elapsed: 6.01606947s May 2 13:36:11.296: INFO: Pod "pod-subpath-test-downwardapi-wpmq": Phase="Running", Reason="", readiness=true. Elapsed: 8.020573885s May 2 13:36:13.300: INFO: Pod "pod-subpath-test-downwardapi-wpmq": Phase="Running", Reason="", readiness=true. Elapsed: 10.025157298s May 2 13:36:15.305: INFO: Pod "pod-subpath-test-downwardapi-wpmq": Phase="Running", Reason="", readiness=true. Elapsed: 12.030043864s May 2 13:36:17.320: INFO: Pod "pod-subpath-test-downwardapi-wpmq": Phase="Running", Reason="", readiness=true. Elapsed: 14.044923793s May 2 13:36:19.324: INFO: Pod "pod-subpath-test-downwardapi-wpmq": Phase="Running", Reason="", readiness=true. Elapsed: 16.049234899s May 2 13:36:21.329: INFO: Pod "pod-subpath-test-downwardapi-wpmq": Phase="Running", Reason="", readiness=true. Elapsed: 18.053945145s May 2 13:36:23.333: INFO: Pod "pod-subpath-test-downwardapi-wpmq": Phase="Running", Reason="", readiness=true. Elapsed: 20.058296079s May 2 13:36:25.338: INFO: Pod "pod-subpath-test-downwardapi-wpmq": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.06269886s May 2 13:36:27.345: INFO: Pod "pod-subpath-test-downwardapi-wpmq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.069977513s STEP: Saw pod success May 2 13:36:27.345: INFO: Pod "pod-subpath-test-downwardapi-wpmq" satisfied condition "success or failure" May 2 13:36:27.348: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-wpmq container test-container-subpath-downwardapi-wpmq: STEP: delete the pod May 2 13:36:27.373: INFO: Waiting for pod pod-subpath-test-downwardapi-wpmq to disappear May 2 13:36:27.426: INFO: Pod pod-subpath-test-downwardapi-wpmq no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-wpmq May 2 13:36:27.426: INFO: Deleting pod "pod-subpath-test-downwardapi-wpmq" in namespace "subpath-3657" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:36:27.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3657" for this suite. May 2 13:36:33.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:36:33.578: INFO: namespace subpath-3657 deletion completed in 6.145641034s • [SLOW TEST:30.410 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:36:33.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 13:36:33.656: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 2 13:36:35.716: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:36:36.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9936" for this suite. 
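The quota/RC interaction above is driven by admission control: the quota admits only two pods in the namespace, so an RC asking for more records a ReplicaFailure condition in its status until it is scaled back within the quota. A sketch of the quota object (the name comes from the log; "pods: 2" is inferred from the test's own description of allowing only two pods):

package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A quota capping the namespace at two pods; an RC that asks for
    // three will surface a ReplicaFailure condition in its status until
    // it is scaled back within the quota.
    quota := &corev1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
        Spec: corev1.ResourceQuotaSpec{
            Hard: corev1.ResourceList{
                corev1.ResourcePods: resource.MustParse("2"),
            },
        },
    }
    _ = quota
}

Scaling the RC's spec.replicas back down, as the test does, clears the condition on the next controller sync.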
May 2 13:36:43.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:36:44.466: INFO: namespace replication-controller-9936 deletion completed in 7.725835418s • [SLOW TEST:10.888 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:36:44.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 2 13:36:44.575: INFO: Waiting up to 5m0s for pod "pod-f29945a2-1f4e-4616-9ef5-fe16c840b366" in namespace "emptydir-8382" to be "success or failure" May 2 13:36:44.595: INFO: Pod "pod-f29945a2-1f4e-4616-9ef5-fe16c840b366": Phase="Pending", Reason="", readiness=false. Elapsed: 19.599772ms May 2 13:36:46.599: INFO: Pod "pod-f29945a2-1f4e-4616-9ef5-fe16c840b366": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023805635s May 2 13:36:48.606: INFO: Pod "pod-f29945a2-1f4e-4616-9ef5-fe16c840b366": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030389342s STEP: Saw pod success May 2 13:36:48.606: INFO: Pod "pod-f29945a2-1f4e-4616-9ef5-fe16c840b366" satisfied condition "success or failure" May 2 13:36:48.609: INFO: Trying to get logs from node iruya-worker2 pod pod-f29945a2-1f4e-4616-9ef5-fe16c840b366 container test-container: STEP: delete the pod May 2 13:36:48.628: INFO: Waiting for pod pod-f29945a2-1f4e-4616-9ef5-fe16c840b366 to disappear May 2 13:36:48.633: INFO: Pod pod-f29945a2-1f4e-4616-9ef5-fe16c840b366 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:36:48.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8382" for this suite. 
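In the emptydir test name, "(root,0666,default)" encodes the writing user, the file mode being checked, and the emptyDir medium. The volume definition itself is plain; the 0666 check is performed by the test image writing into the mount and stat-ing the result. A sketch of the volume and its mount, with the names assumed:

package main

import (
    corev1 "k8s.io/api/core/v1"
)

func main() {
    // An emptyDir on the node's default storage medium (as opposed to
    // Memory/tmpfs); the mode assertion happens inside the container,
    // not in the volume spec.
    vol := corev1.Volume{
        Name: "test-volume",
        VolumeSource: corev1.VolumeSource{
            EmptyDir: &corev1.EmptyDirVolumeSource{
                Medium: corev1.StorageMediumDefault,
            },
        },
    }
    mount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
    _, _ = vol, mount
}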
May 2 13:36:54.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:36:54.727: INFO: namespace emptydir-8382 deletion completed in 6.08978519s • [SLOW TEST:10.261 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:36:54.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1062 STEP: creating a selector STEP: Creating the service pods in kubernetes May 2 13:36:54.822: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 2 13:37:20.964: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.16 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1062 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:37:20.964: INFO: >>> kubeConfig: /root/.kube/config I0502 13:37:20.998945 6 log.go:172] (0xc001227130) (0xc001e292c0) Create stream I0502 13:37:20.998978 6 log.go:172] (0xc001227130) (0xc001e292c0) Stream added, broadcasting: 1 I0502 13:37:21.001050 6 log.go:172] (0xc001227130) Reply frame received for 1 I0502 13:37:21.001294 6 log.go:172] (0xc001227130) (0xc001e29360) Create stream I0502 13:37:21.001318 6 log.go:172] (0xc001227130) (0xc001e29360) Stream added, broadcasting: 3 I0502 13:37:21.002459 6 log.go:172] (0xc001227130) Reply frame received for 3 I0502 13:37:21.002510 6 log.go:172] (0xc001227130) (0xc001557cc0) Create stream I0502 13:37:21.002527 6 log.go:172] (0xc001227130) (0xc001557cc0) Stream added, broadcasting: 5 I0502 13:37:21.003628 6 log.go:172] (0xc001227130) Reply frame received for 5 I0502 13:37:22.102621 6 log.go:172] (0xc001227130) Data frame received for 5 I0502 13:37:22.102663 6 log.go:172] (0xc001557cc0) (5) Data frame handling I0502 13:37:22.102688 6 log.go:172] (0xc001227130) Data frame received for 3 I0502 13:37:22.102701 6 log.go:172] (0xc001e29360) (3) Data frame handling I0502 13:37:22.102713 6 log.go:172] (0xc001e29360) (3) Data frame sent I0502 13:37:22.102727 6 log.go:172] (0xc001227130) Data frame received for 3 I0502 13:37:22.102739 6 log.go:172] (0xc001e29360) (3) Data frame handling I0502 13:37:22.105353 6 log.go:172] (0xc001227130) Data frame received for 1 I0502 13:37:22.105382 6 log.go:172] (0xc001e292c0) (1) Data frame handling I0502 13:37:22.105395 6 log.go:172] (0xc001e292c0) (1) Data frame sent I0502 13:37:22.105423 6 
log.go:172] (0xc001227130) (0xc001e292c0) Stream removed, broadcasting: 1 I0502 13:37:22.105449 6 log.go:172] (0xc001227130) Go away received I0502 13:37:22.105787 6 log.go:172] (0xc001227130) (0xc001e292c0) Stream removed, broadcasting: 1 I0502 13:37:22.105814 6 log.go:172] (0xc001227130) (0xc001e29360) Stream removed, broadcasting: 3 I0502 13:37:22.105837 6 log.go:172] (0xc001227130) (0xc001557cc0) Stream removed, broadcasting: 5 May 2 13:37:22.105: INFO: Found all expected endpoints: [netserver-0] May 2 13:37:22.109: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.90 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1062 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:37:22.109: INFO: >>> kubeConfig: /root/.kube/config I0502 13:37:22.143813 6 log.go:172] (0xc0018f6160) (0xc001e29720) Create stream I0502 13:37:22.143839 6 log.go:172] (0xc0018f6160) (0xc001e29720) Stream added, broadcasting: 1 I0502 13:37:22.146285 6 log.go:172] (0xc0018f6160) Reply frame received for 1 I0502 13:37:22.146360 6 log.go:172] (0xc0018f6160) (0xc00220d220) Create stream I0502 13:37:22.146401 6 log.go:172] (0xc0018f6160) (0xc00220d220) Stream added, broadcasting: 3 I0502 13:37:22.147819 6 log.go:172] (0xc0018f6160) Reply frame received for 3 I0502 13:37:22.147877 6 log.go:172] (0xc0018f6160) (0xc001557e00) Create stream I0502 13:37:22.147916 6 log.go:172] (0xc0018f6160) (0xc001557e00) Stream added, broadcasting: 5 I0502 13:37:22.149891 6 log.go:172] (0xc0018f6160) Reply frame received for 5 I0502 13:37:23.213845 6 log.go:172] (0xc0018f6160) Data frame received for 5 I0502 13:37:23.213874 6 log.go:172] (0xc001557e00) (5) Data frame handling I0502 13:37:23.213918 6 log.go:172] (0xc0018f6160) Data frame received for 3 I0502 13:37:23.213972 6 log.go:172] (0xc00220d220) (3) Data frame handling I0502 13:37:23.213999 6 log.go:172] (0xc00220d220) (3) Data frame sent I0502 13:37:23.214020 6 log.go:172] (0xc0018f6160) Data frame received for 3 I0502 13:37:23.214033 6 log.go:172] (0xc00220d220) (3) Data frame handling I0502 13:37:23.215992 6 log.go:172] (0xc0018f6160) Data frame received for 1 I0502 13:37:23.216017 6 log.go:172] (0xc001e29720) (1) Data frame handling I0502 13:37:23.216038 6 log.go:172] (0xc001e29720) (1) Data frame sent I0502 13:37:23.216056 6 log.go:172] (0xc0018f6160) (0xc001e29720) Stream removed, broadcasting: 1 I0502 13:37:23.216082 6 log.go:172] (0xc0018f6160) Go away received I0502 13:37:23.216262 6 log.go:172] (0xc0018f6160) (0xc001e29720) Stream removed, broadcasting: 1 I0502 13:37:23.216304 6 log.go:172] (0xc0018f6160) (0xc00220d220) Stream removed, broadcasting: 3 I0502 13:37:23.216323 6 log.go:172] (0xc0018f6160) (0xc001557e00) Stream removed, broadcasting: 5 May 2 13:37:23.216: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:37:23.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1062" for this suite. 
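Each of the verbose I0502 stream blocks above is one exec into host-test-container-pod running echo hostName | nc -w 1 -u <podIP> 8081; the netserver pods answer a UDP datagram reading "hostName" with their own hostname, which is how the test confirms node-to-pod UDP reachability. A standalone Go sketch of the same probe (the address is the one from the log; this is only meaningful from inside the cluster's pod network):

package main

import (
    "fmt"
    "net"
    "time"
)

// udpHostname reimplements the `echo hostName | nc -w 1 -u <ip> 8081`
// check from the log: send the literal command "hostName" to the
// netserver pod's UDP port and read back the pod's hostname.
func udpHostname(addr string) (string, error) {
    conn, err := net.Dial("udp", addr)
    if err != nil {
        return "", err
    }
    defer conn.Close()

    if _, err := conn.Write([]byte("hostName")); err != nil {
        return "", err
    }
    conn.SetReadDeadline(time.Now().Add(time.Second)) // mirrors nc -w 1
    buf := make([]byte, 1024)
    n, err := conn.Read(buf)
    if err != nil {
        return "", err
    }
    return string(buf[:n]), nil
}

func main() {
    name, err := udpHostname("10.244.2.16:8081")
    fmt.Println(name, err)
}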
May 2 13:37:45.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:37:45.374: INFO: namespace pod-network-test-1062 deletion completed in 22.15294229s • [SLOW TEST:50.646 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:37:45.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 2 13:37:50.009: INFO: Successfully updated pod "labelsupdate65d0d41d-3518-4711-9646-cde73bd67683" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:37:54.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9983" for this suite. 
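The "labels" file in the projected downwardAPI volume above is rewritten by the kubelet after the pod's own labels are updated, which is the change the test waits to observe. A sketch of such a volume (the volume name and the "labels" path are assumed):

package main

import (
    corev1 "k8s.io/api/core/v1"
)

func main() {
    // A projected volume exposing the pod's own labels as a file; the
    // kubelet refreshes the file when the labels change.
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{
                    {
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{
                                {
                                    Path:     "labels",
                                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                                },
                            },
                        },
                    },
                },
            },
        },
    }
    _ = vol
}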
May 2 13:38:16.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:38:16.144: INFO: namespace projected-9983 deletion completed in 22.090714354s • [SLOW TEST:30.769 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:38:16.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 2 13:38:26.314: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4446 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:38:26.314: INFO: >>> kubeConfig: /root/.kube/config I0502 13:38:26.344726 6 log.go:172] (0xc00268a580) (0xc000f875e0) Create stream I0502 13:38:26.344755 6 log.go:172] (0xc00268a580) (0xc000f875e0) Stream added, broadcasting: 1 I0502 13:38:26.346615 6 log.go:172] (0xc00268a580) Reply frame received for 1 I0502 13:38:26.346656 6 log.go:172] (0xc00268a580) (0xc0023d3360) Create stream I0502 13:38:26.346671 6 log.go:172] (0xc00268a580) (0xc0023d3360) Stream added, broadcasting: 3 I0502 13:38:26.347570 6 log.go:172] (0xc00268a580) Reply frame received for 3 I0502 13:38:26.347621 6 log.go:172] (0xc00268a580) (0xc0005f6140) Create stream I0502 13:38:26.347636 6 log.go:172] (0xc00268a580) (0xc0005f6140) Stream added, broadcasting: 5 I0502 13:38:26.348410 6 log.go:172] (0xc00268a580) Reply frame received for 5 I0502 13:38:26.435692 6 log.go:172] (0xc00268a580) Data frame received for 5 I0502 13:38:26.435730 6 log.go:172] (0xc0005f6140) (5) Data frame handling I0502 13:38:26.435750 6 log.go:172] (0xc00268a580) Data frame received for 3 I0502 13:38:26.435768 6 log.go:172] (0xc0023d3360) (3) Data frame handling I0502 13:38:26.435786 6 log.go:172] (0xc0023d3360) (3) Data frame sent I0502 13:38:26.435795 6 log.go:172] (0xc00268a580) Data frame received for 3 I0502 13:38:26.435803 6 log.go:172] (0xc0023d3360) (3) Data frame handling I0502 13:38:26.437442 6 log.go:172] (0xc00268a580) Data frame received for 1 I0502 13:38:26.437471 6 log.go:172] (0xc000f875e0) (1) Data frame handling I0502 13:38:26.437501 6 log.go:172] (0xc000f875e0) (1) Data frame sent I0502 13:38:26.437522 6 log.go:172] (0xc00268a580) (0xc000f875e0) Stream removed, broadcasting: 1 I0502 13:38:26.437648 6 
log.go:172] (0xc00268a580) (0xc000f875e0) Stream removed, broadcasting: 1 I0502 13:38:26.437671 6 log.go:172] (0xc00268a580) (0xc0023d3360) Stream removed, broadcasting: 3 I0502 13:38:26.437766 6 log.go:172] (0xc00268a580) Go away received I0502 13:38:26.437892 6 log.go:172] (0xc00268a580) (0xc0005f6140) Stream removed, broadcasting: 5 May 2 13:38:26.437: INFO: Exec stderr: "" May 2 13:38:26.437: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4446 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:38:26.438: INFO: >>> kubeConfig: /root/.kube/config I0502 13:38:26.470404 6 log.go:172] (0xc001ae3340) (0xc0023d3680) Create stream I0502 13:38:26.470434 6 log.go:172] (0xc001ae3340) (0xc0023d3680) Stream added, broadcasting: 1 I0502 13:38:26.472190 6 log.go:172] (0xc001ae3340) Reply frame received for 1 I0502 13:38:26.472238 6 log.go:172] (0xc001ae3340) (0xc001a72fa0) Create stream I0502 13:38:26.472254 6 log.go:172] (0xc001ae3340) (0xc001a72fa0) Stream added, broadcasting: 3 I0502 13:38:26.473275 6 log.go:172] (0xc001ae3340) Reply frame received for 3 I0502 13:38:26.473300 6 log.go:172] (0xc001ae3340) (0xc00220d7c0) Create stream I0502 13:38:26.473308 6 log.go:172] (0xc001ae3340) (0xc00220d7c0) Stream added, broadcasting: 5 I0502 13:38:26.474379 6 log.go:172] (0xc001ae3340) Reply frame received for 5 I0502 13:38:26.545481 6 log.go:172] (0xc001ae3340) Data frame received for 5 I0502 13:38:26.545516 6 log.go:172] (0xc00220d7c0) (5) Data frame handling I0502 13:38:26.545561 6 log.go:172] (0xc001ae3340) Data frame received for 3 I0502 13:38:26.545621 6 log.go:172] (0xc001a72fa0) (3) Data frame handling I0502 13:38:26.545655 6 log.go:172] (0xc001a72fa0) (3) Data frame sent I0502 13:38:26.545678 6 log.go:172] (0xc001ae3340) Data frame received for 3 I0502 13:38:26.545694 6 log.go:172] (0xc001a72fa0) (3) Data frame handling I0502 13:38:26.547339 6 log.go:172] (0xc001ae3340) Data frame received for 1 I0502 13:38:26.547371 6 log.go:172] (0xc0023d3680) (1) Data frame handling I0502 13:38:26.547394 6 log.go:172] (0xc0023d3680) (1) Data frame sent I0502 13:38:26.547409 6 log.go:172] (0xc001ae3340) (0xc0023d3680) Stream removed, broadcasting: 1 I0502 13:38:26.547502 6 log.go:172] (0xc001ae3340) (0xc0023d3680) Stream removed, broadcasting: 1 I0502 13:38:26.547567 6 log.go:172] (0xc001ae3340) (0xc001a72fa0) Stream removed, broadcasting: 3 I0502 13:38:26.547609 6 log.go:172] (0xc001ae3340) Go away received I0502 13:38:26.547691 6 log.go:172] (0xc001ae3340) (0xc00220d7c0) Stream removed, broadcasting: 5 May 2 13:38:26.547: INFO: Exec stderr: "" May 2 13:38:26.547: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4446 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:38:26.547: INFO: >>> kubeConfig: /root/.kube/config I0502 13:38:26.574605 6 log.go:172] (0xc001ae3b80) (0xc0023d3900) Create stream I0502 13:38:26.574631 6 log.go:172] (0xc001ae3b80) (0xc0023d3900) Stream added, broadcasting: 1 I0502 13:38:26.577732 6 log.go:172] (0xc001ae3b80) Reply frame received for 1 I0502 13:38:26.577795 6 log.go:172] (0xc001ae3b80) (0xc001a73040) Create stream I0502 13:38:26.577808 6 log.go:172] (0xc001ae3b80) (0xc001a73040) Stream added, broadcasting: 3 I0502 13:38:26.578853 6 log.go:172] (0xc001ae3b80) Reply frame received for 3 I0502 13:38:26.578891 6 log.go:172] (0xc001ae3b80) (0xc0005f6280) Create stream I0502 
13:38:26.578905 6 log.go:172] (0xc001ae3b80) (0xc0005f6280) Stream added, broadcasting: 5 I0502 13:38:26.579910 6 log.go:172] (0xc001ae3b80) Reply frame received for 5 I0502 13:38:26.652126 6 log.go:172] (0xc001ae3b80) Data frame received for 5 I0502 13:38:26.652165 6 log.go:172] (0xc0005f6280) (5) Data frame handling I0502 13:38:26.652184 6 log.go:172] (0xc001ae3b80) Data frame received for 3 I0502 13:38:26.652192 6 log.go:172] (0xc001a73040) (3) Data frame handling I0502 13:38:26.652199 6 log.go:172] (0xc001a73040) (3) Data frame sent I0502 13:38:26.652206 6 log.go:172] (0xc001ae3b80) Data frame received for 3 I0502 13:38:26.652210 6 log.go:172] (0xc001a73040) (3) Data frame handling I0502 13:38:26.653950 6 log.go:172] (0xc001ae3b80) Data frame received for 1 I0502 13:38:26.653967 6 log.go:172] (0xc0023d3900) (1) Data frame handling I0502 13:38:26.653975 6 log.go:172] (0xc0023d3900) (1) Data frame sent I0502 13:38:26.653983 6 log.go:172] (0xc001ae3b80) (0xc0023d3900) Stream removed, broadcasting: 1 I0502 13:38:26.654040 6 log.go:172] (0xc001ae3b80) Go away received I0502 13:38:26.654083 6 log.go:172] (0xc001ae3b80) (0xc0023d3900) Stream removed, broadcasting: 1 I0502 13:38:26.654094 6 log.go:172] (0xc001ae3b80) (0xc001a73040) Stream removed, broadcasting: 3 I0502 13:38:26.654100 6 log.go:172] (0xc001ae3b80) (0xc0005f6280) Stream removed, broadcasting: 5 May 2 13:38:26.654: INFO: Exec stderr: "" May 2 13:38:26.654: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4446 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:38:26.654: INFO: >>> kubeConfig: /root/.kube/config I0502 13:38:26.689405 6 log.go:172] (0xc002885760) (0xc001a73360) Create stream I0502 13:38:26.689559 6 log.go:172] (0xc002885760) (0xc001a73360) Stream added, broadcasting: 1 I0502 13:38:26.692163 6 log.go:172] (0xc002885760) Reply frame received for 1 I0502 13:38:26.692213 6 log.go:172] (0xc002885760) (0xc001a734a0) Create stream I0502 13:38:26.692240 6 log.go:172] (0xc002885760) (0xc001a734a0) Stream added, broadcasting: 3 I0502 13:38:26.693741 6 log.go:172] (0xc002885760) Reply frame received for 3 I0502 13:38:26.693783 6 log.go:172] (0xc002885760) (0xc001a735e0) Create stream I0502 13:38:26.693807 6 log.go:172] (0xc002885760) (0xc001a735e0) Stream added, broadcasting: 5 I0502 13:38:26.694841 6 log.go:172] (0xc002885760) Reply frame received for 5 I0502 13:38:26.756837 6 log.go:172] (0xc002885760) Data frame received for 5 I0502 13:38:26.756868 6 log.go:172] (0xc001a735e0) (5) Data frame handling I0502 13:38:26.756900 6 log.go:172] (0xc002885760) Data frame received for 3 I0502 13:38:26.756920 6 log.go:172] (0xc001a734a0) (3) Data frame handling I0502 13:38:26.756939 6 log.go:172] (0xc001a734a0) (3) Data frame sent I0502 13:38:26.756953 6 log.go:172] (0xc002885760) Data frame received for 3 I0502 13:38:26.756963 6 log.go:172] (0xc001a734a0) (3) Data frame handling I0502 13:38:26.758885 6 log.go:172] (0xc002885760) Data frame received for 1 I0502 13:38:26.758908 6 log.go:172] (0xc001a73360) (1) Data frame handling I0502 13:38:26.758929 6 log.go:172] (0xc001a73360) (1) Data frame sent I0502 13:38:26.758941 6 log.go:172] (0xc002885760) (0xc001a73360) Stream removed, broadcasting: 1 I0502 13:38:26.759002 6 log.go:172] (0xc002885760) Go away received I0502 13:38:26.759018 6 log.go:172] (0xc002885760) (0xc001a73360) Stream removed, broadcasting: 1 I0502 13:38:26.759029 6 log.go:172] (0xc002885760) (0xc001a734a0) 
Stream removed, broadcasting: 3 I0502 13:38:26.759036 6 log.go:172] (0xc002885760) (0xc001a735e0) Stream removed, broadcasting: 5 May 2 13:38:26.759: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 2 13:38:26.759: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4446 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:38:26.759: INFO: >>> kubeConfig: /root/.kube/config I0502 13:38:26.791183 6 log.go:172] (0xc0027f1340) (0xc00220dc20) Create stream I0502 13:38:26.791214 6 log.go:172] (0xc0027f1340) (0xc00220dc20) Stream added, broadcasting: 1 I0502 13:38:26.793345 6 log.go:172] (0xc0027f1340) Reply frame received for 1 I0502 13:38:26.793373 6 log.go:172] (0xc0027f1340) (0xc001a73720) Create stream I0502 13:38:26.793384 6 log.go:172] (0xc0027f1340) (0xc001a73720) Stream added, broadcasting: 3 I0502 13:38:26.794165 6 log.go:172] (0xc0027f1340) Reply frame received for 3 I0502 13:38:26.794191 6 log.go:172] (0xc0027f1340) (0xc0023d39a0) Create stream I0502 13:38:26.794200 6 log.go:172] (0xc0027f1340) (0xc0023d39a0) Stream added, broadcasting: 5 I0502 13:38:26.794767 6 log.go:172] (0xc0027f1340) Reply frame received for 5 I0502 13:38:26.847881 6 log.go:172] (0xc0027f1340) Data frame received for 5 I0502 13:38:26.847928 6 log.go:172] (0xc0023d39a0) (5) Data frame handling I0502 13:38:26.847990 6 log.go:172] (0xc0027f1340) Data frame received for 3 I0502 13:38:26.848011 6 log.go:172] (0xc001a73720) (3) Data frame handling I0502 13:38:26.848036 6 log.go:172] (0xc001a73720) (3) Data frame sent I0502 13:38:26.848046 6 log.go:172] (0xc0027f1340) Data frame received for 3 I0502 13:38:26.848052 6 log.go:172] (0xc001a73720) (3) Data frame handling I0502 13:38:26.849347 6 log.go:172] (0xc0027f1340) Data frame received for 1 I0502 13:38:26.849370 6 log.go:172] (0xc00220dc20) (1) Data frame handling I0502 13:38:26.849379 6 log.go:172] (0xc00220dc20) (1) Data frame sent I0502 13:38:26.849390 6 log.go:172] (0xc0027f1340) (0xc00220dc20) Stream removed, broadcasting: 1 I0502 13:38:26.849435 6 log.go:172] (0xc0027f1340) Go away received I0502 13:38:26.849475 6 log.go:172] (0xc0027f1340) (0xc00220dc20) Stream removed, broadcasting: 1 I0502 13:38:26.849488 6 log.go:172] (0xc0027f1340) (0xc001a73720) Stream removed, broadcasting: 3 I0502 13:38:26.849500 6 log.go:172] (0xc0027f1340) (0xc0023d39a0) Stream removed, broadcasting: 5 May 2 13:38:26.849: INFO: Exec stderr: "" May 2 13:38:26.849: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4446 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:38:26.849: INFO: >>> kubeConfig: /root/.kube/config I0502 13:38:26.885662 6 log.go:172] (0xc002862420) (0xc00220df40) Create stream I0502 13:38:26.885698 6 log.go:172] (0xc002862420) (0xc00220df40) Stream added, broadcasting: 1 I0502 13:38:26.888381 6 log.go:172] (0xc002862420) Reply frame received for 1 I0502 13:38:26.888410 6 log.go:172] (0xc002862420) (0xc0023d3a40) Create stream I0502 13:38:26.888424 6 log.go:172] (0xc002862420) (0xc0023d3a40) Stream added, broadcasting: 3 I0502 13:38:26.889538 6 log.go:172] (0xc002862420) Reply frame received for 3 I0502 13:38:26.889574 6 log.go:172] (0xc002862420) (0xc000f87680) Create stream I0502 13:38:26.889590 6 log.go:172] (0xc002862420) (0xc000f87680) Stream added, broadcasting: 5 I0502 
13:38:26.890532 6 log.go:172] (0xc002862420) Reply frame received for 5 I0502 13:38:26.945755 6 log.go:172] (0xc002862420) Data frame received for 3 I0502 13:38:26.945828 6 log.go:172] (0xc0023d3a40) (3) Data frame handling I0502 13:38:26.945848 6 log.go:172] (0xc0023d3a40) (3) Data frame sent I0502 13:38:26.945871 6 log.go:172] (0xc002862420) Data frame received for 3 I0502 13:38:26.945888 6 log.go:172] (0xc0023d3a40) (3) Data frame handling I0502 13:38:26.945918 6 log.go:172] (0xc002862420) Data frame received for 5 I0502 13:38:26.945947 6 log.go:172] (0xc000f87680) (5) Data frame handling I0502 13:38:26.946995 6 log.go:172] (0xc002862420) Data frame received for 1 I0502 13:38:26.947022 6 log.go:172] (0xc00220df40) (1) Data frame handling I0502 13:38:26.947045 6 log.go:172] (0xc00220df40) (1) Data frame sent I0502 13:38:26.947061 6 log.go:172] (0xc002862420) (0xc00220df40) Stream removed, broadcasting: 1 I0502 13:38:26.947153 6 log.go:172] (0xc002862420) (0xc00220df40) Stream removed, broadcasting: 1 I0502 13:38:26.947168 6 log.go:172] (0xc002862420) (0xc0023d3a40) Stream removed, broadcasting: 3 I0502 13:38:26.947177 6 log.go:172] (0xc002862420) (0xc000f87680) Stream removed, broadcasting: 5 May 2 13:38:26.947: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 2 13:38:26.947: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4446 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:38:26.947: INFO: >>> kubeConfig: /root/.kube/config I0502 13:38:26.947301 6 log.go:172] (0xc002862420) Go away received I0502 13:38:26.984411 6 log.go:172] (0xc0021d2fd0) (0xc0023d3d60) Create stream I0502 13:38:26.984445 6 log.go:172] (0xc0021d2fd0) (0xc0023d3d60) Stream added, broadcasting: 1 I0502 13:38:26.991856 6 log.go:172] (0xc0021d2fd0) Reply frame received for 1 I0502 13:38:26.992004 6 log.go:172] (0xc0021d2fd0) (0xc0023d3ea0) Create stream I0502 13:38:26.992081 6 log.go:172] (0xc0021d2fd0) (0xc0023d3ea0) Stream added, broadcasting: 3 I0502 13:38:26.993760 6 log.go:172] (0xc0021d2fd0) Reply frame received for 3 I0502 13:38:26.993807 6 log.go:172] (0xc0021d2fd0) (0xc001ca2000) Create stream I0502 13:38:26.993821 6 log.go:172] (0xc0021d2fd0) (0xc001ca2000) Stream added, broadcasting: 5 I0502 13:38:26.995065 6 log.go:172] (0xc0021d2fd0) Reply frame received for 5 I0502 13:38:27.068716 6 log.go:172] (0xc0021d2fd0) Data frame received for 3 I0502 13:38:27.068750 6 log.go:172] (0xc0023d3ea0) (3) Data frame handling I0502 13:38:27.068763 6 log.go:172] (0xc0023d3ea0) (3) Data frame sent I0502 13:38:27.068778 6 log.go:172] (0xc0021d2fd0) Data frame received for 3 I0502 13:38:27.068787 6 log.go:172] (0xc0023d3ea0) (3) Data frame handling I0502 13:38:27.068843 6 log.go:172] (0xc0021d2fd0) Data frame received for 5 I0502 13:38:27.068889 6 log.go:172] (0xc001ca2000) (5) Data frame handling I0502 13:38:27.070264 6 log.go:172] (0xc0021d2fd0) Data frame received for 1 I0502 13:38:27.070291 6 log.go:172] (0xc0023d3d60) (1) Data frame handling I0502 13:38:27.070319 6 log.go:172] (0xc0023d3d60) (1) Data frame sent I0502 13:38:27.070333 6 log.go:172] (0xc0021d2fd0) (0xc0023d3d60) Stream removed, broadcasting: 1 I0502 13:38:27.070356 6 log.go:172] (0xc0021d2fd0) Go away received I0502 13:38:27.070411 6 log.go:172] (0xc0021d2fd0) (0xc0023d3d60) Stream removed, broadcasting: 1 I0502 13:38:27.070423 6 log.go:172] (0xc0021d2fd0) 
(0xc0023d3ea0) Stream removed, broadcasting: 3 I0502 13:38:27.070435 6 log.go:172] (0xc0021d2fd0) (0xc001ca2000) Stream removed, broadcasting: 5 May 2 13:38:27.070: INFO: Exec stderr: "" May 2 13:38:27.070: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4446 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:38:27.070: INFO: >>> kubeConfig: /root/.kube/config I0502 13:38:27.097876 6 log.go:172] (0xc00317c000) (0xc0005f6aa0) Create stream I0502 13:38:27.097926 6 log.go:172] (0xc00317c000) (0xc0005f6aa0) Stream added, broadcasting: 1 I0502 13:38:27.101041 6 log.go:172] (0xc00317c000) Reply frame received for 1 I0502 13:38:27.101094 6 log.go:172] (0xc00317c000) (0xc001a73860) Create stream I0502 13:38:27.101277 6 log.go:172] (0xc00317c000) (0xc001a73860) Stream added, broadcasting: 3 I0502 13:38:27.102603 6 log.go:172] (0xc00317c000) Reply frame received for 3 I0502 13:38:27.102647 6 log.go:172] (0xc00317c000) (0xc001a73900) Create stream I0502 13:38:27.102662 6 log.go:172] (0xc00317c000) (0xc001a73900) Stream added, broadcasting: 5 I0502 13:38:27.103612 6 log.go:172] (0xc00317c000) Reply frame received for 5 I0502 13:38:27.172524 6 log.go:172] (0xc00317c000) Data frame received for 3 I0502 13:38:27.172550 6 log.go:172] (0xc001a73860) (3) Data frame handling I0502 13:38:27.172558 6 log.go:172] (0xc001a73860) (3) Data frame sent I0502 13:38:27.172563 6 log.go:172] (0xc00317c000) Data frame received for 3 I0502 13:38:27.172567 6 log.go:172] (0xc001a73860) (3) Data frame handling I0502 13:38:27.172606 6 log.go:172] (0xc00317c000) Data frame received for 5 I0502 13:38:27.172649 6 log.go:172] (0xc001a73900) (5) Data frame handling I0502 13:38:27.174139 6 log.go:172] (0xc00317c000) Data frame received for 1 I0502 13:38:27.174157 6 log.go:172] (0xc0005f6aa0) (1) Data frame handling I0502 13:38:27.174171 6 log.go:172] (0xc0005f6aa0) (1) Data frame sent I0502 13:38:27.174181 6 log.go:172] (0xc00317c000) (0xc0005f6aa0) Stream removed, broadcasting: 1 I0502 13:38:27.174211 6 log.go:172] (0xc00317c000) Go away received I0502 13:38:27.174328 6 log.go:172] (0xc00317c000) (0xc0005f6aa0) Stream removed, broadcasting: 1 I0502 13:38:27.174347 6 log.go:172] (0xc00317c000) (0xc001a73860) Stream removed, broadcasting: 3 I0502 13:38:27.174355 6 log.go:172] (0xc00317c000) (0xc001a73900) Stream removed, broadcasting: 5 May 2 13:38:27.174: INFO: Exec stderr: "" May 2 13:38:27.174: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4446 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:38:27.174: INFO: >>> kubeConfig: /root/.kube/config I0502 13:38:27.220847 6 log.go:172] (0xc002863290) (0xc001ca2460) Create stream I0502 13:38:27.220876 6 log.go:172] (0xc002863290) (0xc001ca2460) Stream added, broadcasting: 1 I0502 13:38:27.222745 6 log.go:172] (0xc002863290) Reply frame received for 1 I0502 13:38:27.222776 6 log.go:172] (0xc002863290) (0xc0005f6d20) Create stream I0502 13:38:27.222786 6 log.go:172] (0xc002863290) (0xc0005f6d20) Stream added, broadcasting: 3 I0502 13:38:27.223332 6 log.go:172] (0xc002863290) Reply frame received for 3 I0502 13:38:27.223360 6 log.go:172] (0xc002863290) (0xc0005f6e60) Create stream I0502 13:38:27.223373 6 log.go:172] (0xc002863290) (0xc0005f6e60) Stream added, broadcasting: 5 I0502 13:38:27.223985 6 log.go:172] (0xc002863290) Reply frame received for 5 
I0502 13:38:27.279747 6 log.go:172] (0xc002863290) Data frame received for 3 I0502 13:38:27.279771 6 log.go:172] (0xc0005f6d20) (3) Data frame handling I0502 13:38:27.279792 6 log.go:172] (0xc002863290) Data frame received for 5 I0502 13:38:27.279826 6 log.go:172] (0xc0005f6e60) (5) Data frame handling I0502 13:38:27.279851 6 log.go:172] (0xc0005f6d20) (3) Data frame sent I0502 13:38:27.279868 6 log.go:172] (0xc002863290) Data frame received for 3 I0502 13:38:27.279880 6 log.go:172] (0xc0005f6d20) (3) Data frame handling I0502 13:38:27.281427 6 log.go:172] (0xc002863290) Data frame received for 1 I0502 13:38:27.281495 6 log.go:172] (0xc001ca2460) (1) Data frame handling I0502 13:38:27.281559 6 log.go:172] (0xc001ca2460) (1) Data frame sent I0502 13:38:27.281617 6 log.go:172] (0xc002863290) (0xc001ca2460) Stream removed, broadcasting: 1 I0502 13:38:27.281665 6 log.go:172] (0xc002863290) Go away received I0502 13:38:27.281804 6 log.go:172] (0xc002863290) (0xc001ca2460) Stream removed, broadcasting: 1 I0502 13:38:27.281843 6 log.go:172] (0xc002863290) (0xc0005f6d20) Stream removed, broadcasting: 3 I0502 13:38:27.281868 6 log.go:172] (0xc002863290) (0xc0005f6e60) Stream removed, broadcasting: 5 May 2 13:38:27.281: INFO: Exec stderr: "" May 2 13:38:27.281: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4446 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 13:38:27.281: INFO: >>> kubeConfig: /root/.kube/config I0502 13:38:27.314383 6 log.go:172] (0xc00268b760) (0xc0009ba320) Create stream I0502 13:38:27.314413 6 log.go:172] (0xc00268b760) (0xc0009ba320) Stream added, broadcasting: 1 I0502 13:38:27.316790 6 log.go:172] (0xc00268b760) Reply frame received for 1 I0502 13:38:27.316838 6 log.go:172] (0xc00268b760) (0xc0023d3f40) Create stream I0502 13:38:27.316852 6 log.go:172] (0xc00268b760) (0xc0023d3f40) Stream added, broadcasting: 3 I0502 13:38:27.318309 6 log.go:172] (0xc00268b760) Reply frame received for 3 I0502 13:38:27.318392 6 log.go:172] (0xc00268b760) (0xc0006680a0) Create stream I0502 13:38:27.318424 6 log.go:172] (0xc00268b760) (0xc0006680a0) Stream added, broadcasting: 5 I0502 13:38:27.319628 6 log.go:172] (0xc00268b760) Reply frame received for 5 I0502 13:38:27.466565 6 log.go:172] (0xc00268b760) Data frame received for 5 I0502 13:38:27.466597 6 log.go:172] (0xc0006680a0) (5) Data frame handling I0502 13:38:27.466617 6 log.go:172] (0xc00268b760) Data frame received for 3 I0502 13:38:27.466627 6 log.go:172] (0xc0023d3f40) (3) Data frame handling I0502 13:38:27.466638 6 log.go:172] (0xc0023d3f40) (3) Data frame sent I0502 13:38:27.466647 6 log.go:172] (0xc00268b760) Data frame received for 3 I0502 13:38:27.466669 6 log.go:172] (0xc0023d3f40) (3) Data frame handling I0502 13:38:27.467785 6 log.go:172] (0xc00268b760) Data frame received for 1 I0502 13:38:27.467806 6 log.go:172] (0xc0009ba320) (1) Data frame handling I0502 13:38:27.467822 6 log.go:172] (0xc0009ba320) (1) Data frame sent I0502 13:38:27.467845 6 log.go:172] (0xc00268b760) (0xc0009ba320) Stream removed, broadcasting: 1 I0502 13:38:27.467866 6 log.go:172] (0xc00268b760) Go away received I0502 13:38:27.468060 6 log.go:172] (0xc00268b760) (0xc0009ba320) Stream removed, broadcasting: 1 I0502 13:38:27.468086 6 log.go:172] (0xc00268b760) (0xc0023d3f40) Stream removed, broadcasting: 3 I0502 13:38:27.468100 6 log.go:172] (0xc00268b760) (0xc0006680a0) Stream removed, broadcasting: 5 May 2 13:38:27.468: INFO: Exec 
stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:38:27.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4446" for this suite. May 2 13:39:17.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:39:17.561: INFO: namespace e2e-kubelet-etc-hosts-4446 deletion completed in 50.090151284s • [SLOW TEST:61.416 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:39:17.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0502 13:39:29.856954 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 2 13:39:29.857: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:39:29.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4949" for this suite. 
May 2 13:39:36.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:39:36.143: INFO: namespace gc-4949 deletion completed in 6.282714161s • [SLOW TEST:18.581 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:39:36.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-c870efb7-4cf9-4347-9454-563f090cb962 in namespace container-probe-4296 May 2 13:39:40.432: INFO: Started pod test-webserver-c870efb7-4cf9-4347-9454-563f090cb962 in namespace container-probe-4296 STEP: checking the pod's current state and verifying that restartCount is present May 2 13:39:40.435: INFO: Initial restart count of pod test-webserver-c870efb7-4cf9-4347-9454-563f090cb962 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:43:40.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4296" for this suite. 
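The probing test above passes by the absence of events: an HTTP liveness probe against a path the web server actually serves keeps succeeding, so restartCount must stay at 0 for the full four-minute observation window. A sketch of such a probe (path, port, and timings are assumed; note that in this v1.15-era API the handler field is still named Handler, later renamed in newer releases):

package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    // A liveness probe that keeps returning success; the kubelet
    // therefore never restarts the container.
    probe := &corev1.Probe{
        Handler: corev1.Handler{
            HTTPGet: &corev1.HTTPGetAction{
                Path: "/",
                Port: intstr.FromInt(80),
            },
        },
        InitialDelaySeconds: 15,
        TimeoutSeconds:      1,
        FailureThreshold:    3,
    }
    _ = probe
}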
May 2 13:43:46.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:43:46.995: INFO: namespace container-probe-4296 deletion completed in 6.118034661s • [SLOW TEST:250.852 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:43:46.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:44:13.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8034" for this suite. May 2 13:44:19.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:44:19.506: INFO: namespace namespaces-8034 deletion completed in 6.112164534s STEP: Destroying namespace "nsdeletetest-6995" for this suite. May 2 13:44:19.508: INFO: Namespace nsdeletetest-6995 was already deleted STEP: Destroying namespace "nsdeletetest-5097" for this suite. 
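Namespace deletion is asynchronous: the namespace is marked Terminating, its pods are removed, and only then does the namespace object itself disappear, which is why the test waits at each step. A sketch of the delete-then-poll pattern using v1.15-era client-go (the context-free Delete/List signatures match the client vendored by this suite; the namespace name and polling interval are illustrative):

package main

import (
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Delete the namespace, then poll until the pods it contained are
    // gone; this is the observable behaviour the test asserts.
    ns := "nsdeletetest"
    if err := cs.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{}); err != nil {
        panic(err)
    }
    for {
        pods, err := cs.CoreV1().Pods(ns).List(metav1.ListOptions{})
        if err != nil || len(pods.Items) == 0 {
            break // done, or bailing out on any error for brevity
        }
        fmt.Printf("%d pod(s) remaining\n", len(pods.Items))
        time.Sleep(2 * time.Second)
    }
}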
May 2 13:44:25.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:44:25.621: INFO: namespace nsdeletetest-5097 deletion completed in 6.113355897s • [SLOW TEST:38.626 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:44:25.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-89c53b82-8cbb-47bd-89ac-bd80371213eb STEP: Creating a pod to test consume secrets May 2 13:44:25.759: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fcfbadd8-7e12-479e-a0b8-63ee16e358be" in namespace "projected-5792" to be "success or failure" May 2 13:44:25.763: INFO: Pod "pod-projected-secrets-fcfbadd8-7e12-479e-a0b8-63ee16e358be": Phase="Pending", Reason="", readiness=false. Elapsed: 3.939604ms May 2 13:44:27.788: INFO: Pod "pod-projected-secrets-fcfbadd8-7e12-479e-a0b8-63ee16e358be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029001241s May 2 13:44:29.792: INFO: Pod "pod-projected-secrets-fcfbadd8-7e12-479e-a0b8-63ee16e358be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032099428s STEP: Saw pod success May 2 13:44:29.792: INFO: Pod "pod-projected-secrets-fcfbadd8-7e12-479e-a0b8-63ee16e358be" satisfied condition "success or failure" May 2 13:44:29.794: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-fcfbadd8-7e12-479e-a0b8-63ee16e358be container projected-secret-volume-test: STEP: delete the pod May 2 13:44:29.868: INFO: Waiting for pod pod-projected-secrets-fcfbadd8-7e12-479e-a0b8-63ee16e358be to disappear May 2 13:44:29.883: INFO: Pod pod-projected-secrets-fcfbadd8-7e12-479e-a0b8-63ee16e358be no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:44:29.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5792" for this suite. 
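The volume under test above projects a secret with an explicit defaultMode, and the test container verifies the resulting file permissions before exiting. A sketch of the shape of that pod — secret name, image, and mode value are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo        # placeholder
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                    # the permission bits the test checks (illustrative value)
      sources:
      - secret:
          name: projected-secret-test     # placeholder secret name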
May 2 13:44:35.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:44:36.038: INFO: namespace projected-5792 deletion completed in 6.151576222s • [SLOW TEST:10.417 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:44:36.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 2 13:44:36.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9560' May 2 13:44:36.346: INFO: stderr: "" May 2 13:44:36.346: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 2 13:44:36.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9560' May 2 13:44:36.503: INFO: stderr: "" May 2 13:44:36.503: INFO: stdout: "update-demo-nautilus-fj75f update-demo-nautilus-s8v26 " May 2 13:44:36.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fj75f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9560' May 2 13:44:36.600: INFO: stderr: "" May 2 13:44:36.600: INFO: stdout: "" May 2 13:44:36.600: INFO: update-demo-nautilus-fj75f is created but not running May 2 13:44:41.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9560' May 2 13:44:41.709: INFO: stderr: "" May 2 13:44:41.709: INFO: stdout: "update-demo-nautilus-fj75f update-demo-nautilus-s8v26 " May 2 13:44:41.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fj75f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9560' May 2 13:44:41.800: INFO: stderr: "" May 2 13:44:41.800: INFO: stdout: "true" May 2 13:44:41.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fj75f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9560' May 2 13:44:41.897: INFO: stderr: "" May 2 13:44:41.897: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 13:44:41.897: INFO: validating pod update-demo-nautilus-fj75f May 2 13:44:41.901: INFO: got data: { "image": "nautilus.jpg" } May 2 13:44:41.901: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 13:44:41.901: INFO: update-demo-nautilus-fj75f is verified up and running May 2 13:44:41.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s8v26 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9560' May 2 13:44:42.016: INFO: stderr: "" May 2 13:44:42.016: INFO: stdout: "true" May 2 13:44:42.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s8v26 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9560' May 2 13:44:42.108: INFO: stderr: "" May 2 13:44:42.108: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 13:44:42.108: INFO: validating pod update-demo-nautilus-s8v26 May 2 13:44:42.112: INFO: got data: { "image": "nautilus.jpg" } May 2 13:44:42.112: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 13:44:42.112: INFO: update-demo-nautilus-s8v26 is verified up and running STEP: scaling down the replication controller May 2 13:44:42.114: INFO: scanned /root for discovery docs: May 2 13:44:42.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9560' May 2 13:44:43.248: INFO: stderr: "" May 2 13:44:43.249: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 2 13:44:43.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9560' May 2 13:44:43.355: INFO: stderr: "" May 2 13:44:43.355: INFO: stdout: "update-demo-nautilus-fj75f update-demo-nautilus-s8v26 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 2 13:44:48.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9560' May 2 13:44:48.468: INFO: stderr: "" May 2 13:44:48.469: INFO: stdout: "update-demo-nautilus-fj75f update-demo-nautilus-s8v26 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 2 13:44:53.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9560' May 2 13:44:53.572: INFO: stderr: "" May 2 13:44:53.572: INFO: stdout: "update-demo-nautilus-s8v26 " May 2 13:44:53.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s8v26 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9560' May 2 13:44:53.667: INFO: stderr: "" May 2 13:44:53.667: INFO: stdout: "true" May 2 13:44:53.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s8v26 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9560' May 2 13:44:53.756: INFO: stderr: "" May 2 13:44:53.756: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 13:44:53.756: INFO: validating pod update-demo-nautilus-s8v26 May 2 13:44:53.759: INFO: got data: { "image": "nautilus.jpg" } May 2 13:44:53.759: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 13:44:53.759: INFO: update-demo-nautilus-s8v26 is verified up and running STEP: scaling up the replication controller May 2 13:44:53.762: INFO: scanned /root for discovery docs: May 2 13:44:53.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9560' May 2 13:44:54.949: INFO: stderr: "" May 2 13:44:54.949: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 2 13:44:54.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9560' May 2 13:44:55.050: INFO: stderr: "" May 2 13:44:55.050: INFO: stdout: "update-demo-nautilus-nkl2q update-demo-nautilus-s8v26 " May 2 13:44:55.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkl2q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9560' May 2 13:44:55.166: INFO: stderr: "" May 2 13:44:55.166: INFO: stdout: "" May 2 13:44:55.166: INFO: update-demo-nautilus-nkl2q is created but not running May 2 13:45:00.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9560' May 2 13:45:00.268: INFO: stderr: "" May 2 13:45:00.268: INFO: stdout: "update-demo-nautilus-nkl2q update-demo-nautilus-s8v26 " May 2 13:45:00.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkl2q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9560' May 2 13:45:00.363: INFO: stderr: "" May 2 13:45:00.363: INFO: stdout: "true" May 2 13:45:00.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkl2q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9560' May 2 13:45:00.451: INFO: stderr: "" May 2 13:45:00.451: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 13:45:00.451: INFO: validating pod update-demo-nautilus-nkl2q May 2 13:45:00.455: INFO: got data: { "image": "nautilus.jpg" } May 2 13:45:00.455: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 13:45:00.455: INFO: update-demo-nautilus-nkl2q is verified up and running May 2 13:45:00.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s8v26 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9560' May 2 13:45:00.560: INFO: stderr: "" May 2 13:45:00.560: INFO: stdout: "true" May 2 13:45:00.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s8v26 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9560' May 2 13:45:00.647: INFO: stderr: "" May 2 13:45:00.648: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 13:45:00.648: INFO: validating pod update-demo-nautilus-s8v26 May 2 13:45:00.651: INFO: got data: { "image": "nautilus.jpg" } May 2 13:45:00.651: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 13:45:00.651: INFO: update-demo-nautilus-s8v26 is verified up and running STEP: using delete to clean up resources May 2 13:45:00.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9560' May 2 13:45:00.756: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 2 13:45:00.756: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 2 13:45:00.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9560' May 2 13:45:00.851: INFO: stderr: "No resources found.\n" May 2 13:45:00.851: INFO: stdout: "" May 2 13:45:00.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9560 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 2 13:45:00.953: INFO: stderr: "" May 2 13:45:00.953: INFO: stdout: "update-demo-nautilus-nkl2q\nupdate-demo-nautilus-s8v26\n" May 2 13:45:01.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9560' May 2 13:45:01.565: INFO: stderr: "No resources found.\n" May 2 13:45:01.565: INFO: stdout: "" May 2 13:45:01.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9560 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 2 13:45:01.693: INFO: stderr: "" May 2 13:45:01.693: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:45:01.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9560" for this suite. May 2 13:45:23.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:45:23.818: INFO: namespace kubectl-9560 deletion completed in 22.12184915s • [SLOW TEST:47.780 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:45:23.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 2 13:45:28.933: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:45:29.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3133" for this suite. May 2 13:45:52.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:45:52.120: INFO: namespace replicaset-3133 deletion completed in 22.137823531s • [SLOW TEST:28.301 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:45:52.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 2 13:46:00.264: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 2 13:46:00.275: INFO: Pod pod-with-prestop-http-hook still exists May 2 13:46:02.275: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 2 13:46:02.279: INFO: Pod pod-with-prestop-http-hook still exists May 2 13:46:04.275: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 2 13:46:04.279: INFO: Pod pod-with-prestop-http-hook still exists May 2 13:46:06.275: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 2 13:46:06.279: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:46:06.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1310" for this suite. 
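The pod deleted in the test above declares a preStop HTTP hook, so the kubelet issues an HTTP GET against the handler pod before the container is killed; the "check prestop hook" step then confirms the handler saw the request. A minimal sketch, with illustrative handler path, port, and image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name as it appears in the log
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1      # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /echo                # illustrative; the test points this at its helper pod
          port: 8080                 # illustrative port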
May 2 13:46:28.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 13:46:28.392: INFO: namespace container-lifecycle-hook-1310 deletion completed in 22.103070852s
• [SLOW TEST:36.272 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 2 13:46:28.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
May 2 13:46:28.532: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

May 2 13:46:28.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5167'
May 2 13:46:34.718: INFO: stderr: ""
May 2 13:46:34.718: INFO: stdout: "service/redis-slave created\n"
May 2 13:46:34.718: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

May 2 13:46:34.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5167'
May 2 13:46:35.077: INFO: stderr: ""
May 2 13:46:35.077: INFO: stdout: "service/redis-master created\n"
May 2 13:46:35.078: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 2 13:46:35.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5167'
May 2 13:46:35.399: INFO: stderr: ""
May 2 13:46:35.399: INFO: stdout: "service/frontend created\n"
May 2 13:46:35.399: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

May 2 13:46:35.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5167'
May 2 13:46:35.690: INFO: stderr: ""
May 2 13:46:35.690: INFO: stdout: "deployment.apps/frontend created\n"
May 2 13:46:35.690: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 2 13:46:35.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5167'
May 2 13:46:36.043: INFO: stderr: ""
May 2 13:46:36.043: INFO: stdout: "deployment.apps/redis-master created\n"
May 2 13:46:36.043: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

May 2 13:46:36.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5167'
May 2 13:46:36.363: INFO: stderr: ""
May 2 13:46:36.363: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
May 2 13:46:36.363: INFO: Waiting for all frontend pods to be Running.
May 2 13:46:46.414: INFO: Waiting for frontend to serve content.
May 2 13:46:46.432: INFO: Trying to add a new entry to the guestbook.
May 2 13:46:46.451: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 2 13:46:46.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5167'
May 2 13:46:46.603: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 2 13:46:46.603: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 2 13:46:46.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5167' May 2 13:46:46.735: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 2 13:46:46.735: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 2 13:46:46.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5167' May 2 13:46:46.846: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 2 13:46:46.846: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 2 13:46:46.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5167' May 2 13:46:46.943: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 2 13:46:46.943: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 2 13:46:46.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5167' May 2 13:46:47.044: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 2 13:46:47.044: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 2 13:46:47.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5167' May 2 13:46:47.192: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 2 13:46:47.192: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:46:47.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5167" for this suite. 
May 2 13:47:27.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:47:27.340: INFO: namespace kubectl-5167 deletion completed in 40.129677835s • [SLOW TEST:58.947 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:47:27.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container May 2 13:47:31.983: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4610 pod-service-account-da7fa285-9aee-4652-a271-15454e6897c2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 2 13:47:32.243: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4610 pod-service-account-da7fa285-9aee-4652-a271-15454e6897c2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 2 13:47:32.446: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4610 pod-service-account-da7fa285-9aee-4652-a271-15454e6897c2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:47:32.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4610" for this suite. 
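The three exec steps above read the files that the service account admission controller mounts into every pod by default. A sketch of an equivalent pod — name, image, and command are assumptions, while the mount path is the fixed, well-known location the test reads:

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-demo   # placeholder
spec:
  serviceAccountName: default
  containers:
  - name: test
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sleep", "3600"]
# With automountServiceAccountToken left at its default, the container sees:
#   /var/run/secrets/kubernetes.io/serviceaccount/token
#   /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
#   /var/run/secrets/kubernetes.io/serviceaccount/namespace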
May 2 13:47:38.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:47:38.769: INFO: namespace svcaccounts-4610 deletion completed in 6.114591074s • [SLOW TEST:11.428 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:47:38.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-40d4df78-ddd9-43fd-bca5-227fe5f993b7 in namespace container-probe-6518 May 2 13:47:42.899: INFO: Started pod busybox-40d4df78-ddd9-43fd-bca5-227fe5f993b7 in namespace container-probe-6518 STEP: checking the pod's current state and verifying that restartCount is present May 2 13:47:42.901: INFO: Initial restart count of pod busybox-40d4df78-ddd9-43fd-bca5-227fe5f993b7 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:51:43.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6518" for this suite. 
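Here the probe is an exec action rather than an HTTP GET: the kubelet runs `cat /tmp/health` inside the container and treats a zero exit code as healthy. A sketch of such a pod — image tag and timings are assumptions; the container command must create the file for the probe to keep passing:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo   # placeholder
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29   # assumed image
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # the probe named in the spec title
      initialDelaySeconds: 5              # illustrative values
      periodSeconds: 5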
May 2 13:51:49.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:51:49.601: INFO: namespace container-probe-6518 deletion completed in 6.087402011s • [SLOW TEST:250.832 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:51:49.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 13:52:09.692: INFO: Container started at 2020-05-02 13:51:52 +0000 UTC, pod became ready at 2020-05-02 13:52:08 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:52:09.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4022" for this suite. 
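The log above records a roughly 16-second gap between container start (13:51:52) and readiness (13:52:08), which is the behaviour an initialDelaySeconds on a readiness probe produces: the pod is deliberately not marked Ready before the delay, and a readiness failure never restarts the container. An illustrative sketch, with assumed image and values:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo    # placeholder
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29   # assumed image
    args: ["/bin/sh", "-c", "echo ok > /tmp/ready; sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 15   # readiness is withheld for at least this long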
May 2 13:52:31.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:52:31.823: INFO: namespace container-probe-4022 deletion completed in 22.127679161s • [SLOW TEST:42.222 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:52:31.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:52:35.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2572" for this suite. 
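The kubelet test above schedules a busybox container whose securityContext marks the root filesystem read-only, then asserts that a write to / fails. A sketch of that shape of pod — image and command are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo   # placeholder
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]   # the write is expected to fail
    securityContext:
      readOnlyRootFilesystem: true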
May 2 13:53:25.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:53:26.072: INFO: namespace kubelet-test-2572 deletion completed in 50.101308037s • [SLOW TEST:54.248 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:53:26.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 2 13:53:26.135: INFO: PodSpec: initContainers in spec.initContainers May 2 13:54:17.287: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ee64c622-2d0f-4654-b17b-f7aa75c7623b", GenerateName:"", Namespace:"init-container-7623", SelfLink:"/api/v1/namespaces/init-container-7623/pods/pod-init-ee64c622-2d0f-4654-b17b-f7aa75c7623b", UID:"6966eb8e-6b2c-4e9b-be5c-07bb25ce42cb", ResourceVersion:"8630852", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724024406, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"135584032"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mhwbn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0026a0b80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mhwbn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mhwbn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mhwbn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002b36ba8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002e6e120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b36c30)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b36c50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002b36c58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002b36c5c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724024406, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724024406, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724024406, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724024406, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.110", StartTime:(*v1.Time)(0xc0029dd020), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0029dd060), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016e2f50)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://4e693abfb56b6d15fde990ac3be7d2ff95c13affb59541f3b69c5e27c4daf599"}, 
v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0029dd080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0029dd040), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:54:17.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7623" for this suite. May 2 13:54:39.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:54:39.521: INFO: namespace init-container-7623 deletion completed in 22.212261791s • [SLOW TEST:73.448 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:54:39.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-cda74bce-1813-42c6-8ee2-8d347750396c STEP: Creating a pod to test consume configMaps May 2 13:54:39.648: INFO: Waiting up to 5m0s for pod "pod-configmaps-b90687c2-8095-4000-8296-066faef79ef2" in namespace "configmap-7135" to be "success or failure" May 2 13:54:39.670: INFO: Pod "pod-configmaps-b90687c2-8095-4000-8296-066faef79ef2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.254444ms May 2 13:54:41.674: INFO: Pod "pod-configmaps-b90687c2-8095-4000-8296-066faef79ef2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025667547s May 2 13:54:43.678: INFO: Pod "pod-configmaps-b90687c2-8095-4000-8296-066faef79ef2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029864297s STEP: Saw pod success May 2 13:54:43.678: INFO: Pod "pod-configmaps-b90687c2-8095-4000-8296-066faef79ef2" satisfied condition "success or failure" May 2 13:54:43.682: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b90687c2-8095-4000-8296-066faef79ef2 container configmap-volume-test: STEP: delete the pod May 2 13:54:43.715: INFO: Waiting for pod pod-configmaps-b90687c2-8095-4000-8296-066faef79ef2 to disappear May 2 13:54:43.736: INFO: Pod pod-configmaps-b90687c2-8095-4000-8296-066faef79ef2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:54:43.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7135" for this suite. May 2 13:54:49.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:54:49.835: INFO: namespace configmap-7135 deletion completed in 6.091340448s • [SLOW TEST:10.314 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:54:49.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 13:54:49.956: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2764be0f-709c-488c-9400-1328f190d615" in namespace "downward-api-8391" to be "success or failure" May 2 13:54:49.963: INFO: Pod "downwardapi-volume-2764be0f-709c-488c-9400-1328f190d615": Phase="Pending", Reason="", readiness=false. Elapsed: 6.641606ms May 2 13:54:51.967: INFO: Pod "downwardapi-volume-2764be0f-709c-488c-9400-1328f190d615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010383059s May 2 13:54:55.105: INFO: Pod "downwardapi-volume-2764be0f-709c-488c-9400-1328f190d615": Phase="Pending", Reason="", readiness=false. Elapsed: 5.149045322s May 2 13:54:57.115: INFO: Pod "downwardapi-volume-2764be0f-709c-488c-9400-1328f190d615": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.15888597s STEP: Saw pod success May 2 13:54:57.115: INFO: Pod "downwardapi-volume-2764be0f-709c-488c-9400-1328f190d615" satisfied condition "success or failure" May 2 13:54:57.143: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2764be0f-709c-488c-9400-1328f190d615 container client-container: STEP: delete the pod May 2 13:54:57.185: INFO: Waiting for pod downwardapi-volume-2764be0f-709c-488c-9400-1328f190d615 to disappear May 2 13:54:57.196: INFO: Pod downwardapi-volume-2764be0f-709c-488c-9400-1328f190d615 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:54:57.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8391" for this suite. May 2 13:55:03.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:55:03.472: INFO: namespace downward-api-8391 deletion completed in 6.270334265s • [SLOW TEST:13.637 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:55:03.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-a008afa1-319e-458d-8af1-87e4b10a072f STEP: Creating a pod to test consume configMaps May 2 13:55:03.597: INFO: Waiting up to 5m0s for pod "pod-configmaps-000bc8e0-ae1b-48f0-aa3e-63796c18d16e" in namespace "configmap-6057" to be "success or failure" May 2 13:55:03.604: INFO: Pod "pod-configmaps-000bc8e0-ae1b-48f0-aa3e-63796c18d16e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.266192ms May 2 13:55:05.608: INFO: Pod "pod-configmaps-000bc8e0-ae1b-48f0-aa3e-63796c18d16e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01162021s May 2 13:55:07.612: INFO: Pod "pod-configmaps-000bc8e0-ae1b-48f0-aa3e-63796c18d16e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01484648s STEP: Saw pod success May 2 13:55:07.612: INFO: Pod "pod-configmaps-000bc8e0-ae1b-48f0-aa3e-63796c18d16e" satisfied condition "success or failure" May 2 13:55:07.614: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-000bc8e0-ae1b-48f0-aa3e-63796c18d16e container configmap-volume-test: STEP: delete the pod May 2 13:55:07.654: INFO: Waiting for pod pod-configmaps-000bc8e0-ae1b-48f0-aa3e-63796c18d16e to disappear May 2 13:55:07.665: INFO: Pod pod-configmaps-000bc8e0-ae1b-48f0-aa3e-63796c18d16e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:55:07.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6057" for this suite. May 2 13:55:13.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:55:13.773: INFO: namespace configmap-6057 deletion completed in 6.105583976s • [SLOW TEST:10.301 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:55:13.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-d8886acd-71d5-4a18-bbdf-458a34da6c89 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:55:19.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3012" for this suite. 
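
Annotation: the ConfigMap specs above mount ConfigMaps as volumes — with an explicit defaultMode, with defaults, and with binary payloads. A minimal Go sketch of the objects such a spec creates, using the k8s.io/api types; the names, mode, and payloads are illustrative, not this run's fixtures:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// A ConfigMap can carry both text (Data) and binary (BinaryData) payloads,
    	// which is what the "binary data should be reflected in volume" spec checks.
    	cm := corev1.ConfigMap{
    		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-example"},
    		Data:       map[string]string{"data-1": "value-1"},
    		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe}},
    	}

    	// Volume source mounting the ConfigMap with an explicit file mode,
    	// as in the defaultMode variant earlier in this run.
    	mode := int32(0400) // illustrative mode
    	vol := corev1.Volume{
    		Name: "configmap-volume",
    		VolumeSource: corev1.VolumeSource{
    			ConfigMap: &corev1.ConfigMapVolumeSource{
    				LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
    				DefaultMode:          &mode,
    			},
    		},
    	}
    	fmt.Println(cm.Name, vol.Name)
    }

Each key becomes a file under the mount path; the test container reads the files back and the pod's "Succeeded" phase is what the "success or failure" condition above waits on.
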
May 2 13:55:41.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:55:42.010: INFO: namespace configmap-3012 deletion completed in 22.106681344s • [SLOW TEST:28.236 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:55:42.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 2 13:55:46.639: INFO: Successfully updated pod "labelsupdate3590ad77-8914-432c-a366-9092338a887b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:55:50.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8362" for this suite. 
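
Annotation: the spec above ("should update labels on modification") patches a running pod's labels and waits for a downward API volume to reflect the change. A sketch of the volume that makes this observable, assuming illustrative names — the kubelet rewrites the projected file when the labels change:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Downward API volume exposing the pod's labels as a file; unlike env
    	// vars, volume projections are refreshed after pod updates.
    	vol := corev1.Volume{
    		Name: "podinfo",
    		VolumeSource: corev1.VolumeSource{
    			DownwardAPI: &corev1.DownwardAPIVolumeSource{
    				Items: []corev1.DownwardAPIVolumeFile{{
    					Path:     "labels",
    					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
    				}},
    			},
    		},
    	}
    	fmt.Println(vol.Name)
    }
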
May 2 13:56:12.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:56:12.765: INFO: namespace downward-api-8362 deletion completed in 22.101602941s • [SLOW TEST:30.755 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:56:12.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 13:56:12.939: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39c7e60d-4d0b-4157-8e50-6a302abbbdc6" in namespace "projected-6888" to be "success or failure" May 2 13:56:12.948: INFO: Pod "downwardapi-volume-39c7e60d-4d0b-4157-8e50-6a302abbbdc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.841634ms May 2 13:56:14.952: INFO: Pod "downwardapi-volume-39c7e60d-4d0b-4157-8e50-6a302abbbdc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013212808s May 2 13:56:16.960: INFO: Pod "downwardapi-volume-39c7e60d-4d0b-4157-8e50-6a302abbbdc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021232065s STEP: Saw pod success May 2 13:56:16.960: INFO: Pod "downwardapi-volume-39c7e60d-4d0b-4157-8e50-6a302abbbdc6" satisfied condition "success or failure" May 2 13:56:16.962: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-39c7e60d-4d0b-4157-8e50-6a302abbbdc6 container client-container: STEP: delete the pod May 2 13:56:17.003: INFO: Waiting for pod downwardapi-volume-39c7e60d-4d0b-4157-8e50-6a302abbbdc6 to disappear May 2 13:56:17.025: INFO: Pod downwardapi-volume-39c7e60d-4d0b-4157-8e50-6a302abbbdc6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:56:17.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6888" for this suite. 
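
Annotation: the projected downwardAPI spec above differs from the plain downward API volume only in wrapping the same items in a projected volume source, which can combine secrets, configMaps, and downward API data in one mount. A sketch with illustrative names:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Projected volume carrying a single downwardAPI source that exposes
    	// the pod's own name, matching the "should provide podname only" spec.
    	vol := corev1.Volume{
    		Name: "podinfo",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					DownwardAPI: &corev1.DownwardAPIProjection{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path:     "podname",
    							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
    						}},
    					},
    				}},
    			},
    		},
    	}
    	fmt.Println(vol.Name)
    }
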
May 2 13:56:23.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:56:23.118: INFO: namespace projected-6888 deletion completed in 6.089352251s • [SLOW TEST:10.352 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:56:23.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults May 2 13:56:23.208: INFO: Waiting up to 5m0s for pod "client-containers-81d7709d-e9ce-4154-a4f9-dd4f60c7ded6" in namespace "containers-3441" to be "success or failure" May 2 13:56:23.210: INFO: Pod "client-containers-81d7709d-e9ce-4154-a4f9-dd4f60c7ded6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.646041ms May 2 13:56:25.215: INFO: Pod "client-containers-81d7709d-e9ce-4154-a4f9-dd4f60c7ded6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007071222s May 2 13:56:27.219: INFO: Pod "client-containers-81d7709d-e9ce-4154-a4f9-dd4f60c7ded6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011774651s STEP: Saw pod success May 2 13:56:27.219: INFO: Pod "client-containers-81d7709d-e9ce-4154-a4f9-dd4f60c7ded6" satisfied condition "success or failure" May 2 13:56:27.222: INFO: Trying to get logs from node iruya-worker pod client-containers-81d7709d-e9ce-4154-a4f9-dd4f60c7ded6 container test-container: STEP: delete the pod May 2 13:56:27.248: INFO: Waiting for pod client-containers-81d7709d-e9ce-4154-a4f9-dd4f60c7ded6 to disappear May 2 13:56:27.259: INFO: Pod client-containers-81d7709d-e9ce-4154-a4f9-dd4f60c7ded6 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:56:27.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3441" for this suite. 
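
Annotation: the Docker Containers spec above verifies the default command behavior. With both Command and Args left unset, the kubelet runs the image's own ENTRYPOINT and CMD; setting either field overrides the corresponding part. A minimal sketch (image is illustrative):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	c := corev1.Container{
    		Name:  "test-container",
    		Image: "docker.io/library/busybox:1.29", // illustrative image
    		// Command: []string{"/bin/sh"},   // would replace the image ENTRYPOINT
    		// Args:    []string{"-c", "env"}, // would replace the image CMD
    	}
    	fmt.Println(c.Image)
    }
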
May 2 13:56:33.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:56:33.379: INFO: namespace containers-3441 deletion completed in 6.116812494s • [SLOW TEST:10.261 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:56:33.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 2 13:56:33.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4837' May 2 13:56:34.465: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 2 13:56:34.465: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 2 13:56:34.474: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-srz9w] May 2 13:56:34.475: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-srz9w" in namespace "kubectl-4837" to be "running and ready" May 2 13:56:34.502: INFO: Pod "e2e-test-nginx-rc-srz9w": Phase="Pending", Reason="", readiness=false. Elapsed: 27.09396ms May 2 13:56:36.506: INFO: Pod "e2e-test-nginx-rc-srz9w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031242513s May 2 13:56:38.510: INFO: Pod "e2e-test-nginx-rc-srz9w": Phase="Running", Reason="", readiness=true. Elapsed: 4.035008272s May 2 13:56:38.510: INFO: Pod "e2e-test-nginx-rc-srz9w" satisfied condition "running and ready" May 2 13:56:38.510: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-srz9w] May 2 13:56:38.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-4837' May 2 13:56:40.339: INFO: stderr: "" May 2 13:56:40.339: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 May 2 13:56:40.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4837' May 2 13:56:40.449: INFO: stderr: "" May 2 13:56:40.449: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:56:40.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4837" for this suite. May 2 13:57:02.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:57:02.539: INFO: namespace kubectl-4837 deletion completed in 22.086143717s • [SLOW TEST:29.160 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:57:02.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-c5848711-5ec0-406e-95ed-6e862ec14c1d STEP: Creating a pod to test consume secrets May 2 13:57:02.633: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-aee21efc-50e2-4dd0-866f-6be39c0812b7" in namespace "projected-1613" to be "success or failure" May 2 13:57:02.674: INFO: Pod "pod-projected-secrets-aee21efc-50e2-4dd0-866f-6be39c0812b7": Phase="Pending", Reason="", readiness=false. Elapsed: 40.32942ms May 2 13:57:04.679: INFO: Pod "pod-projected-secrets-aee21efc-50e2-4dd0-866f-6be39c0812b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045305898s May 2 13:57:06.683: INFO: Pod "pod-projected-secrets-aee21efc-50e2-4dd0-866f-6be39c0812b7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049868894s STEP: Saw pod success May 2 13:57:06.683: INFO: Pod "pod-projected-secrets-aee21efc-50e2-4dd0-866f-6be39c0812b7" satisfied condition "success or failure" May 2 13:57:06.687: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-aee21efc-50e2-4dd0-866f-6be39c0812b7 container secret-volume-test: STEP: delete the pod May 2 13:57:06.741: INFO: Waiting for pod pod-projected-secrets-aee21efc-50e2-4dd0-866f-6be39c0812b7 to disappear May 2 13:57:06.751: INFO: Pod pod-projected-secrets-aee21efc-50e2-4dd0-866f-6be39c0812b7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:57:06.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1613" for this suite. May 2 13:57:12.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:57:12.845: INFO: namespace projected-1613 deletion completed in 6.090680519s • [SLOW TEST:10.305 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:57:12.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 2 13:57:16.999: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-954fffb5-e9a5-4c43-94ba-0f46a17af18c,GenerateName:,Namespace:events-3939,SelfLink:/api/v1/namespaces/events-3939/pods/send-events-954fffb5-e9a5-4c43-94ba-0f46a17af18c,UID:d8677924-cbf0-4dea-8c91-3d30ebb7e7de,ResourceVersion:8631457,Generation:0,CreationTimestamp:2020-05-02 13:57:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 923699404,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lsnm2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lsnm2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-lsnm2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a49b60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a49b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 13:57:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 13:57:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 13:57:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 13:57:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.116,StartTime:2020-05-02 13:57:13 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-02 13:57:15 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://7a955a79e5bbd533b048e8e174c6abfe4233451bb8ed2612cc28d3feaa01c957}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 2 13:57:19.004: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 2 13:57:21.009: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:57:21.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3939" for this suite. 
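
Annotation: the Events spec above polls the events API for one scheduler event and one kubelet event about the pod. Such a query is expressed as a field selector over the event's involvedObject and source; a sketch of building one (object names are illustrative — in client-go the resulting string is passed to an Events(...).List call, whose exact signature depends on the client-go version):

    package main

    import (
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/fields"
    )

    func main() {
    	// Field selector matching scheduler events for a specific pod.
    	sel := fields.Set{
    		"involvedObject.kind":      "Pod",
    		"involvedObject.name":      "send-events-example", // illustrative
    		"involvedObject.namespace": "events-3939",
    		"source":                   "default-scheduler",
    	}.AsSelector().String()

    	opts := metav1.ListOptions{FieldSelector: sel}
    	fmt.Println(opts.FieldSelector)
    }

Swapping the source for the node name yields the kubelet-event query the second check above performs.
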
May 2 13:58:03.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:58:03.129: INFO: namespace events-3939 deletion completed in 42.107538395s • [SLOW TEST:50.283 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:58:03.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 2 13:58:03.263: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:58:22.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2818" for this suite. 
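
Annotation: the "submitted and removed" spec above exercises graceful deletion — the API server sets the pod's deletionTimestamp, the kubelet gets the grace period to stop the container (the "termination notice" step), and only then does the object disappear from the watch. A sketch of the delete options involved:

    package main

    import (
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// 30s is the pod default; passing 0 would force immediate removal.
    	grace := int64(30)
    	opts := metav1.DeleteOptions{GracePeriodSeconds: &grace}
    	fmt.Println(*opts.GracePeriodSeconds)
    }
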
May 2 13:58:28.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:58:28.303: INFO: namespace pods-2818 deletion completed in 6.129022602s • [SLOW TEST:25.174 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:58:28.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:58:28.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2584" for this suite. 
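
Annotation: the "Set QOS Class" spec above only checks that status.qosClass is populated. The class is derived from the pod's resources: requests equal to limits for every container gives Guaranteed (as in the QOSClass:"Guaranteed" dump earlier in this log), requests below limits gives Burstable, and no requests or limits gives BestEffort. A sketch of a Guaranteed container (values illustrative):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
    	// Same list used for requests and limits => Guaranteed QoS.
    	rl := corev1.ResourceList{
    		corev1.ResourceCPU:    resource.MustParse("100m"),
    		corev1.ResourceMemory: resource.MustParse("100Mi"),
    	}
    	c := corev1.Container{
    		Name:      "qos-container",
    		Image:     "k8s.gcr.io/pause:3.1", // illustrative
    		Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
    	}
    	fmt.Println(c.Resources.Limits.Cpu())
    }
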
May 2 13:58:50.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:58:50.564: INFO: namespace pods-2584 deletion completed in 22.103172188s • [SLOW TEST:22.260 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:58:50.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller May 2 13:58:50.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4638' May 2 13:58:50.932: INFO: stderr: "" May 2 13:58:50.932: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 2 13:58:50.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4638' May 2 13:58:51.042: INFO: stderr: "" May 2 13:58:51.043: INFO: stdout: "update-demo-nautilus-8ps2w update-demo-nautilus-kc2nt " May 2 13:58:51.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8ps2w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4638' May 2 13:58:51.154: INFO: stderr: "" May 2 13:58:51.154: INFO: stdout: "" May 2 13:58:51.154: INFO: update-demo-nautilus-8ps2w is created but not running May 2 13:58:56.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4638' May 2 13:58:56.263: INFO: stderr: "" May 2 13:58:56.263: INFO: stdout: "update-demo-nautilus-8ps2w update-demo-nautilus-kc2nt " May 2 13:58:56.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8ps2w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4638' May 2 13:58:56.354: INFO: stderr: "" May 2 13:58:56.354: INFO: stdout: "true" May 2 13:58:56.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8ps2w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4638' May 2 13:58:56.438: INFO: stderr: "" May 2 13:58:56.438: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 13:58:56.438: INFO: validating pod update-demo-nautilus-8ps2w May 2 13:58:56.443: INFO: got data: { "image": "nautilus.jpg" } May 2 13:58:56.443: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 13:58:56.443: INFO: update-demo-nautilus-8ps2w is verified up and running May 2 13:58:56.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kc2nt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4638' May 2 13:58:56.524: INFO: stderr: "" May 2 13:58:56.524: INFO: stdout: "true" May 2 13:58:56.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kc2nt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4638' May 2 13:58:56.610: INFO: stderr: "" May 2 13:58:56.610: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 13:58:56.610: INFO: validating pod update-demo-nautilus-kc2nt May 2 13:58:56.614: INFO: got data: { "image": "nautilus.jpg" } May 2 13:58:56.614: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 13:58:56.614: INFO: update-demo-nautilus-kc2nt is verified up and running STEP: rolling-update to new replication controller May 2 13:58:56.616: INFO: scanned /root for discovery docs: May 2 13:58:56.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4638' May 2 13:59:19.617: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 2 13:59:19.618: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 2 13:59:19.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4638' May 2 13:59:20.566: INFO: stderr: "" May 2 13:59:20.566: INFO: stdout: "update-demo-kitten-9b4p9 update-demo-kitten-vjgtg update-demo-nautilus-kc2nt " STEP: Replicas for name=update-demo: expected=2 actual=3 May 2 13:59:25.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4638' May 2 13:59:25.671: INFO: stderr: "" May 2 13:59:25.671: INFO: stdout: "update-demo-kitten-9b4p9 update-demo-kitten-vjgtg " May 2 13:59:25.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9b4p9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4638' May 2 13:59:25.769: INFO: stderr: "" May 2 13:59:25.769: INFO: stdout: "true" May 2 13:59:25.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9b4p9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4638' May 2 13:59:25.860: INFO: stderr: "" May 2 13:59:25.860: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 2 13:59:25.860: INFO: validating pod update-demo-kitten-9b4p9 May 2 13:59:25.864: INFO: got data: { "image": "kitten.jpg" } May 2 13:59:25.864: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 2 13:59:25.864: INFO: update-demo-kitten-9b4p9 is verified up and running May 2 13:59:25.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vjgtg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4638' May 2 13:59:25.953: INFO: stderr: "" May 2 13:59:25.953: INFO: stdout: "true" May 2 13:59:25.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vjgtg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4638' May 2 13:59:26.044: INFO: stderr: "" May 2 13:59:26.044: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 2 13:59:26.044: INFO: validating pod update-demo-kitten-vjgtg May 2 13:59:26.048: INFO: got data: { "image": "kitten.jpg" } May 2 13:59:26.048: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 2 13:59:26.048: INFO: update-demo-kitten-vjgtg is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:59:26.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4638" for this suite. 
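
Annotation: the run above drives `kubectl rolling-update`, which the tool itself reports as deprecated in favor of `rollout`. The declarative equivalent of what the command does by hand (scale kitten up, nautilus down, keep availability) is a Deployment rolling-update strategy; a sketch with illustrative parameters:

    package main

    import (
    	"fmt"

    	appsv1 "k8s.io/api/apps/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	maxUnavailable := intstr.FromInt(0) // keep every old pod until a replacement is ready
    	maxSurge := intstr.FromInt(1)       // allow one extra pod during the rollout
    	strategy := appsv1.DeploymentStrategy{
    		Type: appsv1.RollingUpdateDeploymentStrategyType,
    		RollingUpdate: &appsv1.RollingUpdateDeployment{
    			MaxUnavailable: &maxUnavailable,
    			MaxSurge:       &maxSurge,
    		},
    	}
    	fmt.Println(strategy.Type)
    }

This mirrors the "keep 2 pods available, don't exceed 3 pods" bounds the command prints above.
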
May 2 13:59:48.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 13:59:48.166: INFO: namespace kubectl-4638 deletion completed in 22.115020561s • [SLOW TEST:57.602 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 13:59:48.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 13:59:54.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2704" for this suite. May 2 14:00:00.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:00:00.586: INFO: namespace namespaces-2704 deletion completed in 6.104529926s STEP: Destroying namespace "nsdeletetest-7704" for this suite. May 2 14:00:00.588: INFO: Namespace nsdeletetest-7704 was already deleted STEP: Destroying namespace "nsdeletetest-7362" for this suite. 
May 2 14:00:06.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:00:06.692: INFO: namespace nsdeletetest-7362 deletion completed in 6.103264537s • [SLOW TEST:18.525 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:00:06.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 2 14:00:06.766: INFO: Waiting up to 5m0s for pod "downward-api-7ddd1f6c-4f0b-4b19-8395-61772f80d1c7" in namespace "downward-api-4867" to be "success or failure" May 2 14:00:06.785: INFO: Pod "downward-api-7ddd1f6c-4f0b-4b19-8395-61772f80d1c7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.00307ms May 2 14:00:08.789: INFO: Pod "downward-api-7ddd1f6c-4f0b-4b19-8395-61772f80d1c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022730998s May 2 14:00:11.150: INFO: Pod "downward-api-7ddd1f6c-4f0b-4b19-8395-61772f80d1c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.38412939s May 2 14:00:13.154: INFO: Pod "downward-api-7ddd1f6c-4f0b-4b19-8395-61772f80d1c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.388305856s STEP: Saw pod success May 2 14:00:13.154: INFO: Pod "downward-api-7ddd1f6c-4f0b-4b19-8395-61772f80d1c7" satisfied condition "success or failure" May 2 14:00:13.158: INFO: Trying to get logs from node iruya-worker pod downward-api-7ddd1f6c-4f0b-4b19-8395-61772f80d1c7 container dapi-container: STEP: delete the pod May 2 14:00:13.199: INFO: Waiting for pod downward-api-7ddd1f6c-4f0b-4b19-8395-61772f80d1c7 to disappear May 2 14:00:13.211: INFO: Pod downward-api-7ddd1f6c-4f0b-4b19-8395-61772f80d1c7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:00:13.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4867" for this suite. 
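
Annotation: the Downward API spec above reads limits.cpu/limits.memory through resourceFieldRef env vars on a container that declares no limits; in that case the kubelet substitutes the node's allocatable capacity, which is the default the test verifies. A sketch of the env wiring (names illustrative):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	env := []corev1.EnvVar{
    		{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
    			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
    		}},
    		{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
    			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
    		}},
    	}
    	fmt.Println(env[0].Name, env[1].Name)
    }
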
May 2 14:00:19.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:00:19.301: INFO: namespace downward-api-4867 deletion completed in 6.084852292s • [SLOW TEST:12.610 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:00:19.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 2 14:00:19.368: INFO: Waiting up to 5m0s for pod "pod-3bb0ea19-228e-4c88-908a-cb26b706f62f" in namespace "emptydir-7209" to be "success or failure" May 2 14:00:19.372: INFO: Pod "pod-3bb0ea19-228e-4c88-908a-cb26b706f62f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.973955ms May 2 14:00:21.421: INFO: Pod "pod-3bb0ea19-228e-4c88-908a-cb26b706f62f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052934455s May 2 14:00:23.425: INFO: Pod "pod-3bb0ea19-228e-4c88-908a-cb26b706f62f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057095446s STEP: Saw pod success May 2 14:00:23.425: INFO: Pod "pod-3bb0ea19-228e-4c88-908a-cb26b706f62f" satisfied condition "success or failure" May 2 14:00:23.428: INFO: Trying to get logs from node iruya-worker pod pod-3bb0ea19-228e-4c88-908a-cb26b706f62f container test-container: STEP: delete the pod May 2 14:00:23.479: INFO: Waiting for pod pod-3bb0ea19-228e-4c88-908a-cb26b706f62f to disappear May 2 14:00:23.532: INFO: Pod pod-3bb0ea19-228e-4c88-908a-cb26b706f62f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:00:23.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7209" for this suite. 
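
Annotation: in the emptyDir spec names, "(root,0666,tmpfs)" means the probe file is written as root with mode 0666 on a memory-backed volume. The tmpfs part comes from the volume's medium; a sketch:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Medium "Memory" mounts a tmpfs instead of node disk; the 0666 mode is
    	// applied by the test container when it creates the file, not by the volume.
    	vol := corev1.Volume{
    		Name: "test-volume",
    		VolumeSource: corev1.VolumeSource{
    			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
    		},
    	}
    	fmt.Println(vol.VolumeSource.EmptyDir.Medium)
    }
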
May 2 14:00:29.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:00:29.650: INFO: namespace emptydir-7209 deletion completed in 6.114229102s • [SLOW TEST:10.348 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:00:29.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 2 14:00:33.824: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:00:33.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2299" for this suite. 
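
Annotation: the Container Runtime spec above checks TerminationMessagePolicy FallbackToLogsOnError — the message comes from the file at TerminationMessagePath when the container writes one (the "OK" matched above), and falls back to the log tail only when the container fails without writing it. A sketch (image and command illustrative):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	c := corev1.Container{
    		Name:    "termination-message-container",
    		Image:   "docker.io/library/busybox:1.29", // illustrative
    		Command: []string{"/bin/sh", "-c", "printf OK > /dev/termination-log"},

    		TerminationMessagePath:   "/dev/termination-log",
    		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    	}
    	fmt.Println(c.TerminationMessagePolicy)
    }
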
May 2 14:00:39.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:00:39.968: INFO: namespace container-runtime-2299 deletion completed in 6.114861897s • [SLOW TEST:10.318 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:00:39.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 2 14:00:40.032: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
May 2 14:00:40.845: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 2 14:00:43.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724024840, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724024840, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724024840, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724024840, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 14:00:45.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724024840, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724024840, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724024840, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724024840, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 14:00:47.798: INFO: Waited 645.36438ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:00:48.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6521" for this suite. 
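
Annotation: "Registering the sample API server" above means creating an APIService object that tells the aggregation layer to proxy one API group/version to an in-cluster Service, then waiting for the backing Deployment to become available (the status dumps above). A sketch assuming the sample "wardle" group — the name, namespace, CA bundle, and priorities here are illustrative:

    package main

    import (
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
    )

    func main() {
    	var caBundle []byte // PEM CA that signed the sample apiserver's serving cert
    	apiService := apiregv1.APIService{
    		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"},
    		Spec: apiregv1.APIServiceSpec{
    			Service: &apiregv1.ServiceReference{
    				Namespace: "aggregator-6521", // illustrative
    				Name:      "sample-api",      // illustrative
    			},
    			Group:                "wardle.k8s.io",
    			Version:              "v1alpha1",
    			CABundle:             caBundle,
    			GroupPriorityMinimum: 2000,
    			VersionPriority:      200,
    		},
    	}
    	fmt.Println(apiService.Name)
    }
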
May 2 14:00:54.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:00:54.637: INFO: namespace aggregator-6521 deletion completed in 6.111347456s • [SLOW TEST:14.668 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:00:54.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 2 14:00:58.746: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 2 14:01:13.837: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:01:13.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9032" for this suite. 
May 2 14:01:19.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:01:19.943: INFO: namespace pods-9032 deletion completed in 6.098555915s • [SLOW TEST:25.305 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:01:19.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 2 14:01:20.075: INFO: Waiting up to 5m0s for pod "pod-ce5ddd94-a1ba-4189-953a-22d385fe4dcf" in namespace "emptydir-3387" to be "success or failure" May 2 14:01:20.087: INFO: Pod "pod-ce5ddd94-a1ba-4189-953a-22d385fe4dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.526075ms May 2 14:01:22.090: INFO: Pod "pod-ce5ddd94-a1ba-4189-953a-22d385fe4dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014394629s May 2 14:01:24.094: INFO: Pod "pod-ce5ddd94-a1ba-4189-953a-22d385fe4dcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018746797s STEP: Saw pod success May 2 14:01:24.094: INFO: Pod "pod-ce5ddd94-a1ba-4189-953a-22d385fe4dcf" satisfied condition "success or failure" May 2 14:01:24.097: INFO: Trying to get logs from node iruya-worker2 pod pod-ce5ddd94-a1ba-4189-953a-22d385fe4dcf container test-container: STEP: delete the pod May 2 14:01:24.171: INFO: Waiting for pod pod-ce5ddd94-a1ba-4189-953a-22d385fe4dcf to disappear May 2 14:01:24.188: INFO: Pod pod-ce5ddd94-a1ba-4189-953a-22d385fe4dcf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:01:24.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3387" for this suite. 
May 2 14:01:30.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:01:30.560: INFO: namespace emptydir-3387 deletion completed in 6.368214506s • [SLOW TEST:10.617 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:01:30.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-21fd5b21-b3cc-4614-b033-720a19ff034b STEP: Creating a pod to test consume secrets May 2 14:01:30.651: INFO: Waiting up to 5m0s for pod "pod-secrets-ce9c01c3-452b-452c-9cd0-e7093caf25c2" in namespace "secrets-6545" to be "success or failure" May 2 14:01:30.654: INFO: Pod "pod-secrets-ce9c01c3-452b-452c-9cd0-e7093caf25c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.801336ms May 2 14:01:32.756: INFO: Pod "pod-secrets-ce9c01c3-452b-452c-9cd0-e7093caf25c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105069952s May 2 14:01:34.760: INFO: Pod "pod-secrets-ce9c01c3-452b-452c-9cd0-e7093caf25c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109426199s STEP: Saw pod success May 2 14:01:34.760: INFO: Pod "pod-secrets-ce9c01c3-452b-452c-9cd0-e7093caf25c2" satisfied condition "success or failure" May 2 14:01:34.764: INFO: Trying to get logs from node iruya-worker pod pod-secrets-ce9c01c3-452b-452c-9cd0-e7093caf25c2 container secret-volume-test: STEP: delete the pod May 2 14:01:34.827: INFO: Waiting for pod pod-secrets-ce9c01c3-452b-452c-9cd0-e7093caf25c2 to disappear May 2 14:01:34.857: INFO: Pod pod-secrets-ce9c01c3-452b-452c-9cd0-e7093caf25c2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:01:34.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6545" for this suite. 
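Consuming one secret through multiple volumes, as the test above does, is the same SecretVolumeSource repeated under different volume names and mount paths. A sketch under that assumption; the secret name, paths, and command are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // One secret, two volumes, two mount paths; the test pod reads the
    // same key through both mounts.
    secretVol := func(volName string) corev1.Volume {
        return corev1.Volume{
            Name: volName,
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "demo-secret"},
            },
        }
    }
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes:       []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
            Containers: []corev1.Container{{
                Name:    "secret-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
                    {Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
                },
            }},
        },
    }
    fmt.Println(len(pod.Spec.Volumes), "volumes backed by", pod.Spec.Volumes[0].Secret.SecretName)
}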
May 2 14:01:40.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:01:40.961: INFO: namespace secrets-6545 deletion completed in 6.10074311s • [SLOW TEST:10.400 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:01:40.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-q725k in namespace proxy-8775 I0502 14:01:41.118491 6 runners.go:180] Created replication controller with name: proxy-service-q725k, namespace: proxy-8775, replica count: 1 I0502 14:01:42.169023 6 runners.go:180] proxy-service-q725k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0502 14:01:43.169416 6 runners.go:180] proxy-service-q725k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0502 14:01:44.169609 6 runners.go:180] proxy-service-q725k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0502 14:01:45.169851 6 runners.go:180] proxy-service-q725k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0502 14:01:46.170074 6 runners.go:180] proxy-service-q725k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0502 14:01:47.170243 6 runners.go:180] proxy-service-q725k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0502 14:01:48.170388 6 runners.go:180] proxy-service-q725k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0502 14:01:49.170580 6 runners.go:180] proxy-service-q725k Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 2 14:01:49.175: INFO: setup took 8.124014518s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 2 14:01:49.183: INFO: (0) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 8.787176ms) May 2 14:01:49.183: INFO: (0) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... 
(200; 8.681143ms) May 2 14:01:49.184: INFO: (0) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 8.76834ms) May 2 14:01:49.184: INFO: (0) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:1080/proxy/: test<... (200; 8.693069ms) May 2 14:01:49.184: INFO: (0) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 8.909886ms) May 2 14:01:49.184: INFO: (0) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 8.898895ms) May 2 14:01:49.184: INFO: (0) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 8.92035ms) May 2 14:01:49.184: INFO: (0) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname2/proxy/: bar (200; 8.955274ms) May 2 14:01:49.184: INFO: (0) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname1/proxy/: foo (200; 9.300661ms) May 2 14:01:49.185: INFO: (0) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 9.870803ms) May 2 14:01:49.185: INFO: (0) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 10.579871ms) May 2 14:01:49.190: INFO: (0) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: ... (200; 3.30057ms) May 2 14:01:49.198: INFO: (1) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 4.205885ms) May 2 14:01:49.198: INFO: (1) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 4.23074ms) May 2 14:01:49.198: INFO: (1) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 4.927649ms) May 2 14:01:49.199: INFO: (1) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: test<... (200; 5.627931ms) May 2 14:01:49.199: INFO: (1) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 5.660648ms) May 2 14:01:49.199: INFO: (1) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 5.782295ms) May 2 14:01:49.199: INFO: (1) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 5.890161ms) May 2 14:01:49.199: INFO: (1) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname1/proxy/: tls baz (200; 5.942245ms) May 2 14:01:49.199: INFO: (1) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname2/proxy/: bar (200; 5.899261ms) May 2 14:01:49.203: INFO: (2) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 3.377435ms) May 2 14:01:49.203: INFO: (2) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 3.592595ms) May 2 14:01:49.203: INFO: (2) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:460/proxy/: tls baz (200; 3.990553ms) May 2 14:01:49.204: INFO: (2) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: ... 
(200; 5.447968ms) May 2 14:01:49.205: INFO: (2) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname2/proxy/: bar (200; 5.550946ms) May 2 14:01:49.205: INFO: (2) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 5.488823ms) May 2 14:01:49.205: INFO: (2) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 5.574723ms) May 2 14:01:49.205: INFO: (2) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 5.535058ms) May 2 14:01:49.205: INFO: (2) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:1080/proxy/: test<... (200; 5.553515ms) May 2 14:01:49.205: INFO: (2) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 5.805484ms) May 2 14:01:49.205: INFO: (2) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 5.942124ms) May 2 14:01:49.205: INFO: (2) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 5.975568ms) May 2 14:01:49.206: INFO: (2) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname1/proxy/: tls baz (200; 6.121286ms) May 2 14:01:49.206: INFO: (2) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname1/proxy/: foo (200; 6.184493ms) May 2 14:01:49.206: INFO: (2) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname2/proxy/: tls qux (200; 6.125324ms) May 2 14:01:49.210: INFO: (3) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... (200; 3.974273ms) May 2 14:01:49.210: INFO: (3) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 3.99462ms) May 2 14:01:49.210: INFO: (3) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 3.99961ms) May 2 14:01:49.210: INFO: (3) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 4.174164ms) May 2 14:01:49.210: INFO: (3) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:1080/proxy/: test<... 
(200; 4.062398ms) May 2 14:01:49.210: INFO: (3) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 4.124424ms) May 2 14:01:49.210: INFO: (3) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 4.220465ms) May 2 14:01:49.210: INFO: (3) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 4.51344ms) May 2 14:01:49.210: INFO: (3) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:460/proxy/: tls baz (200; 4.628342ms) May 2 14:01:49.211: INFO: (3) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 4.710528ms) May 2 14:01:49.211: INFO: (3) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname2/proxy/: tls qux (200; 4.756715ms) May 2 14:01:49.211: INFO: (3) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname1/proxy/: tls baz (200; 4.750303ms) May 2 14:01:49.211: INFO: (3) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname2/proxy/: bar (200; 5.241044ms) May 2 14:01:49.211: INFO: (3) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 5.361586ms) May 2 14:01:49.211: INFO: (3) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: test (200; 4.317541ms) May 2 14:01:49.216: INFO: (4) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:460/proxy/: tls baz (200; 4.538563ms) May 2 14:01:49.216: INFO: (4) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: test<... (200; 4.587217ms) May 2 14:01:49.216: INFO: (4) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... (200; 4.802571ms) May 2 14:01:49.216: INFO: (4) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname2/proxy/: tls qux (200; 4.828231ms) May 2 14:01:49.217: INFO: (4) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 5.048659ms) May 2 14:01:49.217: INFO: (4) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 5.531199ms) May 2 14:01:49.217: INFO: (4) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname2/proxy/: bar (200; 5.592347ms) May 2 14:01:49.217: INFO: (4) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 5.597236ms) May 2 14:01:49.217: INFO: (4) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname1/proxy/: tls baz (200; 5.691265ms) May 2 14:01:49.217: INFO: (4) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 5.718915ms) May 2 14:01:49.220: INFO: (5) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:460/proxy/: tls baz (200; 2.314334ms) May 2 14:01:49.221: INFO: (5) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:1080/proxy/: test<... 
(200; 3.411238ms) May 2 14:01:49.221: INFO: (5) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 3.370589ms) May 2 14:01:49.221: INFO: (5) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 3.473731ms) May 2 14:01:49.221: INFO: (5) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 3.708564ms) May 2 14:01:49.221: INFO: (5) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 3.869339ms) May 2 14:01:49.221: INFO: (5) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 4.152599ms) May 2 14:01:49.222: INFO: (5) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 4.172142ms) May 2 14:01:49.222: INFO: (5) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... (200; 4.142839ms) May 2 14:01:49.222: INFO: (5) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: test (200; 3.616722ms) May 2 14:01:49.227: INFO: (6) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 4.063117ms) May 2 14:01:49.227: INFO: (6) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 4.113266ms) May 2 14:01:49.227: INFO: (6) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... (200; 4.16152ms) May 2 14:01:49.227: INFO: (6) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: test<... (200; 4.537937ms) May 2 14:01:49.227: INFO: (6) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 4.540263ms) May 2 14:01:49.227: INFO: (6) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 4.601236ms) May 2 14:01:49.227: INFO: (6) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname1/proxy/: foo (200; 4.916779ms) May 2 14:01:49.228: INFO: (6) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 5.072142ms) May 2 14:01:49.228: INFO: (6) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname2/proxy/: bar (200; 5.139342ms) May 2 14:01:49.228: INFO: (6) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname1/proxy/: tls baz (200; 5.068574ms) May 2 14:01:49.228: INFO: (6) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 5.119372ms) May 2 14:01:49.228: INFO: (6) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname2/proxy/: tls qux (200; 5.227794ms) May 2 14:01:49.231: INFO: (7) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 3.349769ms) May 2 14:01:49.231: INFO: (7) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 3.399698ms) May 2 14:01:49.231: INFO: (7) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... (200; 3.486048ms) May 2 14:01:49.231: INFO: (7) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:1080/proxy/: test<... 
(200; 3.463702ms) May 2 14:01:49.231: INFO: (7) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 3.553571ms) May 2 14:01:49.231: INFO: (7) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 3.645022ms) May 2 14:01:49.231: INFO: (7) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:460/proxy/: tls baz (200; 3.679377ms) May 2 14:01:49.231: INFO: (7) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: ... (200; 2.032555ms) May 2 14:01:49.235: INFO: (8) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:460/proxy/: tls baz (200; 2.08881ms) May 2 14:01:49.235: INFO: (8) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 2.251191ms) May 2 14:01:49.235: INFO: (8) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 2.457415ms) May 2 14:01:49.235: INFO: (8) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 2.462021ms) May 2 14:01:49.237: INFO: (8) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 3.934238ms) May 2 14:01:49.237: INFO: (8) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: test (200; 4.543213ms) May 2 14:01:49.238: INFO: (8) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname1/proxy/: tls baz (200; 4.815072ms) May 2 14:01:49.238: INFO: (8) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:1080/proxy/: test<... (200; 4.859534ms) May 2 14:01:49.238: INFO: (8) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 4.880229ms) May 2 14:01:49.238: INFO: (8) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 4.939255ms) May 2 14:01:49.238: INFO: (8) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname2/proxy/: tls qux (200; 4.918768ms) May 2 14:01:49.238: INFO: (8) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 4.998294ms) May 2 14:01:49.238: INFO: (8) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname2/proxy/: bar (200; 5.069416ms) May 2 14:01:49.242: INFO: (9) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 3.779992ms) May 2 14:01:49.242: INFO: (9) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:460/proxy/: tls baz (200; 3.895794ms) May 2 14:01:49.242: INFO: (9) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 3.937061ms) May 2 14:01:49.242: INFO: (9) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 3.983998ms) May 2 14:01:49.242: INFO: (9) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: test<... (200; 4.007558ms) May 2 14:01:49.242: INFO: (9) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... 
(200; 3.966964ms) May 2 14:01:49.242: INFO: (9) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 3.97865ms) May 2 14:01:49.242: INFO: (9) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 4.083707ms) May 2 14:01:49.243: INFO: (9) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 5.140656ms) May 2 14:01:49.244: INFO: (9) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 5.450343ms) May 2 14:01:49.244: INFO: (9) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 5.660486ms) May 2 14:01:49.244: INFO: (9) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname1/proxy/: foo (200; 5.648027ms) May 2 14:01:49.244: INFO: (9) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname2/proxy/: bar (200; 5.601498ms) May 2 14:01:49.244: INFO: (9) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname1/proxy/: tls baz (200; 5.711567ms) May 2 14:01:49.244: INFO: (9) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname2/proxy/: tls qux (200; 5.716131ms) May 2 14:01:49.248: INFO: (10) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:460/proxy/: tls baz (200; 4.450112ms) May 2 14:01:49.248: INFO: (10) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 4.417683ms) May 2 14:01:49.248: INFO: (10) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:1080/proxy/: test<... (200; 4.395342ms) May 2 14:01:49.248: INFO: (10) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... (200; 4.447649ms) May 2 14:01:49.248: INFO: (10) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 4.506224ms) May 2 14:01:49.248: INFO: (10) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 4.52422ms) May 2 14:01:49.248: INFO: (10) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 4.456602ms) May 2 14:01:49.248: INFO: (10) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 4.472191ms) May 2 14:01:49.248: INFO: (10) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname2/proxy/: tls qux (200; 4.535954ms) May 2 14:01:49.248: INFO: (10) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: test<... (200; 4.253237ms) May 2 14:01:49.253: INFO: (11) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 4.360893ms) May 2 14:01:49.253: INFO: (11) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 4.353358ms) May 2 14:01:49.253: INFO: (11) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 4.314159ms) May 2 14:01:49.253: INFO: (11) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: ... 
(200; 4.360964ms) May 2 14:01:49.254: INFO: (11) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 4.356645ms) May 2 14:01:49.254: INFO: (11) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 4.39905ms) May 2 14:01:49.256: INFO: (11) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname2/proxy/: bar (200; 6.510876ms) May 2 14:01:49.256: INFO: (11) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname2/proxy/: tls qux (200; 6.517311ms) May 2 14:01:49.256: INFO: (11) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname1/proxy/: foo (200; 6.676812ms) May 2 14:01:49.256: INFO: (11) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 6.668ms) May 2 14:01:49.256: INFO: (11) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname1/proxy/: tls baz (200; 6.766613ms) May 2 14:01:49.256: INFO: (11) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 6.799131ms) May 2 14:01:49.258: INFO: (12) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:460/proxy/: tls baz (200; 2.409904ms) May 2 14:01:49.259: INFO: (12) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:1080/proxy/: test<... (200; 2.768869ms) May 2 14:01:49.263: INFO: (12) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname2/proxy/: bar (200; 6.775796ms) May 2 14:01:49.263: INFO: (12) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 6.72384ms) May 2 14:01:49.263: INFO: (12) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 6.835686ms) May 2 14:01:49.263: INFO: (12) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 6.78309ms) May 2 14:01:49.263: INFO: (12) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname2/proxy/: tls qux (200; 6.841461ms) May 2 14:01:49.263: INFO: (12) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... (200; 6.786784ms) May 2 14:01:49.263: INFO: (12) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 6.81614ms) May 2 14:01:49.263: INFO: (12) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 6.839803ms) May 2 14:01:49.263: INFO: (12) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 6.83975ms) May 2 14:01:49.263: INFO: (12) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 6.860441ms) May 2 14:01:49.263: INFO: (12) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 7.065824ms) May 2 14:01:49.263: INFO: (12) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname1/proxy/: tls baz (200; 7.024501ms) May 2 14:01:49.263: INFO: (12) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: test<... 
(200; 6.203595ms) May 2 14:01:49.269: INFO: (13) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 6.201691ms) May 2 14:01:49.269: INFO: (13) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 6.201953ms) May 2 14:01:49.269: INFO: (13) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 6.161189ms) May 2 14:01:49.269: INFO: (13) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... (200; 6.174316ms) May 2 14:01:49.269: INFO: (13) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 6.190676ms) May 2 14:01:49.269: INFO: (13) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname1/proxy/: foo (200; 6.201591ms) May 2 14:01:49.269: INFO: (13) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 6.224552ms) May 2 14:01:49.269: INFO: (13) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 6.226749ms) May 2 14:01:49.269: INFO: (13) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname2/proxy/: tls qux (200; 6.298784ms) May 2 14:01:49.273: INFO: (14) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 3.207569ms) May 2 14:01:49.273: INFO: (14) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 3.566219ms) May 2 14:01:49.273: INFO: (14) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:1080/proxy/: test<... (200; 3.557301ms) May 2 14:01:49.273: INFO: (14) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:460/proxy/: tls baz (200; 3.654754ms) May 2 14:01:49.273: INFO: (14) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 3.789079ms) May 2 14:01:49.273: INFO: (14) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 3.887727ms) May 2 14:01:49.273: INFO: (14) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 3.809046ms) May 2 14:01:49.273: INFO: (14) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 3.923152ms) May 2 14:01:49.273: INFO: (14) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... (200; 3.913711ms) May 2 14:01:49.273: INFO: (14) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: test (200; 1.824895ms) May 2 14:01:49.278: INFO: (15) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 3.516159ms) May 2 14:01:49.278: INFO: (15) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:460/proxy/: tls baz (200; 3.562047ms) May 2 14:01:49.278: INFO: (15) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:1080/proxy/: test<... (200; 3.556705ms) May 2 14:01:49.278: INFO: (15) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 3.557397ms) May 2 14:01:49.278: INFO: (15) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... 
(200; 3.678956ms) May 2 14:01:49.279: INFO: (15) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 4.365704ms) May 2 14:01:49.279: INFO: (15) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 4.577216ms) May 2 14:01:49.279: INFO: (15) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 4.600789ms) May 2 14:01:49.279: INFO: (15) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 4.691463ms) May 2 14:01:49.279: INFO: (15) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname1/proxy/: foo (200; 4.712219ms) May 2 14:01:49.280: INFO: (15) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname1/proxy/: tls baz (200; 4.856504ms) May 2 14:01:49.280: INFO: (15) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: test<... (200; 3.08961ms) May 2 14:01:49.283: INFO: (16) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 3.2876ms) May 2 14:01:49.284: INFO: (16) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 3.714521ms) May 2 14:01:49.284: INFO: (16) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 3.895384ms) May 2 14:01:49.284: INFO: (16) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 3.864758ms) May 2 14:01:49.284: INFO: (16) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 4.573386ms) May 2 14:01:49.284: INFO: (16) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 4.510563ms) May 2 14:01:49.284: INFO: (16) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 4.568073ms) May 2 14:01:49.285: INFO: (16) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname2/proxy/: tls qux (200; 4.857968ms) May 2 14:01:49.285: INFO: (16) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname1/proxy/: foo (200; 4.933162ms) May 2 14:01:49.285: INFO: (16) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... (200; 4.893077ms) May 2 14:01:49.285: INFO: (16) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: ... (200; 2.333233ms) May 2 14:01:49.287: INFO: (17) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 2.4131ms) May 2 14:01:49.290: INFO: (17) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname2/proxy/: tls qux (200; 5.130335ms) May 2 14:01:49.290: INFO: (17) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname1/proxy/: foo (200; 4.989369ms) May 2 14:01:49.290: INFO: (17) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:1080/proxy/: test<... 
(200; 5.158527ms) May 2 14:01:49.291: INFO: (17) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 5.437111ms) May 2 14:01:49.291: INFO: (17) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname1/proxy/: tls baz (200; 5.428672ms) May 2 14:01:49.291: INFO: (17) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname2/proxy/: bar (200; 5.367767ms) May 2 14:01:49.291: INFO: (17) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 5.674764ms) May 2 14:01:49.291: INFO: (17) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 5.838911ms) May 2 14:01:49.291: INFO: (17) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 6.06156ms) May 2 14:01:49.291: INFO: (17) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 5.96609ms) May 2 14:01:49.291: INFO: (17) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:460/proxy/: tls baz (200; 6.066009ms) May 2 14:01:49.291: INFO: (17) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 6.072263ms) May 2 14:01:49.291: INFO: (17) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 6.14854ms) May 2 14:01:49.291: INFO: (17) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: test (200; 4.514244ms) May 2 14:01:49.296: INFO: (18) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname2/proxy/: tls qux (200; 4.557845ms) May 2 14:01:49.296: INFO: (18) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 4.870433ms) May 2 14:01:49.297: INFO: (18) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 5.293852ms) May 2 14:01:49.297: INFO: (18) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname2/proxy/: bar (200; 5.391542ms) May 2 14:01:49.297: INFO: (18) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 5.386716ms) May 2 14:01:49.297: INFO: (18) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 5.476158ms) May 2 14:01:49.297: INFO: (18) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:1080/proxy/: test<... (200; 5.511307ms) May 2 14:01:49.297: INFO: (18) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 5.43397ms) May 2 14:01:49.297: INFO: (18) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... (200; 5.644342ms) May 2 14:01:49.297: INFO: (18) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 5.633302ms) May 2 14:01:49.297: INFO: (18) /api/v1/namespaces/proxy-8775/services/https:proxy-service-q725k:tlsportname1/proxy/: tls baz (200; 5.893454ms) May 2 14:01:49.302: INFO: (19) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:162/proxy/: bar (200; 4.115369ms) May 2 14:01:49.302: INFO: (19) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:1080/proxy/: test<... 
(200; 4.193061ms) May 2 14:01:49.302: INFO: (19) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh/proxy/: test (200; 4.156062ms) May 2 14:01:49.302: INFO: (19) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:162/proxy/: bar (200; 4.808685ms) May 2 14:01:49.302: INFO: (19) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:160/proxy/: foo (200; 4.955739ms) May 2 14:01:49.302: INFO: (19) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname2/proxy/: bar (200; 4.994063ms) May 2 14:01:49.302: INFO: (19) /api/v1/namespaces/proxy-8775/pods/proxy-service-q725k-h4llh:160/proxy/: foo (200; 5.062238ms) May 2 14:01:49.302: INFO: (19) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:460/proxy/: tls baz (200; 5.100752ms) May 2 14:01:49.303: INFO: (19) /api/v1/namespaces/proxy-8775/services/proxy-service-q725k:portname1/proxy/: foo (200; 5.225694ms) May 2 14:01:49.303: INFO: (19) /api/v1/namespaces/proxy-8775/services/http:proxy-service-q725k:portname1/proxy/: foo (200; 5.434366ms) May 2 14:01:49.303: INFO: (19) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:462/proxy/: tls qux (200; 5.354491ms) May 2 14:01:49.303: INFO: (19) /api/v1/namespaces/proxy-8775/pods/http:proxy-service-q725k-h4llh:1080/proxy/: ... (200; 5.412588ms) May 2 14:01:49.303: INFO: (19) /api/v1/namespaces/proxy-8775/pods/https:proxy-service-q725k-h4llh:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-aa68592f-0506-4813-8a72-ff0998a57588 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:01:58.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8305" for this suite. 
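The 320 proxy attempts logged above are 20 rounds over 16 endpoint variants of the apiserver proxy path, which has a fixed shape for pods and services. A small sketch that reconstructs two of the URLs actually seen in this run:

package main

import "fmt"

func main() {
    // Apiserver proxy paths follow a fixed shape:
    //   pods:     /api/v1/namespaces/<ns>/pods/[<scheme>:]<pod>[:<port>]/proxy/<path>
    //   services: /api/v1/namespaces/<ns>/services/[<scheme>:]<svc>:<portname>/proxy/<path>
    ns, pod, port := "proxy-8775", "proxy-service-q725k-h4llh", 160
    fmt.Printf("/api/v1/namespaces/%s/pods/%s:%d/proxy/\n", ns, pod, port)
    fmt.Printf("/api/v1/namespaces/%s/services/https:%s:%s/proxy/\n", ns, "proxy-service-q725k", "tlsportname1")
}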
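The empty-key case never reaches a pod: constructing the secret is legal client-side, and it is apiserver validation that rejects the create, which is the failure the test expects. A sketch assuming k8s.io/api types; the secret name is illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A secret whose data map contains an empty key. This builds fine
    // client-side; the apiserver is what says no on create.
    secret := corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-demo"},
        Data:       map[string][]byte{"": []byte("value")},
    }
    fmt.Println(len(secret.Data))
}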
May 2 14:02:04.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:02:04.479: INFO: namespace secrets-8305 deletion completed in 6.137937307s • [SLOW TEST:6.201 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:02:04.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 2 14:02:04.602: INFO: Waiting up to 5m0s for pod "downward-api-5ea98c02-8b7b-4cc4-bf62-c1f71b22af21" in namespace "downward-api-6302" to be "success or failure" May 2 14:02:04.605: INFO: Pod "downward-api-5ea98c02-8b7b-4cc4-bf62-c1f71b22af21": Phase="Pending", Reason="", readiness=false. Elapsed: 3.25799ms May 2 14:02:06.616: INFO: Pod "downward-api-5ea98c02-8b7b-4cc4-bf62-c1f71b22af21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014413123s May 2 14:02:08.620: INFO: Pod "downward-api-5ea98c02-8b7b-4cc4-bf62-c1f71b22af21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018318002s STEP: Saw pod success May 2 14:02:08.620: INFO: Pod "downward-api-5ea98c02-8b7b-4cc4-bf62-c1f71b22af21" satisfied condition "success or failure" May 2 14:02:08.623: INFO: Trying to get logs from node iruya-worker pod downward-api-5ea98c02-8b7b-4cc4-bf62-c1f71b22af21 container dapi-container: STEP: delete the pod May 2 14:02:08.856: INFO: Waiting for pod downward-api-5ea98c02-8b7b-4cc4-bf62-c1f71b22af21 to disappear May 2 14:02:08.909: INFO: Pod downward-api-5ea98c02-8b7b-4cc4-bf62-c1f71b22af21 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:02:08.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6302" for this suite. 
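Exposing the pod UID as an environment variable, as exercised above, is a one-field downward API reference. A sketch assuming k8s.io/api types; the variable and container names are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // metadata.uid is resolved by the kubelet at container start and
    // injected as an ordinary environment variable.
    c := corev1.Container{
        Name:    "dapi-container",
        Image:   "busybox",
        Command: []string{"sh", "-c", "env"},
        Env: []corev1.EnvVar{{
            Name: "POD_UID",
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
            },
        }},
    }
    fmt.Println(c.Env[0].Name, "<-", c.Env[0].ValueFrom.FieldRef.FieldPath)
}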
May 2 14:02:14.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:02:15.027: INFO: namespace downward-api-6302 deletion completed in 6.114214072s • [SLOW TEST:10.547 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:02:15.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 14:02:15.092: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e323fe3f-9eac-4d78-9ccb-c3bcf4308ac5" in namespace "projected-7637" to be "success or failure" May 2 14:02:15.095: INFO: Pod "downwardapi-volume-e323fe3f-9eac-4d78-9ccb-c3bcf4308ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.611669ms May 2 14:02:17.099: INFO: Pod "downwardapi-volume-e323fe3f-9eac-4d78-9ccb-c3bcf4308ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007320638s May 2 14:02:19.103: INFO: Pod "downwardapi-volume-e323fe3f-9eac-4d78-9ccb-c3bcf4308ac5": Phase="Running", Reason="", readiness=true. Elapsed: 4.011719062s May 2 14:02:21.122: INFO: Pod "downwardapi-volume-e323fe3f-9eac-4d78-9ccb-c3bcf4308ac5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03033525s STEP: Saw pod success May 2 14:02:21.122: INFO: Pod "downwardapi-volume-e323fe3f-9eac-4d78-9ccb-c3bcf4308ac5" satisfied condition "success or failure" May 2 14:02:21.125: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e323fe3f-9eac-4d78-9ccb-c3bcf4308ac5 container client-container: STEP: delete the pod May 2 14:02:21.148: INFO: Waiting for pod downwardapi-volume-e323fe3f-9eac-4d78-9ccb-c3bcf4308ac5 to disappear May 2 14:02:21.177: INFO: Pod downwardapi-volume-e323fe3f-9eac-4d78-9ccb-c3bcf4308ac5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:02:21.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7637" for this suite. 
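The memory-limit default checked above comes from a downward API resourceFieldRef inside a projected volume: when the container sets no memory limit, the kubelet writes node allocatable memory into the file instead. A sketch under that reading; the file path and container name are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // A projected downwardAPI file exposing the container's memory limit.
    // With no limit set on the container, node allocatable is substituted,
    // which is what the test asserts.
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "memory_limit",
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.memory",
                            },
                        }},
                    },
                }},
            },
        },
    }
    fmt.Println(vol.Projected.Sources[0].DownwardAPI.Items[0].Path)
}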
May 2 14:02:27.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:02:27.331: INFO: namespace projected-7637 deletion completed in 6.149931953s • [SLOW TEST:12.304 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:02:27.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:02:31.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1785" for this suite. 
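The terminated-reason check needs only a container that always fails and a restart policy that lets it stay terminated. A sketch assuming k8s.io/api types; the names are illustrative, and the "Error" reason noted in the comment is the kubelet's reason string for a nonzero exit:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A command that always fails, with no restarts, so the container
    // settles into a terminated state. The test then reads
    // status.containerStatuses[0].state.terminated and checks that
    // Reason ("Error" for a nonzero exit) and ExitCode are populated.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "bin-false-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "bin-false",
                Image:   "busybox",
                Command: []string{"/bin/false"},
            }},
        },
    }
    fmt.Println(pod.Spec.Containers[0].Command)
}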
May 2 14:02:37.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:02:37.542: INFO: namespace kubelet-test-1785 deletion completed in 6.111407897s • [SLOW TEST:10.211 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:02:37.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7263 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7263 STEP: Creating statefulset with conflicting port in namespace statefulset-7263 STEP: Waiting until pod test-pod will start running in namespace statefulset-7263 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7263 May 2 14:02:41.689: INFO: Observed stateful pod in namespace: statefulset-7263, name: ss-0, uid: de7c2325-1178-4acf-a7f6-82dec40d78ce, status phase: Pending. Waiting for statefulset controller to delete. May 2 14:02:42.149: INFO: Observed stateful pod in namespace: statefulset-7263, name: ss-0, uid: de7c2325-1178-4acf-a7f6-82dec40d78ce, status phase: Failed. Waiting for statefulset controller to delete. May 2 14:02:42.245: INFO: Observed stateful pod in namespace: statefulset-7263, name: ss-0, uid: de7c2325-1178-4acf-a7f6-82dec40d78ce, status phase: Failed. Waiting for statefulset controller to delete. 
May 2 14:02:42.287: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7263 STEP: Removing pod with conflicting port in namespace statefulset-7263 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-7263 and is in the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 2 14:02:52.630: INFO: Deleting all statefulset in ns statefulset-7263 May 2 14:02:52.633: INFO: Scaling statefulset ss to 0 May 2 14:03:02.654: INFO: Waiting for statefulset status.replicas updated to 0 May 2 14:03:02.656: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:03:02.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7263" for this suite. May 2 14:03:08.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:03:08.788: INFO: namespace statefulset-7263 deletion completed in 6.11555236s • [SLOW TEST:31.246 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:03:08.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:03:12.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5975" for this suite.
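The eviction loop in the StatefulSet test above is easiest to see from the spec side: the stateful pod requests a host port that the conflicting pod already holds, so the kubelet fails ss-0 and the controller recreates it until the other pod is removed. A sketch of such a StatefulSet, assuming k8s.io/api types; the labels, image, and host port are illustrative, not values taken from this run:

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    labels := map[string]string{"app": "ss-demo"}
    ss := appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss"},
        Spec: appsv1.StatefulSetSpec{
            ServiceName: "test",
            Replicas:    int32Ptr(1),
            Selector:    &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "nginx",
                        // HostPort is what collides with the pre-created pod
                        // on the same node; the port number is an assumption.
                        Ports: []corev1.ContainerPort{{ContainerPort: 80, HostPort: 21017}},
                    }},
                },
            },
        },
    }
    fmt.Println(ss.Spec.Template.Spec.Containers[0].Ports[0].HostPort)
}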
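The log-output check that follows it is a busybox container echoing to stdout, read back through the pod's log subresource. A sketch assuming k8s.io/api and client-go types; the pod name and message are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-logging-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "busybox",
                Image:   "busybox",
                Command: []string{"sh", "-c", "echo 'Hello World'"},
            }},
        },
    }
    fmt.Println(pod.Name)
    // With a clientset the log is read back via
    // clientset.CoreV1().Pods(ns).GetLogs(pod.Name, &corev1.PodLogOptions{}).
}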
May 2 14:03:50.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:03:51.040: INFO: namespace kubelet-test-5975 deletion completed in 38.107761497s • [SLOW TEST:42.251 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:03:51.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-65bbeb24-d766-4576-a3ed-626af3873a25 STEP: Creating a pod to test consume configMaps May 2 14:03:51.103: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e93983f-d622-478f-aabf-9b589d462f27" in namespace "projected-323" to be "success or failure" May 2 14:03:51.106: INFO: Pod "pod-projected-configmaps-0e93983f-d622-478f-aabf-9b589d462f27": Phase="Pending", Reason="", readiness=false. Elapsed: 3.153636ms May 2 14:03:53.111: INFO: Pod "pod-projected-configmaps-0e93983f-d622-478f-aabf-9b589d462f27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007449874s May 2 14:03:55.114: INFO: Pod "pod-projected-configmaps-0e93983f-d622-478f-aabf-9b589d462f27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010861969s STEP: Saw pod success May 2 14:03:55.114: INFO: Pod "pod-projected-configmaps-0e93983f-d622-478f-aabf-9b589d462f27" satisfied condition "success or failure" May 2 14:03:55.117: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-0e93983f-d622-478f-aabf-9b589d462f27 container projected-configmap-volume-test: STEP: delete the pod May 2 14:03:55.156: INFO: Waiting for pod pod-projected-configmaps-0e93983f-d622-478f-aabf-9b589d462f27 to disappear May 2 14:03:55.160: INFO: Pod pod-projected-configmaps-0e93983f-d622-478f-aabf-9b589d462f27 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:03:55.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-323" for this suite. 
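Mapping a configMap key to a different path while running as a non-root user, as above, combines a ConfigMapProjection item with a container-level RunAsUser. A sketch under that assumption; the configMap name, key, path, and UID are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
    // Key "data-1" appears inside the mount as path/to/data-2.
    vol := corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "demo-configmap"},
                        Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                    },
                }},
            },
        },
    }
    c := corev1.Container{
        Name:         "projected-configmap-volume-test",
        Image:        "busybox",
        Command:      []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
        VolumeMounts: []corev1.VolumeMount{{Name: vol.Name, MountPath: "/etc/projected-configmap-volume"}},
        // Non-root: the kubelet starts the container as this UID.
        SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(1000)},
    }
    fmt.Println(c.Command)
}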
May 2 14:04:01.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:04:01.256: INFO: namespace projected-323 deletion completed in 6.092679115s • [SLOW TEST:10.216 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:04:01.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-3a4f3aaa-1c8b-4938-9f1d-ed3f957a061b STEP: Creating a pod to test consume configMaps May 2 14:04:01.366: INFO: Waiting up to 5m0s for pod "pod-configmaps-36674bfa-1770-451c-a0a2-0c754309fee4" in namespace "configmap-4817" to be "success or failure" May 2 14:04:01.370: INFO: Pod "pod-configmaps-36674bfa-1770-451c-a0a2-0c754309fee4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.937468ms May 2 14:04:03.423: INFO: Pod "pod-configmaps-36674bfa-1770-451c-a0a2-0c754309fee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056179889s May 2 14:04:05.427: INFO: Pod "pod-configmaps-36674bfa-1770-451c-a0a2-0c754309fee4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060229527s STEP: Saw pod success May 2 14:04:05.427: INFO: Pod "pod-configmaps-36674bfa-1770-451c-a0a2-0c754309fee4" satisfied condition "success or failure" May 2 14:04:05.429: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-36674bfa-1770-451c-a0a2-0c754309fee4 container configmap-volume-test: STEP: delete the pod May 2 14:04:05.470: INFO: Waiting for pod pod-configmaps-36674bfa-1770-451c-a0a2-0c754309fee4 to disappear May 2 14:04:05.474: INFO: Pod pod-configmaps-36674bfa-1770-451c-a0a2-0c754309fee4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:04:05.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4817" for this suite. 
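The Item-mode variant above additionally pins a per-file mode on the remapped key. A sketch assuming k8s.io/api types; the 0400 mode and the names are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    // One key remapped to a new path, with the file mode set on that
    // item alone; the test pod verifies both content and mode.
    vol := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "demo-configmap"},
                Items: []corev1.KeyToPath{{
                    Key:  "data-1",
                    Path: "path/to/data-2",
                    Mode: int32Ptr(0400),
                }},
            },
        },
    }
    item := vol.ConfigMap.Items[0]
    fmt.Printf("%s -> %s (mode %o)\n", item.Key, item.Path, *item.Mode)
}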
May 2 14:04:11.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:04:11.562: INFO: namespace configmap-4817 deletion completed in 6.085350009s • [SLOW TEST:10.306 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:04:11.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 2 14:04:11.671: INFO: Waiting up to 5m0s for pod "pod-76c6140c-af67-47e8-90ec-1615253c26ae" in namespace "emptydir-1249" to be "success or failure" May 2 14:04:11.679: INFO: Pod "pod-76c6140c-af67-47e8-90ec-1615253c26ae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.716544ms May 2 14:04:13.684: INFO: Pod "pod-76c6140c-af67-47e8-90ec-1615253c26ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012817737s May 2 14:04:15.688: INFO: Pod "pod-76c6140c-af67-47e8-90ec-1615253c26ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017401502s STEP: Saw pod success May 2 14:04:15.688: INFO: Pod "pod-76c6140c-af67-47e8-90ec-1615253c26ae" satisfied condition "success or failure" May 2 14:04:15.691: INFO: Trying to get logs from node iruya-worker pod pod-76c6140c-af67-47e8-90ec-1615253c26ae container test-container: STEP: delete the pod May 2 14:04:15.743: INFO: Waiting for pod pod-76c6140c-af67-47e8-90ec-1615253c26ae to disappear May 2 14:04:15.751: INFO: Pod pod-76c6140c-af67-47e8-90ec-1615253c26ae no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:04:15.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1249" for this suite. 
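What "(non-root,0644,tmpfs)" means in spec terms: an emptyDir backed by memory, into which a container running as a non-root UID writes a 0644 file. A sketch of the volume, with illustrative names:

package sketch

import corev1 "k8s.io/api/core/v1"

// tmpfsEmptyDirVolume is an emptyDir backed by tmpfs (Medium "Memory").
// The test container creates a mode-0644 file inside it as a non-root user
// and verifies the permissions and content it reads back.
func tmpfsEmptyDirVolume() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
}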
May 2 14:04:21.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:04:21.882: INFO: namespace emptydir-1249 deletion completed in 6.128754694s • [SLOW TEST:10.319 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:04:21.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-6896 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6896 STEP: Deleting pre-stop pod May 2 14:04:35.013: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:04:35.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6896" for this suite. 
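The "prestop": 1 entry in the server's report above is produced by a preStop hook on the tester pod: when the pod is deleted, the kubelet runs the hook before stopping the container. A sketch of such a container, with an assumed hook command and server URL; corev1.Handler is the v1.15-era name of what was later renamed LifecycleHandler.

package sketch

import corev1 "k8s.io/api/core/v1"

// preStopContainer attaches an exec preStop hook that phones home to the
// server pod before the container is terminated.
func preStopContainer() corev1.Container {
	return corev1.Container{
		Name:  "tester",
		Image: "busybox", // illustrative image
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.Handler{
				Exec: &corev1.ExecAction{
					// Illustrative: report to the server pod so it records that the hook ran.
					Command: []string{"wget", "-qO-", "http://server:8080/write?prestop=1"},
				},
			},
		},
	}
}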
May 2 14:05:21.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:05:21.162: INFO: namespace prestop-6896 deletion completed in 46.13122415s • [SLOW TEST:59.279 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:05:21.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0502 14:06:02.140371 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 2 14:06:02.140: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:06:02.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-892" for this suite. 
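The "delete options say so" part is a propagation policy of Orphan on the delete call: the replication controller is removed, the garbage collector clears the pods' ownerReferences instead of deleting them, and the 30-second wait above confirms the pods survive. A sketch against the v1.15-era client-go signatures, where Delete takes the options struct directly:

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCOrphaningPods deletes a replication controller while orphaning
// its pods, which is exactly what this GC test exercises.
func deleteRCOrphaningPods(client kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationOrphan
	return client.CoreV1().ReplicationControllers(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}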
May 2 14:06:12.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:06:12.230: INFO: namespace gc-892 deletion completed in 10.08663909s • [SLOW TEST:51.068 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:06:12.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-4f3e93ec-e6b8-4e76-9ad8-5fd1c585ae73 in namespace container-probe-5904 May 2 14:06:16.771: INFO: Started pod liveness-4f3e93ec-e6b8-4e76-9ad8-5fd1c585ae73 in namespace container-probe-5904 STEP: checking the pod's current state and verifying that restartCount is present May 2 14:06:16.774: INFO: Initial restart count of pod liveness-4f3e93ec-e6b8-4e76-9ad8-5fd1c585ae73 is 0 May 2 14:06:36.853: INFO: Restart count of pod container-probe-5904/liveness-4f3e93ec-e6b8-4e76-9ad8-5fd1c585ae73 is now 1 (20.079157409s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:06:36.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5904" for this suite. 
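The restart observed above (count 0 to 1 after ~20s) is driven by an HTTP liveness probe against /healthz; the e2e liveness image deliberately starts failing that endpoint after a while. A sketch of such a probe, with assumed port and timing values:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthzLivenessProbe probes GET /healthz; once the endpoint starts
// returning errors, the kubelet kills and restarts the container, which is
// what bumps the pod's restartCount.
func healthzLivenessProbe() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{ // embedded handler field in the v1.15 API
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",
				Port: intstr.FromInt(8080), // illustrative port
			},
		},
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}
}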
May 2 14:06:42.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:06:43.041: INFO: namespace container-probe-5904 deletion completed in 6.118672217s • [SLOW TEST:30.810 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:06:43.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-9235/configmap-test-8bbe2dd5-a518-4cda-a988-b96f1dc4f03e STEP: Creating a pod to test consume configMaps May 2 14:06:43.146: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a0b2e1a-68e6-4f09-9e70-9707274dd266" in namespace "configmap-9235" to be "success or failure" May 2 14:06:43.169: INFO: Pod "pod-configmaps-1a0b2e1a-68e6-4f09-9e70-9707274dd266": Phase="Pending", Reason="", readiness=false. Elapsed: 23.199415ms May 2 14:06:45.194: INFO: Pod "pod-configmaps-1a0b2e1a-68e6-4f09-9e70-9707274dd266": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047650579s May 2 14:06:47.198: INFO: Pod "pod-configmaps-1a0b2e1a-68e6-4f09-9e70-9707274dd266": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051948251s STEP: Saw pod success May 2 14:06:47.198: INFO: Pod "pod-configmaps-1a0b2e1a-68e6-4f09-9e70-9707274dd266" satisfied condition "success or failure" May 2 14:06:47.202: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-1a0b2e1a-68e6-4f09-9e70-9707274dd266 container env-test: STEP: delete the pod May 2 14:06:47.243: INFO: Waiting for pod pod-configmaps-1a0b2e1a-68e6-4f09-9e70-9707274dd266 to disappear May 2 14:06:47.247: INFO: Pod pod-configmaps-1a0b2e1a-68e6-4f09-9e70-9707274dd266 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:06:47.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9235" for this suite. 
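"Consumable via the environment" means the pod never mounts the configMap; one key is injected as an environment variable and the test container echoes it back. A sketch with assumed names:

package sketch

import corev1 "k8s.io/api/core/v1"

// configMapEnvVar wires a single configMap key into the container
// environment via valueFrom/configMapKeyRef.
func configMapEnvVar() corev1.EnvVar {
	return corev1.EnvVar{
		Name: "CONFIG_DATA_1", // illustrative variable name
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
				Key:                  "data-1",
			},
		},
	}
}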
May 2 14:06:53.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:06:53.369: INFO: namespace configmap-9235 deletion completed in 6.119079649s • [SLOW TEST:10.327 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:06:53.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-130cd234-4acc-44b1-a337-9d61d30614e0 STEP: Creating a pod to test consume secrets May 2 14:06:53.525: INFO: Waiting up to 5m0s for pod "pod-secrets-c6747a7a-c01f-4942-842b-c90ca84950f4" in namespace "secrets-8607" to be "success or failure" May 2 14:06:53.569: INFO: Pod "pod-secrets-c6747a7a-c01f-4942-842b-c90ca84950f4": Phase="Pending", Reason="", readiness=false. Elapsed: 43.682091ms May 2 14:06:55.573: INFO: Pod "pod-secrets-c6747a7a-c01f-4942-842b-c90ca84950f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04762052s May 2 14:06:57.581: INFO: Pod "pod-secrets-c6747a7a-c01f-4942-842b-c90ca84950f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056458028s STEP: Saw pod success May 2 14:06:57.581: INFO: Pod "pod-secrets-c6747a7a-c01f-4942-842b-c90ca84950f4" satisfied condition "success or failure" May 2 14:06:57.583: INFO: Trying to get logs from node iruya-worker pod pod-secrets-c6747a7a-c01f-4942-842b-c90ca84950f4 container secret-volume-test: STEP: delete the pod May 2 14:06:57.627: INFO: Waiting for pod pod-secrets-c6747a7a-c01f-4942-842b-c90ca84950f4 to disappear May 2 14:06:57.731: INFO: Pod pod-secrets-c6747a7a-c01f-4942-842b-c90ca84950f4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:06:57.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8607" for this suite. May 2 14:07:03.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:07:03.958: INFO: namespace secrets-8607 deletion completed in 6.22269196s STEP: Destroying namespace "secret-namespace-3981" for this suite. 
May 2 14:07:10.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:07:10.076: INFO: namespace secret-namespace-3981 deletion completed in 6.118214289s • [SLOW TEST:16.707 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:07:10.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9274.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9274.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 2 14:07:16.251: INFO: DNS probes using dns-9274/dns-test-9088cce3-7225-4a1b-824a-e988183a0e11 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:07:16.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9274" for this suite. 
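The wheezy/jessie probe loops above reduce to one requirement: from inside any pod, cluster DNS must answer for the kubernetes.default service (and for the pod's own A record) over both UDP and TCP. As a rough in-cluster equivalent of one dig invocation, a minimal Go check:

package main

import (
	"fmt"
	"net"
)

// main performs the core lookup the dig probes script against cluster DNS.
func main() {
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("cluster DNS lookup failed:", err)
		return
	}
	fmt.Println("kubernetes.default resolves to:", addrs)
}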
May 2 14:07:22.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:07:22.524: INFO: namespace dns-9274 deletion completed in 6.108434432s • [SLOW TEST:12.448 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:07:22.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod May 2 14:07:26.600: INFO: Pod pod-hostip-0b1d5266-cc50-473d-ae69-288c6e3e8234 has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:07:26.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-106" for this suite. May 2 14:07:48.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:07:48.967: INFO: namespace pods-106 deletion completed in 22.362836116s • [SLOW TEST:26.442 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:07:48.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-6rw8 STEP: Creating a pod to test atomic-volume-subpath May 2 14:07:50.194: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6rw8" in namespace "subpath-5793" to be "success or failure" May 2 14:07:50.223: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.127191ms May 2 14:07:52.227: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033335866s May 2 14:07:54.231: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Running", Reason="", readiness=true. Elapsed: 4.036938236s May 2 14:07:56.235: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Running", Reason="", readiness=true. Elapsed: 6.041191273s May 2 14:07:58.239: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Running", Reason="", readiness=true. Elapsed: 8.044746712s May 2 14:08:00.242: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Running", Reason="", readiness=true. Elapsed: 10.048473003s May 2 14:08:02.247: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Running", Reason="", readiness=true. Elapsed: 12.052766536s May 2 14:08:04.288: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Running", Reason="", readiness=true. Elapsed: 14.094322365s May 2 14:08:06.293: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Running", Reason="", readiness=true. Elapsed: 16.099266024s May 2 14:08:08.298: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Running", Reason="", readiness=true. Elapsed: 18.103529072s May 2 14:08:10.302: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Running", Reason="", readiness=true. Elapsed: 20.107772617s May 2 14:08:12.306: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Running", Reason="", readiness=true. Elapsed: 22.111580484s May 2 14:08:14.310: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Running", Reason="", readiness=true. Elapsed: 24.115971664s May 2 14:08:16.313: INFO: Pod "pod-subpath-test-projected-6rw8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.119290689s STEP: Saw pod success May 2 14:08:16.313: INFO: Pod "pod-subpath-test-projected-6rw8" satisfied condition "success or failure" May 2 14:08:16.316: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-6rw8 container test-container-subpath-projected-6rw8: STEP: delete the pod May 2 14:08:16.373: INFO: Waiting for pod pod-subpath-test-projected-6rw8 to disappear May 2 14:08:16.386: INFO: Pod pod-subpath-test-projected-6rw8 no longer exists STEP: Deleting pod pod-subpath-test-projected-6rw8 May 2 14:08:16.386: INFO: Deleting pod "pod-subpath-test-projected-6rw8" in namespace "subpath-5793" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:08:16.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5793" for this suite. 
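The subpath test mounts a single entry from inside a projected volume rather than the whole volume, then keeps the pod Running for roughly 25 seconds (the long Running phase above) while the atomic writer updates the volume underneath it. The distinctive piece is the SubPath field on the mount; a sketch with assumed names:

package sketch

import corev1 "k8s.io/api/core/v1"

// subPathMount exposes only one entry of the backing volume at MountPath.
func subPathMount() corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:      "test-volume",
		MountPath: "/test-volume/sub", // illustrative container path
		SubPath:   "sub",              // illustrative path inside the volume
	}
}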
May 2 14:08:22.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:08:22.618: INFO: namespace subpath-5793 deletion completed in 6.222717615s • [SLOW TEST:33.650 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:08:22.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-7deb59c8-3422-4ba7-bb08-949cf34c79fb STEP: Creating a pod to test consume secrets May 2 14:08:22.695: INFO: Waiting up to 5m0s for pod "pod-secrets-38d78cd6-4801-4b10-856b-a6324fce4c4a" in namespace "secrets-3945" to be "success or failure" May 2 14:08:22.698: INFO: Pod "pod-secrets-38d78cd6-4801-4b10-856b-a6324fce4c4a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.266403ms May 2 14:08:24.702: INFO: Pod "pod-secrets-38d78cd6-4801-4b10-856b-a6324fce4c4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00727203s May 2 14:08:26.706: INFO: Pod "pod-secrets-38d78cd6-4801-4b10-856b-a6324fce4c4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011296983s STEP: Saw pod success May 2 14:08:26.706: INFO: Pod "pod-secrets-38d78cd6-4801-4b10-856b-a6324fce4c4a" satisfied condition "success or failure" May 2 14:08:26.710: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-38d78cd6-4801-4b10-856b-a6324fce4c4a container secret-volume-test: STEP: delete the pod May 2 14:08:26.752: INFO: Waiting for pod pod-secrets-38d78cd6-4801-4b10-856b-a6324fce4c4a to disappear May 2 14:08:26.779: INFO: Pod pod-secrets-38d78cd6-4801-4b10-856b-a6324fce4c4a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:08:26.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3945" for this suite. 
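For the defaultMode variant, every file projected from the secret is created with a volume-wide mode instead of the usual 0644 default. A sketch with an assumed 0400 mode:

package sketch

import corev1 "k8s.io/api/core/v1"

// secretVolumeWithDefaultMode projects all keys of a secret with a fixed
// file mode; the test container stats a projected file to verify it.
func secretVolumeWithDefaultMode() corev1.Volume {
	mode := int32(0400) // illustrative mode
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  "secret-test",
				DefaultMode: &mode,
			},
		},
	}
}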
May 2 14:08:32.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:08:32.880: INFO: namespace secrets-3945 deletion completed in 6.096077329s • [SLOW TEST:10.262 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:08:32.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-b8b3d013-ff98-4a36-9660-d333accbd21f STEP: Creating configMap with name cm-test-opt-upd-c3520e18-93b8-46b5-b1ca-45f4fd1024c4 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b8b3d013-ff98-4a36-9660-d333accbd21f STEP: Updating configmap cm-test-opt-upd-c3520e18-93b8-46b5-b1ca-45f4fd1024c4 STEP: Creating configMap with name cm-test-opt-create-33da40f0-4dc7-4a34-8c34-6dfa935fe6a2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:08:43.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6806" for this suite. 
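The create/delete/update choreography above works because the projected configMaps are marked Optional: a missing configMap does not break the volume, and the kubelet refreshes the projected files as the configMaps change, which is what "waiting to observe update in volume" polls for. A sketch of one such projection:

package sketch

import corev1 "k8s.io/api/core/v1"

// optionalConfigMapProjection tolerates the named configMap being absent;
// files appear, change, or disappear as the configMap is created, updated,
// or deleted.
func optionalConfigMapProjection(name string) corev1.VolumeProjection {
	optional := true
	return corev1.VolumeProjection{
		ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: name},
			Optional:             &optional,
		},
	}
}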
May 2 14:09:05.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:09:05.216: INFO: namespace projected-6806 deletion completed in 22.08496529s • [SLOW TEST:32.335 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:09:05.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 2 14:09:05.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8777' May 2 14:09:07.939: INFO: stderr: "" May 2 14:09:07.939: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 May 2 14:09:07.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8777' May 2 14:09:22.168: INFO: stderr: "" May 2 14:09:22.168: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:09:22.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8777" for this suite. 
May 2 14:09:28.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:09:28.276: INFO: namespace kubectl-8777 deletion completed in 6.093797075s • [SLOW TEST:23.059 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:09:28.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 2 14:09:32.953: INFO: Successfully updated pod "pod-update-b06b2bd1-d94b-4fd2-af4e-35370044da8e" STEP: verifying the updated pod is in kubernetes May 2 14:09:33.005: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:09:33.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5243" for this suite. 
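The "updating the pod" step is a read-modify-write against the API server. A sketch using the v1.15-era client-go signatures (Get and Update take the name/object directly); the label being changed is an illustrative assumption, and conflict retries are omitted for brevity:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updatePodLabel fetches the pod, mutates a label, and writes it back; the
// test then re-reads the pod to verify the update stuck.
func updatePodLabel(client kubernetes.Interface, ns, name string) (*corev1.Pod, error) {
	pod, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated" // illustrative change
	return client.CoreV1().Pods(ns).Update(pod)
}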
May 2 14:09:49.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:09:49.100: INFO: namespace pods-5243 deletion completed in 16.090524208s • [SLOW TEST:20.823 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:09:49.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-e49ecb2e-7cc5-4224-b74a-7fc21f466b44 STEP: Creating a pod to test consume configMaps May 2 14:09:49.180: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1ae60507-c12b-4dda-855e-d12fa1476d5f" in namespace "projected-5854" to be "success or failure" May 2 14:09:49.193: INFO: Pod "pod-projected-configmaps-1ae60507-c12b-4dda-855e-d12fa1476d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.833861ms May 2 14:09:51.254: INFO: Pod "pod-projected-configmaps-1ae60507-c12b-4dda-855e-d12fa1476d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074030848s May 2 14:09:53.258: INFO: Pod "pod-projected-configmaps-1ae60507-c12b-4dda-855e-d12fa1476d5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078190255s STEP: Saw pod success May 2 14:09:53.259: INFO: Pod "pod-projected-configmaps-1ae60507-c12b-4dda-855e-d12fa1476d5f" satisfied condition "success or failure" May 2 14:09:53.261: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-1ae60507-c12b-4dda-855e-d12fa1476d5f container projected-configmap-volume-test: STEP: delete the pod May 2 14:09:53.283: INFO: Waiting for pod pod-projected-configmaps-1ae60507-c12b-4dda-855e-d12fa1476d5f to disappear May 2 14:09:53.302: INFO: Pod pod-projected-configmaps-1ae60507-c12b-4dda-855e-d12fa1476d5f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:09:53.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5854" for this suite. 
May 2 14:09:59.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:09:59.394: INFO: namespace projected-5854 deletion completed in 6.088230549s • [SLOW TEST:10.294 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:09:59.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 14:09:59.507: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 2 14:09:59.520: INFO: Pod name sample-pod: Found 0 pods out of 1 May 2 14:10:04.525: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 2 14:10:04.525: INFO: Creating deployment "test-rolling-update-deployment" May 2 14:10:04.529: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 2 14:10:04.541: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 2 14:10:06.549: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 2 14:10:06.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724025404, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724025404, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724025404, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724025404, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 14:10:08.555: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 2 14:10:08.563: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-402,SelfLink:/apis/apps/v1/namespaces/deployment-402/deployments/test-rolling-update-deployment,UID:8635b145-dfca-4bbe-a7e3-f60f78bdfab5,ResourceVersion:8634333,Generation:1,CreationTimestamp:2020-05-02 14:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-02 14:10:04 +0000 UTC 2020-05-02 14:10:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-02 14:10:07 +0000 UTC 2020-05-02 14:10:04 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 2 14:10:08.566: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-402,SelfLink:/apis/apps/v1/namespaces/deployment-402/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:901e31ea-63c7-4dba-97d5-341899e0331b,ResourceVersion:8634321,Generation:1,CreationTimestamp:2020-05-02 14:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8635b145-dfca-4bbe-a7e3-f60f78bdfab5 0xc002f964c7 0xc002f964c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 2 14:10:08.566: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 2 14:10:08.566: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-402,SelfLink:/apis/apps/v1/namespaces/deployment-402/replicasets/test-rolling-update-controller,UID:f70586e6-539f-436f-b236-d0783ce2a4c2,ResourceVersion:8634330,Generation:2,CreationTimestamp:2020-05-02 14:09:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 
2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8635b145-dfca-4bbe-a7e3-f60f78bdfab5 0xc002f963cf 0xc002f963e0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 2 14:10:08.569: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-j8825" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-j8825,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-402,SelfLink:/api/v1/namespaces/deployment-402/pods/test-rolling-update-deployment-79f6b9d75c-j8825,UID:d455d135-ae75-42ef-af49-9919256bfb29,ResourceVersion:8634320,Generation:0,CreationTimestamp:2020-05-02 14:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 901e31ea-63c7-4dba-97d5-341899e0331b 0xc002f96df7 0xc002f96df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wlhv2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wlhv2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-wlhv2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f96e70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f96e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.139,StartTime:2020-05-02 14:10:04 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-02 14:10:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://4e812fe81b54d9d5844c0847bd53d306ad0a4e59f0aed8b78163a3c0c9a5bdae}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:10:08.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-402" for this suite. 
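The dumps above show the strategy under test: RollingUpdate with the default 25% maxUnavailable / 25% maxSurge bounds, which is why the old "test-rolling-update-controller" replica set is scaled to 0 only as the new "test-rolling-update-deployment-79f6b9d75c" replica set becomes available. A sketch of that strategy in the apps/v1 types:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rollingUpdateStrategy reproduces the 25%/25% bounds from the dump: at
// most a quarter of desired pods may be unavailable, and at most a quarter
// extra may exist, while old pods are replaced by new ones.
func rollingUpdateStrategy() appsv1.DeploymentStrategy {
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	return appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			MaxUnavailable: &maxUnavailable,
			MaxSurge:       &maxSurge,
		},
	}
}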
May 2 14:10:14.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:10:14.788: INFO: namespace deployment-402 deletion completed in 6.215410883s • [SLOW TEST:15.394 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:10:14.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-844a2b0d-9ec5-4702-aba3-a3fa8cc31a8d STEP: Creating a pod to test consume secrets May 2 14:10:14.863: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4f7ce899-0a9a-4b54-b0ce-f5b994e1001c" in namespace "projected-3856" to be "success or failure" May 2 14:10:14.912: INFO: Pod "pod-projected-secrets-4f7ce899-0a9a-4b54-b0ce-f5b994e1001c": Phase="Pending", Reason="", readiness=false. Elapsed: 48.979727ms May 2 14:10:16.916: INFO: Pod "pod-projected-secrets-4f7ce899-0a9a-4b54-b0ce-f5b994e1001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052996843s May 2 14:10:18.921: INFO: Pod "pod-projected-secrets-4f7ce899-0a9a-4b54-b0ce-f5b994e1001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057481117s STEP: Saw pod success May 2 14:10:18.921: INFO: Pod "pod-projected-secrets-4f7ce899-0a9a-4b54-b0ce-f5b994e1001c" satisfied condition "success or failure" May 2 14:10:18.924: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-4f7ce899-0a9a-4b54-b0ce-f5b994e1001c container projected-secret-volume-test: STEP: delete the pod May 2 14:10:18.947: INFO: Waiting for pod pod-projected-secrets-4f7ce899-0a9a-4b54-b0ce-f5b994e1001c to disappear May 2 14:10:18.951: INFO: Pod pod-projected-secrets-4f7ce899-0a9a-4b54-b0ce-f5b994e1001c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:10:18.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3856" for this suite. 
May 2 14:10:25.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:10:25.151: INFO: namespace projected-3856 deletion completed in 6.196648621s • [SLOW TEST:10.362 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:10:25.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 2 14:10:25.239: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 2 14:10:25.270: INFO: Waiting for terminating namespaces to be deleted... May 2 14:10:25.272: INFO: Logging pods the kubelet thinks are on node iruya-worker before test May 2 14:10:25.277: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 2 14:10:25.277: INFO: Container kube-proxy ready: true, restart count 0 May 2 14:10:25.277: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 2 14:10:25.277: INFO: Container kindnet-cni ready: true, restart count 0 May 2 14:10:25.277: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test May 2 14:10:25.283: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded) May 2 14:10:25.283: INFO: Container kube-proxy ready: true, restart count 0 May 2 14:10:25.283: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded) May 2 14:10:25.283: INFO: Container kindnet-cni ready: true, restart count 0 May 2 14:10:25.283: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded) May 2 14:10:25.283: INFO: Container coredns ready: true, restart count 0 May 2 14:10:25.283: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded) May 2 14:10:25.283: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 May 2 14:10:25.362: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 May 2 14:10:25.362: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 May 2 14:10:25.362: INFO: Pod kindnet-gwz5g requesting
resource cpu=100m on Node iruya-worker May 2 14:10:25.362: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 May 2 14:10:25.362: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker May 2 14:10:25.362: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires an unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-5aeb5477-d286-4c94-9929-45d272db0d7c.160b3b6b402355dc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6214/filler-pod-5aeb5477-d286-4c94-9929-45d272db0d7c to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-5aeb5477-d286-4c94-9929-45d272db0d7c.160b3b6b91c7b106], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5aeb5477-d286-4c94-9929-45d272db0d7c.160b3b6bf59b56f4], Reason = [Created], Message = [Created container filler-pod-5aeb5477-d286-4c94-9929-45d272db0d7c] STEP: Considering event: Type = [Normal], Name = [filler-pod-5aeb5477-d286-4c94-9929-45d272db0d7c.160b3b6c0c29fc97], Reason = [Started], Message = [Started container filler-pod-5aeb5477-d286-4c94-9929-45d272db0d7c] STEP: Considering event: Type = [Normal], Name = [filler-pod-e888172b-5982-4c56-abfd-76da2965581a.160b3b6b4324f366], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6214/filler-pod-e888172b-5982-4c56-abfd-76da2965581a to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-e888172b-5982-4c56-abfd-76da2965581a.160b3b6bdb62c127], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e888172b-5982-4c56-abfd-76da2965581a.160b3b6c17f81068], Reason = [Created], Message = [Created container filler-pod-e888172b-5982-4c56-abfd-76da2965581a] STEP: Considering event: Type = [Normal], Name = [filler-pod-e888172b-5982-4c56-abfd-76da2965581a.160b3b6c2a1c9df3], Reason = [Started], Message = [Started container filler-pod-e888172b-5982-4c56-abfd-76da2965581a] STEP: Considering event: Type = [Warning], Name = [additional-pod.160b3b6ca9d0ba54], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:10:32.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6214" for this suite.
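[Editor's note: the FailedScheduling event above comes down to a per-node CPU fit check: sum the existing pods' requests and see whether the candidate pod's request still fits under the node's allocatable CPU. A back-of-the-envelope sketch of that predicate with made-up millicore numbers; the real check lives in the scheduler, not here.]

package main

import "fmt"

// fitsCPU reports whether a pod requesting podMilli millicores fits on a
// node with the given allocatable CPU and existing per-pod requests.
func fitsCPU(allocatableMilli int64, requestedMilli []int64, podMilli int64) bool {
	var used int64
	for _, r := range requestedMilli {
		used += r
	}
	return podMilli <= allocatableMilli-used
}

func main() {
	// Two workers filled by the test's "filler" pods, plus the kube-system
	// pods logged before the test (e.g. kindnet at 100m, kube-proxy at 0m).
	// All numbers below are hypothetical, chosen to mirror the event.
	workers := map[string][]int64{
		"iruya-worker":  {100, 0, 15700},
		"iruya-worker2": {100, 100, 100, 0, 15500},
	}
	const allocatable = 16000 // 16 CPUs in millicores
	const additionalPod = 600 // the pod that must fail to schedule

	for node, reqs := range workers {
		fmt.Printf("%s fits additional pod: %v\n", node, fitsCPU(allocatable, reqs, additionalPod))
	}
	// Both print false, mirroring "0/3 nodes are available: ... 2 Insufficient cpu."
	// (the third node, the control plane, is excluded by its taint).
}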
May 2 14:10:38.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:10:38.632: INFO: namespace sched-pred-6214 deletion completed in 6.083993143s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:13.481 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:10:38.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 14:10:38.979: INFO: Creating deployment "nginx-deployment" May 2 14:10:39.061: INFO: Waiting for observed generation 1 May 2 14:10:41.447: INFO: Waiting for all required pods to come up May 2 14:10:41.640: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 2 14:10:51.702: INFO: Waiting for deployment "nginx-deployment" to complete May 2 14:10:51.708: INFO: Updating deployment "nginx-deployment" with a non-existent image May 2 14:10:51.713: INFO: Updating deployment nginx-deployment May 2 14:10:51.713: INFO: Waiting for observed generation 2 May 2 14:10:53.723: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 2 14:10:53.726: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 2 14:10:53.727: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have the desired number of replicas May 2 14:10:53.735: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 2 14:10:53.735: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 2 14:10:53.738: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have the desired number of replicas May 2 14:10:53.744: INFO: Verifying that deployment "nginx-deployment" has the minimum required number of available replicas May 2 14:10:53.744: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 2 14:10:53.749: INFO: Updating deployment nginx-deployment May 2 14:10:53.750: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have the desired number of replicas May 2 14:10:53.899: INFO: Verifying that the first rollout's replicaset has .spec.replicas = 20 May 2 14:10:53.930: INFO: Verifying that the second rollout's replicaset has .spec.replicas = 13
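[Editor's note: the jump from (8, 5) to (20, 13) replicas above is the proportional scaling under test: with the rollout stuck on the bad image, the scale from 10 to 30 (plus maxSurge 3, so 33 total in flight, per the strategy and annotations in the dumps below) is split across both ReplicaSets in proportion to their current sizes. A rough sketch of that arithmetic, not the deployment controller's exact code; its rounding and leftover handling differ in detail.]

package main

import "fmt"

func main() {
	const (
		desired  = 30 // .spec.replicas after the scale-up
		maxSurge = 3  // from the RollingUpdate strategy
	)
	// ReplicaSet sizes while the rollout is stuck, largest first.
	names := []string{"nginx-deployment-7b8c6f4498", "nginx-deployment-55fb7cb77f"}
	current := map[string]int{names[0]: 8, names[1]: 5}

	total := 0
	for _, n := range current {
		total += n
	}
	toAdd := desired + maxSurge - total // 33 allowed in flight, 20 to hand out

	remaining := toAdd
	for _, name := range names {
		share := (current[name]*toAdd + total/2) / total // proportional, rounded
		if share > remaining {
			share = remaining // never exceed what is left to distribute
		}
		remaining -= share
		fmt.Printf("%s: %d -> %d\n", name, current[name], current[name]+share)
	}
	// Prints 8 -> 20 and 5 -> 13, matching the .spec.replicas values in the
	// ReplicaSet dumps that follow.
}

[AfterEach] [sig-apps] Deployment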
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 2 14:10:54.123: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8402,SelfLink:/apis/apps/v1/namespaces/deployment-8402/deployments/nginx-deployment,UID:894bc2e6-68e3-4809-ad15-024a55e28f04,ResourceVersion:8634711,Generation:3,CreationTimestamp:2020-05-02 14:10:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-02 14:10:52 +0000 UTC 2020-05-02 14:10:39 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-05-02 14:10:53 +0000 UTC 2020-05-02 14:10:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 2 14:10:54.258: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8402,SelfLink:/apis/apps/v1/namespaces/deployment-8402/replicasets/nginx-deployment-55fb7cb77f,UID:6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e,ResourceVersion:8634752,Generation:3,CreationTimestamp:2020-05-02 14:10:51 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 894bc2e6-68e3-4809-ad15-024a55e28f04 0xc002cc3667 0xc002cc3668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 2 14:10:54.258: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 2 14:10:54.258: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8402,SelfLink:/apis/apps/v1/namespaces/deployment-8402/replicasets/nginx-deployment-7b8c6f4498,UID:2f9ae3e5-e3cc-4215-a9e5-d8e56922c914,ResourceVersion:8634746,Generation:3,CreationTimestamp:2020-05-02 14:10:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 894bc2e6-68e3-4809-ad15-024a55e28f04 0xc002cc3737 0xc002cc3738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 2 14:10:54.355: INFO: Pod "nginx-deployment-55fb7cb77f-4sgv6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4sgv6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-55fb7cb77f-4sgv6,UID:c0529509-cca5-4865-a1bd-3ad8eb2bdc1a,ResourceVersion:8634656,Generation:0,CreationTimestamp:2020-05-02 14:10:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e 0xc00249f977 0xc00249f978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00249f9f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00249fa10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-02 14:10:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.355: INFO: Pod "nginx-deployment-55fb7cb77f-6s6z7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6s6z7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-55fb7cb77f-6s6z7,UID:54aa3f27-375b-4e25-9d70-83bcbb793616,ResourceVersion:8634743,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e 0xc00249fae0 0xc00249fae1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00249fb60} {node.kubernetes.io/unreachable Exists NoExecute 0xc00249fb80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.356: INFO: Pod "nginx-deployment-55fb7cb77f-6zsxm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6zsxm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-55fb7cb77f-6zsxm,UID:486a40aa-9350-4c38-8f5c-4feb1a3eaed1,ResourceVersion:8634759,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e 0xc00249fc07 0xc00249fc08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00249fcb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00249fcd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-02 14:10:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.356: INFO: Pod "nginx-deployment-55fb7cb77f-bjc7j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bjc7j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-55fb7cb77f-bjc7j,UID:79dfc8e0-4171-4b1f-abc5-5ec13e05295a,ResourceVersion:8634660,Generation:0,CreationTimestamp:2020-05-02 14:10:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e 0xc00249fda0 0xc00249fda1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00249fe20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00249fe40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:51 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-02 14:10:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.356: INFO: Pod "nginx-deployment-55fb7cb77f-djcbp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-djcbp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-55fb7cb77f-djcbp,UID:ac8cecd3-01c4-4ab6-83c1-8100ca532f7f,ResourceVersion:8634730,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e 0xc00249ff10 0xc00249ff11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00249ff90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00249ffb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.356: INFO: Pod "nginx-deployment-55fb7cb77f-gttxl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gttxl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-55fb7cb77f-gttxl,UID:076a11e9-527c-453a-9109-92c87d5ad264,ResourceVersion:8634689,Generation:0,CreationTimestamp:2020-05-02 14:10:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-55fb7cb77f 6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e 0xc001d6c037 0xc001d6c038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6c0b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6c0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-02 14:10:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.356: INFO: Pod "nginx-deployment-55fb7cb77f-jrmqd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jrmqd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-55fb7cb77f-jrmqd,UID:27ad478c-caf1-4546-913b-0320f6bd7167,ResourceVersion:8634690,Generation:0,CreationTimestamp:2020-05-02 14:10:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e 0xc001d6c1a0 0xc001d6c1a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6c220} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6c240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-02 14:10:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.356: INFO: Pod "nginx-deployment-55fb7cb77f-np8cr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-np8cr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-55fb7cb77f-np8cr,UID:c5657023-e408-4590-99fe-39527fd41451,ResourceVersion:8634671,Generation:0,CreationTimestamp:2020-05-02 14:10:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e 0xc001d6c310 0xc001d6c311}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6c390} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6c3b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-02 14:10:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.356: INFO: Pod "nginx-deployment-55fb7cb77f-qdvz2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qdvz2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-55fb7cb77f-qdvz2,UID:219f85bb-ccf6-4e39-92fc-915f1f0c7000,ResourceVersion:8634748,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e 0xc001d6c480 0xc001d6c481}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6c500} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6c520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.356: INFO: Pod "nginx-deployment-55fb7cb77f-r9qv4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r9qv4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-55fb7cb77f-r9qv4,UID:5e2d5c89-ca46-41f9-9891-7486cebf1e05,ResourceVersion:8634740,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e 0xc001d6c5a7 0xc001d6c5a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6c620} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6c640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.356: INFO: Pod "nginx-deployment-55fb7cb77f-sg5kb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sg5kb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-55fb7cb77f-sg5kb,UID:22897987-4193-4a28-9a86-9fcd7dc1884f,ResourceVersion:8634731,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e 0xc001d6c6c7 0xc001d6c6c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6c740} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6c760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.356: INFO: Pod "nginx-deployment-55fb7cb77f-tf67w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tf67w,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-55fb7cb77f-tf67w,UID:11115962-6b11-4d6b-9237-4408760ea955,ResourceVersion:8634741,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e 0xc001d6c7e7 
0xc001d6c7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6c980} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6c9a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.357: INFO: Pod "nginx-deployment-55fb7cb77f-vz94n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vz94n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-55fb7cb77f-vz94n,UID:cec9088f-a93a-4de1-9204-27d8044a75f4,ResourceVersion:8634742,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 6460ba6e-93f8-4d2f-bb5d-0abe5c6d217e 0xc001d6ca67 0xc001d6ca68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6cae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6cb00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.357: INFO: Pod "nginx-deployment-7b8c6f4498-5q584" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5q584,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-5q584,UID:4b12d091-0862-41d9-8feb-b5ad7481f252,ResourceVersion:8634573,Generation:0,CreationTimestamp:2020-05-02 14:10:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6cb87 0xc001d6cb88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6cc00} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001d6cc20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.141,StartTime:2020-05-02 14:10:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 14:10:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ac9fb76cd755ec94c7a86eca82cb40c04ce66c36b2ecacc891f9109725c6355e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.357: INFO: Pod "nginx-deployment-7b8c6f4498-7dgcv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7dgcv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-7dgcv,UID:1cc8e8b5-8333-46e9-b32c-82a99e5299cc,ResourceVersion:8634737,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6ccf7 0xc001d6ccf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6cd70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6cd90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.357: INFO: Pod "nginx-deployment-7b8c6f4498-89cc5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-89cc5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-89cc5,UID:acb55a73-2490-4fc9-825b-f774ca45eff5,ResourceVersion:8634718,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6ce47 0xc001d6ce48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6cf00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6cf20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.357: INFO: Pod "nginx-deployment-7b8c6f4498-bsg89" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bsg89,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-bsg89,UID:0fac11bc-b4f3-4092-9e67-b804550401b4,ResourceVersion:8634735,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6cff7 
0xc001d6cff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6d0d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6d0f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.357: INFO: Pod "nginx-deployment-7b8c6f4498-dqgm5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dqgm5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-dqgm5,UID:e978f019-479c-4a34-a655-1d6b15c5947f,ResourceVersion:8634717,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6d187 0xc001d6d188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6d200} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6d220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.357: INFO: Pod "nginx-deployment-7b8c6f4498-hlhxq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hlhxq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-hlhxq,UID:1b4bc84c-26f5-45d0-bfa4-195b3cfb6514,ResourceVersion:8634739,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6d2c7 0xc001d6d2c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6d340} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001d6d360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.357: INFO: Pod "nginx-deployment-7b8c6f4498-jkzjw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jkzjw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-jkzjw,UID:ea6bba99-8c0e-4ef2-9df2-0a0f8ef4f7e7,ResourceVersion:8634632,Generation:0,CreationTimestamp:2020-05-02 14:10:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6d3e7 0xc001d6d3e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6d460} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6d480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.144,StartTime:2020-05-02 14:10:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 14:10:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://ee32a43ee0f3f85f32614bd8c923246eebab02c8bb5253df33c27e94ec9b5198}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.358: INFO: Pod "nginx-deployment-7b8c6f4498-k6jwd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k6jwd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-k6jwd,UID:3c78496c-1017-4f4b-aeb9-b1c13f1107b0,ResourceVersion:8634760,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6d557 0xc001d6d558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6d5d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6d5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-02 14:10:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.358: INFO: Pod "nginx-deployment-7b8c6f4498-kf4tj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kf4tj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-kf4tj,UID:5b7f582a-9f35-43df-8fd5-1dd21c9f8457,ResourceVersion:8634732,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6d6b7 0xc001d6d6b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6d730} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6d750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.358: INFO: Pod "nginx-deployment-7b8c6f4498-kpw7n" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kpw7n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-kpw7n,UID:f76554b4-12a7-4e20-8221-48304b5c1fec,ResourceVersion:8634597,Generation:0,CreationTimestamp:2020-05-02 14:10:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6d7d7 0xc001d6d7d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6d860} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6d880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.142,StartTime:2020-05-02 14:10:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 14:10:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0e0c792bcf12c60c698ddfc8b27e1179453904c0d446516c4502589b1cc61e61}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.358: INFO: Pod "nginx-deployment-7b8c6f4498-pghjz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pghjz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-pghjz,UID:a7e2c6fd-8a1f-4150-966b-a6d40ca6e17f,ResourceVersion:8634594,Generation:0,CreationTimestamp:2020-05-02 14:10:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6d967 0xc001d6d968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6d9e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6da00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.62,StartTime:2020-05-02 14:10:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 14:10:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://94ffea04c95da1d9834dc26c9f97cc76845ea89e78bc6639a4167f1675b5c7d9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.358: INFO: Pod "nginx-deployment-7b8c6f4498-rc7kh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rc7kh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-rc7kh,UID:fbcbc9d7-61a1-412c-93e4-708baf42265d,ResourceVersion:8634605,Generation:0,CreationTimestamp:2020-05-02 14:10:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6dad7 0xc001d6dad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6db50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6db70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.63,StartTime:2020-05-02 14:10:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 14:10:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5ba9b40c941683645f26f448df1f3bc774d75707823ca5edfad3cd47ccecc942}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.358: INFO: Pod "nginx-deployment-7b8c6f4498-sg7xd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sg7xd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-sg7xd,UID:f3736ca1-2f92-444b-a821-e1889520d1a3,ResourceVersion:8634755,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6dc47 0xc001d6dc48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6dcc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6dce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-02 14:10:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.358: INFO: Pod "nginx-deployment-7b8c6f4498-vcpsv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vcpsv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-vcpsv,UID:84965024-30b5-4b1d-933e-f5eefa4eced7,ResourceVersion:8634612,Generation:0,CreationTimestamp:2020-05-02 14:10:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6dda7 0xc001d6dda8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6de20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6de40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.64,StartTime:2020-05-02 14:10:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 14:10:48 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0c7625561f1f638aa64d2e7c8bbb6c8e49ab7b2cca0afba8c848d2f8ff2c221d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.358: INFO: Pod "nginx-deployment-7b8c6f4498-vrbgz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vrbgz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-vrbgz,UID:a9491916-5adf-4f33-9a62-2e1d114aec39,ResourceVersion:8634625,Generation:0,CreationTimestamp:2020-05-02 14:10:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc001d6df17 0xc001d6df18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d6df90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d6dfb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.66,StartTime:2020-05-02 14:10:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 14:10:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4e6364c88567d2d9c50d87d629122c7c4ff070a4d6aed24c3a22831b94917f05}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.358: INFO: Pod "nginx-deployment-7b8c6f4498-wvps9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wvps9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-wvps9,UID:5f8ebb90-edb2-41fc-8a93-96e1084a4f3b,ResourceVersion:8634723,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc0029e2087 0xc0029e2088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029e2100} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029e2230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.359: INFO: Pod "nginx-deployment-7b8c6f4498-wwg66" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wwg66,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-wwg66,UID:503242bd-d121-45ae-982a-39c10b41bb81,ResourceVersion:8634608,Generation:0,CreationTimestamp:2020-05-02 14:10:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc0029e23e7 0xc0029e23e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029e2510} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0029e2590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.143,StartTime:2020-05-02 14:10:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 14:10:48 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6b51c6ecdc8291aa298013d45139f1baff36dd769c20a1fee900325d78ca10be}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.359: INFO: Pod "nginx-deployment-7b8c6f4498-xlt8t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xlt8t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-xlt8t,UID:4125eb37-7b79-4721-8afa-7a99a0defab9,ResourceVersion:8634745,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc0029e26f7 0xc0029e26f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029e2870} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029e2890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-02 14:10:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.359: INFO: Pod "nginx-deployment-7b8c6f4498-xlx7h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xlx7h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-xlx7h,UID:9759ccc9-c009-4271-9f3e-3fe874f16f69,ResourceVersion:8634738,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc0029e2a87 0xc0029e2a88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029e2b70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029e2b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 14:10:54.359: INFO: Pod "nginx-deployment-7b8c6f4498-xr9p6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xr9p6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8402,SelfLink:/api/v1/namespaces/deployment-8402/pods/nginx-deployment-7b8c6f4498-xr9p6,UID:be199961-4bac-423b-b3f5-a6dffb90ad54,ResourceVersion:8634734,Generation:0,CreationTimestamp:2020-05-02 14:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f9ae3e5-e3cc-4215-a9e5-d8e56922c914 0xc0029e2cf7 0xc0029e2cf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54qqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54qqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-54qqg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029e2e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029e2ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:10:54.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8402" for this suite. 
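The pod dump above is the evidence the proportional-scaling assertion runs against: every available pod belongs to the old ReplicaSet nginx-deployment-7b8c6f4498 (image docker.io/library/nginx:1.14-alpine), while the pods of the new ReplicaSet nginx-deployment-55fb7cb77f were created from the image tag nginx:404, which never becomes ready, so they stay Pending. When the deployment is scaled up mid-rollout, the deployment controller distributes the additional replicas across both ReplicaSets in proportion to their current sizes rather than sending them all to the newest template. The sketch below is not part of the suite; it shows one way to recover that per-ReplicaSet split from outside the test, assuming a client-go release contemporary with this cluster (pre-0.18, where List takes no context argument). The namespace, label selector, and kubeconfig path are taken from the log.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Use the same kubeconfig the suite reports at startup.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Select the deployment's pods by the "name: nginx" label seen in the
	// dump, then group them by pod-template-hash, i.e. by owning ReplicaSet.
	pods, err := client.CoreV1().Pods("deployment-8402").List(metav1.ListOptions{
		LabelSelector: "name=nginx",
	})
	if err != nil {
		panic(err)
	}
	perReplicaSet := map[string]int{}
	for _, pod := range pods.Items {
		perReplicaSet[pod.Labels["pod-template-hash"]]++
	}
	// With proportional scaling, both hashes (55fb7cb77f and 7b8c6f4498)
	// should each report a share of the scaled-up replica count.
	for hash, n := range perReplicaSet {
		fmt.Printf("pod-template-hash %s: %d pods\n", hash, n)
	}
}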
May 2 14:11:24.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:11:24.548: INFO: namespace deployment-8402 deletion completed in 30.148823146s • [SLOW TEST:45.916 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:11:24.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-4ede033d-3960-4524-88a6-7d6ad1037a1f STEP: Creating a pod to test consume secrets May 2 14:11:24.628: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-431f865a-2d6b-418a-9743-df5d5c1fcb08" in namespace "projected-2126" to be "success or failure" May 2 14:11:24.638: INFO: Pod "pod-projected-secrets-431f865a-2d6b-418a-9743-df5d5c1fcb08": Phase="Pending", Reason="", readiness=false. Elapsed: 9.612973ms May 2 14:11:26.643: INFO: Pod "pod-projected-secrets-431f865a-2d6b-418a-9743-df5d5c1fcb08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014429612s May 2 14:11:28.656: INFO: Pod "pod-projected-secrets-431f865a-2d6b-418a-9743-df5d5c1fcb08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027982928s STEP: Saw pod success May 2 14:11:28.656: INFO: Pod "pod-projected-secrets-431f865a-2d6b-418a-9743-df5d5c1fcb08" satisfied condition "success or failure" May 2 14:11:28.660: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-431f865a-2d6b-418a-9743-df5d5c1fcb08 container projected-secret-volume-test: STEP: delete the pod May 2 14:11:28.749: INFO: Waiting for pod pod-projected-secrets-431f865a-2d6b-418a-9743-df5d5c1fcb08 to disappear May 2 14:11:28.776: INFO: Pod pod-projected-secrets-431f865a-2d6b-418a-9743-df5d5c1fcb08 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:11:28.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2126" for this suite. 
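Note on the spec above: the projected-secret test mounts a Secret through a projected volume and verifies a non-root user can read it. defaultMode sets the permission bits on the projected files, and fsGroup in the pod securityContext makes the kubelet group-own the volume contents so the non-root UID can read them. A minimal sketch, assuming an existing Secret; the names, image, and IDs below are illustrative (the test generates UUID-based names and its own test image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # illustrative
spec:
  securityContext:
    runAsUser: 1000    # non-root
    fsGroup: 2000      # projected files are group-owned by GID 2000
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/nginx:1.14-alpine   # any image with /bin/sh works
    command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/*"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
  volumes:
  - name: projected-secret
    projected:
      defaultMode: 0440        # r--r----- on each projected file
      sources:
      - secret:
          name: demo-secret    # assumes this Secret already exists
EOF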
May 2 14:11:34.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:11:34.878: INFO: namespace projected-2126 deletion completed in 6.098729005s • [SLOW TEST:10.330 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:11:34.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-6321 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6321 to expose endpoints map[] May 2 14:11:35.026: INFO: Get endpoints failed (3.894383ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 2 14:11:36.030: INFO: successfully validated that service multi-endpoint-test in namespace services-6321 exposes endpoints map[] (1.00771564s elapsed) STEP: Creating pod pod1 in namespace services-6321 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6321 to expose endpoints map[pod1:[100]] May 2 14:11:40.229: INFO: successfully validated that service multi-endpoint-test in namespace services-6321 exposes endpoints map[pod1:[100]] (4.191548676s elapsed) STEP: Creating pod pod2 in namespace services-6321 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6321 to expose endpoints map[pod1:[100] pod2:[101]] May 2 14:11:44.696: INFO: successfully validated that service multi-endpoint-test in namespace services-6321 exposes endpoints map[pod1:[100] pod2:[101]] (4.462344765s elapsed) STEP: Deleting pod pod1 in namespace services-6321 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6321 to expose endpoints map[pod2:[101]] May 2 14:11:45.724: INFO: successfully validated that service multi-endpoint-test in namespace services-6321 exposes endpoints map[pod2:[101]] (1.022479338s elapsed) STEP: Deleting pod pod2 in namespace services-6321 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6321 to expose endpoints map[] May 2 14:11:46.780: INFO: successfully validated that service multi-endpoint-test in namespace services-6321 exposes endpoints map[] (1.051409795s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:11:46.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"services-6321" for this suite. May 2 14:11:52.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:11:53.055: INFO: namespace services-6321 deletion completed in 6.125216665s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:18.176 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:11:53.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 14:11:53.202: INFO: Create a RollingUpdate DaemonSet May 2 14:11:53.206: INFO: Check that daemon pods launch on every node of the cluster May 2 14:11:53.220: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:11:53.225: INFO: Number of nodes with available pods: 0 May 2 14:11:53.225: INFO: Node iruya-worker is running more than one daemon pod May 2 14:11:54.230: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:11:54.233: INFO: Number of nodes with available pods: 0 May 2 14:11:54.233: INFO: Node iruya-worker is running more than one daemon pod May 2 14:11:55.335: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:11:55.338: INFO: Number of nodes with available pods: 0 May 2 14:11:55.338: INFO: Node iruya-worker is running more than one daemon pod May 2 14:11:56.232: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:11:56.236: INFO: Number of nodes with available pods: 0 May 2 14:11:56.236: INFO: Node iruya-worker is running more than one daemon pod May 2 14:11:57.238: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:11:57.240: INFO: Number of nodes with available pods: 0 May 2 14:11:57.240: INFO: Node iruya-worker is running more than one daemon pod May 2 
14:11:58.230: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:11:58.250: INFO: Number of nodes with available pods: 1 May 2 14:11:58.250: INFO: Node iruya-worker is running more than one daemon pod May 2 14:11:59.230: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:11:59.233: INFO: Number of nodes with available pods: 2 May 2 14:11:59.233: INFO: Number of running nodes: 2, number of available pods: 2 May 2 14:11:59.233: INFO: Update the DaemonSet to trigger a rollout May 2 14:11:59.240: INFO: Updating DaemonSet daemon-set May 2 14:12:12.265: INFO: Roll back the DaemonSet before rollout is complete May 2 14:12:12.272: INFO: Updating DaemonSet daemon-set May 2 14:12:12.272: INFO: Make sure DaemonSet rollback is complete May 2 14:12:12.328: INFO: Wrong image for pod: daemon-set-x2cnc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 2 14:12:12.328: INFO: Pod daemon-set-x2cnc is not available May 2 14:12:12.333: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:12:13.346: INFO: Wrong image for pod: daemon-set-x2cnc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 2 14:12:13.346: INFO: Pod daemon-set-x2cnc is not available May 2 14:12:13.349: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:12:14.405: INFO: Wrong image for pod: daemon-set-x2cnc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 2 14:12:14.406: INFO: Pod daemon-set-x2cnc is not available May 2 14:12:14.448: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:12:15.338: INFO: Wrong image for pod: daemon-set-x2cnc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
May 2 14:12:15.338: INFO: Pod daemon-set-x2cnc is not available May 2 14:12:15.342: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:12:16.336: INFO: Pod daemon-set-6n65x is not available May 2 14:12:16.340: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-162, will wait for the garbage collector to delete the pods May 2 14:12:16.481: INFO: Deleting DaemonSet.extensions daemon-set took: 82.077824ms May 2 14:12:16.781: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.297245ms May 2 14:12:22.285: INFO: Number of nodes with available pods: 0 May 2 14:12:22.285: INFO: Number of running nodes: 0, number of available pods: 0 May 2 14:12:22.287: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-162/daemonsets","resourceVersion":"8635379"},"items":null} May 2 14:12:22.289: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-162/pods","resourceVersion":"8635379"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:12:22.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-162" for this suite. 
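Note on the spec above: the rollback test pushes the DaemonSet to an unpullable image (foo:non-existent), then rolls back before the rollout completes; the "Wrong image for pod" checks confirm that only the broken pod is replaced and pods still running the original image are not restarted. The repeated "can't tolerate node iruya-control-plane" lines mean the daemon pods carry no toleration for the control-plane taint, so that node is skipped. Roughly the same sequence expressed with kubectl (the spec drives the API directly; the container name here is an assumption):

kubectl -n daemonsets-162 set image daemonset/daemon-set app=foo:non-existent   # a rollout that can never become healthy
kubectl -n daemonsets-162 rollout undo daemonset/daemon-set                     # roll back mid-rollout
kubectl -n daemonsets-162 rollout status daemonset/daemon-set                   # converges without restarting healthy pods
# To land on the tainted control-plane node, the pod template would need:
#   tolerations:
#   - {key: node-role.kubernetes.io/master, operator: Exists, effect: NoSchedule}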
May 2 14:12:28.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:12:28.457: INFO: namespace daemonsets-162 deletion completed in 6.156139494s • [SLOW TEST:35.401 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:12:28.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 14:12:28.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f549ed4-9556-45ae-befc-fe07664cd657" in namespace "downward-api-9717" to be "success or failure" May 2 14:12:28.580: INFO: Pod "downwardapi-volume-2f549ed4-9556-45ae-befc-fe07664cd657": Phase="Pending", Reason="", readiness=false. Elapsed: 2.984123ms May 2 14:12:30.592: INFO: Pod "downwardapi-volume-2f549ed4-9556-45ae-befc-fe07664cd657": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015626536s May 2 14:12:32.595: INFO: Pod "downwardapi-volume-2f549ed4-9556-45ae-befc-fe07664cd657": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018594157s STEP: Saw pod success May 2 14:12:32.596: INFO: Pod "downwardapi-volume-2f549ed4-9556-45ae-befc-fe07664cd657" satisfied condition "success or failure" May 2 14:12:32.598: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2f549ed4-9556-45ae-befc-fe07664cd657 container client-container: STEP: delete the pod May 2 14:12:32.640: INFO: Waiting for pod downwardapi-volume-2f549ed4-9556-45ae-befc-fe07664cd657 to disappear May 2 14:12:32.655: INFO: Pod downwardapi-volume-2f549ed4-9556-45ae-befc-fe07664cd657 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:12:32.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9717" for this suite. 
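Note on the spec above: the downward-api test mounts a downwardAPI volume whose item points at the container's own limits.cpu through a resourceFieldRef; the kubelet writes the value into a file, scaled by the divisor. A minimal sketch with illustrative names and values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # illustrative; the test generates UUID names
spec:
  containers:
  - name: client-container
    image: docker.io/library/nginx:1.14-alpine   # any image with /bin/sh works
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m    # the value the volume file will report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m    # report in millicores: the file contains "500"
EOF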
May 2 14:12:38.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:12:38.775: INFO: namespace downward-api-9717 deletion completed in 6.116897673s • [SLOW TEST:10.317 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:12:38.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9719 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9719 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9719 May 2 14:12:38.946: INFO: Found 0 stateful pods, waiting for 1 May 2 14:12:48.951: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 2 14:12:48.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9719 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 14:12:49.218: INFO: stderr: "I0502 14:12:49.083581 2469 log.go:172] (0xc000918420) (0xc0007b88c0) Create stream\nI0502 14:12:49.083640 2469 log.go:172] (0xc000918420) (0xc0007b88c0) Stream added, broadcasting: 1\nI0502 14:12:49.086152 2469 log.go:172] (0xc000918420) Reply frame received for 1\nI0502 14:12:49.086193 2469 log.go:172] (0xc000918420) (0xc0007b8960) Create stream\nI0502 14:12:49.086205 2469 log.go:172] (0xc000918420) (0xc0007b8960) Stream added, broadcasting: 3\nI0502 14:12:49.087237 2469 log.go:172] (0xc000918420) Reply frame received for 3\nI0502 14:12:49.087264 2469 log.go:172] (0xc000918420) (0xc0007b8a00) Create stream\nI0502 14:12:49.087284 2469 log.go:172] (0xc000918420) (0xc0007b8a00) Stream added, broadcasting: 5\nI0502 14:12:49.088445 2469 log.go:172] (0xc000918420) Reply frame received for 5\nI0502 14:12:49.174405 2469 log.go:172] (0xc000918420) Data frame received for 5\nI0502 14:12:49.174442 2469 log.go:172] (0xc0007b8a00) (5) Data frame handling\nI0502 14:12:49.174466 2469 log.go:172] (0xc0007b8a00) (5) Data frame 
sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0502 14:12:49.210787 2469 log.go:172] (0xc000918420) Data frame received for 5\nI0502 14:12:49.210814 2469 log.go:172] (0xc0007b8a00) (5) Data frame handling\nI0502 14:12:49.210833 2469 log.go:172] (0xc000918420) Data frame received for 3\nI0502 14:12:49.210839 2469 log.go:172] (0xc0007b8960) (3) Data frame handling\nI0502 14:12:49.210844 2469 log.go:172] (0xc0007b8960) (3) Data frame sent\nI0502 14:12:49.210848 2469 log.go:172] (0xc000918420) Data frame received for 3\nI0502 14:12:49.210852 2469 log.go:172] (0xc0007b8960) (3) Data frame handling\nI0502 14:12:49.213294 2469 log.go:172] (0xc000918420) Data frame received for 1\nI0502 14:12:49.213335 2469 log.go:172] (0xc0007b88c0) (1) Data frame handling\nI0502 14:12:49.213346 2469 log.go:172] (0xc0007b88c0) (1) Data frame sent\nI0502 14:12:49.213404 2469 log.go:172] (0xc000918420) (0xc0007b88c0) Stream removed, broadcasting: 1\nI0502 14:12:49.213423 2469 log.go:172] (0xc000918420) Go away received\nI0502 14:12:49.213972 2469 log.go:172] (0xc000918420) (0xc0007b88c0) Stream removed, broadcasting: 1\nI0502 14:12:49.214008 2469 log.go:172] (0xc000918420) (0xc0007b8960) Stream removed, broadcasting: 3\nI0502 14:12:49.214029 2469 log.go:172] (0xc000918420) (0xc0007b8a00) Stream removed, broadcasting: 5\n" May 2 14:12:49.218: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 14:12:49.218: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 14:12:49.222: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 2 14:12:59.226: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 2 14:12:59.226: INFO: Waiting for statefulset status.replicas updated to 0 May 2 14:12:59.305: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999492s May 2 14:13:00.317: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.930084717s May 2 14:13:01.322: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.918185037s May 2 14:13:02.327: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.91296906s May 2 14:13:03.333: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.907684908s May 2 14:13:04.337: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.902269677s May 2 14:13:05.342: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.89757938s May 2 14:13:06.346: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.892875184s May 2 14:13:07.350: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.888726669s May 2 14:13:08.355: INFO: Verifying statefulset ss doesn't scale past 1 for another 885.137831ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9719 May 2 14:13:09.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 14:13:09.581: INFO: stderr: "I0502 14:13:09.484700 2485 log.go:172] (0xc00012a6e0) (0xc0005221e0) Create stream\nI0502 14:13:09.484758 2485 log.go:172] (0xc00012a6e0) (0xc0005221e0) Stream added, broadcasting: 1\nI0502 14:13:09.487472 2485 log.go:172] (0xc00012a6e0) Reply frame received for 1\nI0502 14:13:09.487509 2485 log.go:172] (0xc00012a6e0) (0xc00064e780) Create 
stream\nI0502 14:13:09.487527 2485 log.go:172] (0xc00012a6e0) (0xc00064e780) Stream added, broadcasting: 3\nI0502 14:13:09.488604 2485 log.go:172] (0xc00012a6e0) Reply frame received for 3\nI0502 14:13:09.488637 2485 log.go:172] (0xc00012a6e0) (0xc000522320) Create stream\nI0502 14:13:09.488659 2485 log.go:172] (0xc00012a6e0) (0xc000522320) Stream added, broadcasting: 5\nI0502 14:13:09.489724 2485 log.go:172] (0xc00012a6e0) Reply frame received for 5\nI0502 14:13:09.574378 2485 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0502 14:13:09.574420 2485 log.go:172] (0xc00064e780) (3) Data frame handling\nI0502 14:13:09.574434 2485 log.go:172] (0xc00064e780) (3) Data frame sent\nI0502 14:13:09.574443 2485 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0502 14:13:09.574450 2485 log.go:172] (0xc00064e780) (3) Data frame handling\nI0502 14:13:09.574478 2485 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0502 14:13:09.574491 2485 log.go:172] (0xc000522320) (5) Data frame handling\nI0502 14:13:09.574508 2485 log.go:172] (0xc000522320) (5) Data frame sent\nI0502 14:13:09.574518 2485 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0502 14:13:09.574525 2485 log.go:172] (0xc000522320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0502 14:13:09.575652 2485 log.go:172] (0xc00012a6e0) Data frame received for 1\nI0502 14:13:09.575684 2485 log.go:172] (0xc0005221e0) (1) Data frame handling\nI0502 14:13:09.575700 2485 log.go:172] (0xc0005221e0) (1) Data frame sent\nI0502 14:13:09.575717 2485 log.go:172] (0xc00012a6e0) (0xc0005221e0) Stream removed, broadcasting: 1\nI0502 14:13:09.575736 2485 log.go:172] (0xc00012a6e0) Go away received\nI0502 14:13:09.576305 2485 log.go:172] (0xc00012a6e0) (0xc0005221e0) Stream removed, broadcasting: 1\nI0502 14:13:09.576325 2485 log.go:172] (0xc00012a6e0) (0xc00064e780) Stream removed, broadcasting: 3\nI0502 14:13:09.576336 2485 log.go:172] (0xc00012a6e0) (0xc000522320) Stream removed, broadcasting: 5\n" May 2 14:13:09.581: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 14:13:09.581: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 14:13:09.585: INFO: Found 1 stateful pods, waiting for 3 May 2 14:13:19.590: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 2 14:13:19.590: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 2 14:13:19.590: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 2 14:13:19.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9719 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 14:13:19.806: INFO: stderr: "I0502 14:13:19.725727 2505 log.go:172] (0xc000a980b0) (0xc000a7a6e0) Create stream\nI0502 14:13:19.725785 2505 log.go:172] (0xc000a980b0) (0xc000a7a6e0) Stream added, broadcasting: 1\nI0502 14:13:19.727276 2505 log.go:172] (0xc000a980b0) Reply frame received for 1\nI0502 14:13:19.727318 2505 log.go:172] (0xc000a980b0) (0xc0001fa000) Create stream\nI0502 14:13:19.727332 2505 log.go:172] (0xc000a980b0) (0xc0001fa000) Stream added, broadcasting: 3\nI0502 14:13:19.728139 2505 log.go:172] (0xc000a980b0) Reply frame received for 3\nI0502 14:13:19.728186 
2505 log.go:172] (0xc000a980b0) (0xc000958500) Create stream\nI0502 14:13:19.728203 2505 log.go:172] (0xc000a980b0) (0xc000958500) Stream added, broadcasting: 5\nI0502 14:13:19.729006 2505 log.go:172] (0xc000a980b0) Reply frame received for 5\nI0502 14:13:19.798991 2505 log.go:172] (0xc000a980b0) Data frame received for 3\nI0502 14:13:19.799052 2505 log.go:172] (0xc0001fa000) (3) Data frame handling\nI0502 14:13:19.799085 2505 log.go:172] (0xc0001fa000) (3) Data frame sent\nI0502 14:13:19.799113 2505 log.go:172] (0xc000a980b0) Data frame received for 3\nI0502 14:13:19.799131 2505 log.go:172] (0xc0001fa000) (3) Data frame handling\nI0502 14:13:19.799153 2505 log.go:172] (0xc000a980b0) Data frame received for 5\nI0502 14:13:19.799170 2505 log.go:172] (0xc000958500) (5) Data frame handling\nI0502 14:13:19.799182 2505 log.go:172] (0xc000958500) (5) Data frame sent\nI0502 14:13:19.799193 2505 log.go:172] (0xc000a980b0) Data frame received for 5\nI0502 14:13:19.799202 2505 log.go:172] (0xc000958500) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0502 14:13:19.800675 2505 log.go:172] (0xc000a980b0) Data frame received for 1\nI0502 14:13:19.800697 2505 log.go:172] (0xc000a7a6e0) (1) Data frame handling\nI0502 14:13:19.800709 2505 log.go:172] (0xc000a7a6e0) (1) Data frame sent\nI0502 14:13:19.800721 2505 log.go:172] (0xc000a980b0) (0xc000a7a6e0) Stream removed, broadcasting: 1\nI0502 14:13:19.800735 2505 log.go:172] (0xc000a980b0) Go away received\nI0502 14:13:19.801421 2505 log.go:172] (0xc000a980b0) (0xc000a7a6e0) Stream removed, broadcasting: 1\nI0502 14:13:19.801449 2505 log.go:172] (0xc000a980b0) (0xc0001fa000) Stream removed, broadcasting: 3\nI0502 14:13:19.801461 2505 log.go:172] (0xc000a980b0) (0xc000958500) Stream removed, broadcasting: 5\n" May 2 14:13:19.806: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 14:13:19.806: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 14:13:19.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9719 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 14:13:20.046: INFO: stderr: "I0502 14:13:19.939918 2520 log.go:172] (0xc000984420) (0xc0003026e0) Create stream\nI0502 14:13:19.939969 2520 log.go:172] (0xc000984420) (0xc0003026e0) Stream added, broadcasting: 1\nI0502 14:13:19.942244 2520 log.go:172] (0xc000984420) Reply frame received for 1\nI0502 14:13:19.942276 2520 log.go:172] (0xc000984420) (0xc00051e5a0) Create stream\nI0502 14:13:19.942283 2520 log.go:172] (0xc000984420) (0xc00051e5a0) Stream added, broadcasting: 3\nI0502 14:13:19.943190 2520 log.go:172] (0xc000984420) Reply frame received for 3\nI0502 14:13:19.943236 2520 log.go:172] (0xc000984420) (0xc0007cc000) Create stream\nI0502 14:13:19.943256 2520 log.go:172] (0xc000984420) (0xc0007cc000) Stream added, broadcasting: 5\nI0502 14:13:19.944098 2520 log.go:172] (0xc000984420) Reply frame received for 5\nI0502 14:13:20.009584 2520 log.go:172] (0xc000984420) Data frame received for 5\nI0502 14:13:20.009650 2520 log.go:172] (0xc0007cc000) (5) Data frame handling\nI0502 14:13:20.009680 2520 log.go:172] (0xc0007cc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0502 14:13:20.038255 2520 log.go:172] (0xc000984420) Data frame received for 5\nI0502 14:13:20.038300 2520 log.go:172] (0xc0007cc000) (5) Data frame handling\nI0502 14:13:20.038333 2520 
log.go:172] (0xc000984420) Data frame received for 3\nI0502 14:13:20.038355 2520 log.go:172] (0xc00051e5a0) (3) Data frame handling\nI0502 14:13:20.038383 2520 log.go:172] (0xc00051e5a0) (3) Data frame sent\nI0502 14:13:20.038420 2520 log.go:172] (0xc000984420) Data frame received for 3\nI0502 14:13:20.038434 2520 log.go:172] (0xc00051e5a0) (3) Data frame handling\nI0502 14:13:20.040402 2520 log.go:172] (0xc000984420) Data frame received for 1\nI0502 14:13:20.040437 2520 log.go:172] (0xc0003026e0) (1) Data frame handling\nI0502 14:13:20.040449 2520 log.go:172] (0xc0003026e0) (1) Data frame sent\nI0502 14:13:20.040466 2520 log.go:172] (0xc000984420) (0xc0003026e0) Stream removed, broadcasting: 1\nI0502 14:13:20.040484 2520 log.go:172] (0xc000984420) Go away received\nI0502 14:13:20.040750 2520 log.go:172] (0xc000984420) (0xc0003026e0) Stream removed, broadcasting: 1\nI0502 14:13:20.040763 2520 log.go:172] (0xc000984420) (0xc00051e5a0) Stream removed, broadcasting: 3\nI0502 14:13:20.040769 2520 log.go:172] (0xc000984420) (0xc0007cc000) Stream removed, broadcasting: 5\n" May 2 14:13:20.046: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 14:13:20.046: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 14:13:20.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9719 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 14:13:20.309: INFO: stderr: "I0502 14:13:20.187024 2543 log.go:172] (0xc000840420) (0xc00082c6e0) Create stream\nI0502 14:13:20.187110 2543 log.go:172] (0xc000840420) (0xc00082c6e0) Stream added, broadcasting: 1\nI0502 14:13:20.189805 2543 log.go:172] (0xc000840420) Reply frame received for 1\nI0502 14:13:20.189850 2543 log.go:172] (0xc000840420) (0xc00076a000) Create stream\nI0502 14:13:20.189865 2543 log.go:172] (0xc000840420) (0xc00076a000) Stream added, broadcasting: 3\nI0502 14:13:20.190982 2543 log.go:172] (0xc000840420) Reply frame received for 3\nI0502 14:13:20.191042 2543 log.go:172] (0xc000840420) (0xc00033a280) Create stream\nI0502 14:13:20.191077 2543 log.go:172] (0xc000840420) (0xc00033a280) Stream added, broadcasting: 5\nI0502 14:13:20.192018 2543 log.go:172] (0xc000840420) Reply frame received for 5\nI0502 14:13:20.264906 2543 log.go:172] (0xc000840420) Data frame received for 5\nI0502 14:13:20.264947 2543 log.go:172] (0xc00033a280) (5) Data frame handling\nI0502 14:13:20.264982 2543 log.go:172] (0xc00033a280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0502 14:13:20.295162 2543 log.go:172] (0xc000840420) Data frame received for 3\nI0502 14:13:20.295198 2543 log.go:172] (0xc00076a000) (3) Data frame handling\nI0502 14:13:20.295220 2543 log.go:172] (0xc00076a000) (3) Data frame sent\nI0502 14:13:20.295516 2543 log.go:172] (0xc000840420) Data frame received for 5\nI0502 14:13:20.295559 2543 log.go:172] (0xc00033a280) (5) Data frame handling\nI0502 14:13:20.295675 2543 log.go:172] (0xc000840420) Data frame received for 3\nI0502 14:13:20.295697 2543 log.go:172] (0xc00076a000) (3) Data frame handling\nI0502 14:13:20.303181 2543 log.go:172] (0xc000840420) Data frame received for 1\nI0502 14:13:20.303210 2543 log.go:172] (0xc00082c6e0) (1) Data frame handling\nI0502 14:13:20.303225 2543 log.go:172] (0xc00082c6e0) (1) Data frame sent\nI0502 14:13:20.303240 2543 log.go:172] (0xc000840420) (0xc00082c6e0) Stream removed, broadcasting: 1\nI0502 
14:13:20.303261 2543 log.go:172] (0xc000840420) Go away received\nI0502 14:13:20.303621 2543 log.go:172] (0xc000840420) (0xc00082c6e0) Stream removed, broadcasting: 1\nI0502 14:13:20.303648 2543 log.go:172] (0xc000840420) (0xc00076a000) Stream removed, broadcasting: 3\nI0502 14:13:20.303662 2543 log.go:172] (0xc000840420) (0xc00033a280) Stream removed, broadcasting: 5\n" May 2 14:13:20.309: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 14:13:20.309: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 14:13:20.309: INFO: Waiting for statefulset status.replicas updated to 0 May 2 14:13:20.312: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 2 14:13:30.321: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 2 14:13:30.321: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 2 14:13:30.321: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 2 14:13:30.343: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999559s May 2 14:13:31.349: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995912988s May 2 14:13:32.355: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990088001s May 2 14:13:33.360: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984221388s May 2 14:13:34.366: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979256589s May 2 14:13:35.371: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973319777s May 2 14:13:36.377: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.968087957s May 2 14:13:37.401: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.961839341s May 2 14:13:38.407: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.937571495s May 2 14:13:39.411: INFO: Verifying statefulset ss doesn't scale past 3 for another 932.134801ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9719 May 2 14:13:40.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 14:13:40.648: INFO: stderr: "I0502 14:13:40.559279 2563 log.go:172] (0xc000b00630) (0xc00070cdc0) Create stream\nI0502 14:13:40.559333 2563 log.go:172] (0xc000b00630) (0xc00070cdc0) Stream added, broadcasting: 1\nI0502 14:13:40.563359 2563 log.go:172] (0xc000b00630) Reply frame received for 1\nI0502 14:13:40.563427 2563 log.go:172] (0xc000b00630) (0xc00070c500) Create stream\nI0502 14:13:40.563441 2563 log.go:172] (0xc000b00630) (0xc00070c500) Stream added, broadcasting: 3\nI0502 14:13:40.564401 2563 log.go:172] (0xc000b00630) Reply frame received for 3\nI0502 14:13:40.564452 2563 log.go:172] (0xc000b00630) (0xc000122000) Create stream\nI0502 14:13:40.564467 2563 log.go:172] (0xc000b00630) (0xc000122000) Stream added, broadcasting: 5\nI0502 14:13:40.565500 2563 log.go:172] (0xc000b00630) Reply frame received for 5\nI0502 14:13:40.641387 2563 log.go:172] (0xc000b00630) Data frame received for 3\nI0502 14:13:40.641426 2563 log.go:172] (0xc00070c500) (3) Data frame handling\nI0502 14:13:40.641454 2563 log.go:172] (0xc00070c500) (3) Data frame sent\nI0502 14:13:40.641639 2563 log.go:172] (0xc000b00630) Data frame 
received for 3\nI0502 14:13:40.641657 2563 log.go:172] (0xc00070c500) (3) Data frame handling\nI0502 14:13:40.641699 2563 log.go:172] (0xc000b00630) Data frame received for 5\nI0502 14:13:40.641731 2563 log.go:172] (0xc000122000) (5) Data frame handling\nI0502 14:13:40.641755 2563 log.go:172] (0xc000122000) (5) Data frame sent\nI0502 14:13:40.641767 2563 log.go:172] (0xc000b00630) Data frame received for 5\nI0502 14:13:40.641774 2563 log.go:172] (0xc000122000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0502 14:13:40.643012 2563 log.go:172] (0xc000b00630) Data frame received for 1\nI0502 14:13:40.643044 2563 log.go:172] (0xc00070cdc0) (1) Data frame handling\nI0502 14:13:40.643068 2563 log.go:172] (0xc00070cdc0) (1) Data frame sent\nI0502 14:13:40.643096 2563 log.go:172] (0xc000b00630) (0xc00070cdc0) Stream removed, broadcasting: 1\nI0502 14:13:40.643123 2563 log.go:172] (0xc000b00630) Go away received\nI0502 14:13:40.643409 2563 log.go:172] (0xc000b00630) (0xc00070cdc0) Stream removed, broadcasting: 1\nI0502 14:13:40.643426 2563 log.go:172] (0xc000b00630) (0xc00070c500) Stream removed, broadcasting: 3\nI0502 14:13:40.643434 2563 log.go:172] (0xc000b00630) (0xc000122000) Stream removed, broadcasting: 5\n" May 2 14:13:40.648: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 14:13:40.648: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 14:13:40.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9719 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 14:13:40.870: INFO: stderr: "I0502 14:13:40.789831 2583 log.go:172] (0xc0006dcc60) (0xc0006d8b40) Create stream\nI0502 14:13:40.789876 2583 log.go:172] (0xc0006dcc60) (0xc0006d8b40) Stream added, broadcasting: 1\nI0502 14:13:40.793422 2583 log.go:172] (0xc0006dcc60) Reply frame received for 1\nI0502 14:13:40.793460 2583 log.go:172] (0xc0006dcc60) (0xc0006d8280) Create stream\nI0502 14:13:40.793470 2583 log.go:172] (0xc0006dcc60) (0xc0006d8280) Stream added, broadcasting: 3\nI0502 14:13:40.794433 2583 log.go:172] (0xc0006dcc60) Reply frame received for 3\nI0502 14:13:40.794477 2583 log.go:172] (0xc0006dcc60) (0xc00001c000) Create stream\nI0502 14:13:40.794488 2583 log.go:172] (0xc0006dcc60) (0xc00001c000) Stream added, broadcasting: 5\nI0502 14:13:40.795314 2583 log.go:172] (0xc0006dcc60) Reply frame received for 5\nI0502 14:13:40.865247 2583 log.go:172] (0xc0006dcc60) Data frame received for 5\nI0502 14:13:40.865274 2583 log.go:172] (0xc00001c000) (5) Data frame handling\nI0502 14:13:40.865282 2583 log.go:172] (0xc00001c000) (5) Data frame sent\nI0502 14:13:40.865288 2583 log.go:172] (0xc0006dcc60) Data frame received for 5\nI0502 14:13:40.865292 2583 log.go:172] (0xc00001c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0502 14:13:40.865317 2583 log.go:172] (0xc0006dcc60) Data frame received for 3\nI0502 14:13:40.865322 2583 log.go:172] (0xc0006d8280) (3) Data frame handling\nI0502 14:13:40.865328 2583 log.go:172] (0xc0006d8280) (3) Data frame sent\nI0502 14:13:40.865332 2583 log.go:172] (0xc0006dcc60) Data frame received for 3\nI0502 14:13:40.865335 2583 log.go:172] (0xc0006d8280) (3) Data frame handling\nI0502 14:13:40.866853 2583 log.go:172] (0xc0006dcc60) Data frame received for 1\nI0502 14:13:40.866884 2583 log.go:172] (0xc0006d8b40) (1) Data frame handling\nI0502 
14:13:40.866901 2583 log.go:172] (0xc0006d8b40) (1) Data frame sent\nI0502 14:13:40.866932 2583 log.go:172] (0xc0006dcc60) (0xc0006d8b40) Stream removed, broadcasting: 1\nI0502 14:13:40.866954 2583 log.go:172] (0xc0006dcc60) Go away received\nI0502 14:13:40.867180 2583 log.go:172] (0xc0006dcc60) (0xc0006d8b40) Stream removed, broadcasting: 1\nI0502 14:13:40.867194 2583 log.go:172] (0xc0006dcc60) (0xc0006d8280) Stream removed, broadcasting: 3\nI0502 14:13:40.867199 2583 log.go:172] (0xc0006dcc60) (0xc00001c000) Stream removed, broadcasting: 5\n" May 2 14:13:40.870: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 14:13:40.870: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 14:13:40.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9719 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 14:13:41.082: INFO: stderr: "I0502 14:13:41.002968 2604 log.go:172] (0xc00012adc0) (0xc0005666e0) Create stream\nI0502 14:13:41.003028 2604 log.go:172] (0xc00012adc0) (0xc0005666e0) Stream added, broadcasting: 1\nI0502 14:13:41.007009 2604 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0502 14:13:41.007062 2604 log.go:172] (0xc00012adc0) (0xc000566000) Create stream\nI0502 14:13:41.007078 2604 log.go:172] (0xc00012adc0) (0xc000566000) Stream added, broadcasting: 3\nI0502 14:13:41.007957 2604 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0502 14:13:41.008011 2604 log.go:172] (0xc00012adc0) (0xc0003e6280) Create stream\nI0502 14:13:41.008024 2604 log.go:172] (0xc00012adc0) (0xc0003e6280) Stream added, broadcasting: 5\nI0502 14:13:41.008816 2604 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0502 14:13:41.076077 2604 log.go:172] (0xc00012adc0) Data frame received for 5\nI0502 14:13:41.076131 2604 log.go:172] (0xc0003e6280) (5) Data frame handling\nI0502 14:13:41.076149 2604 log.go:172] (0xc0003e6280) (5) Data frame sent\nI0502 14:13:41.076161 2604 log.go:172] (0xc00012adc0) Data frame received for 5\nI0502 14:13:41.076172 2604 log.go:172] (0xc0003e6280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0502 14:13:41.076218 2604 log.go:172] (0xc00012adc0) Data frame received for 3\nI0502 14:13:41.076251 2604 log.go:172] (0xc000566000) (3) Data frame handling\nI0502 14:13:41.076271 2604 log.go:172] (0xc000566000) (3) Data frame sent\nI0502 14:13:41.076283 2604 log.go:172] (0xc00012adc0) Data frame received for 3\nI0502 14:13:41.076293 2604 log.go:172] (0xc000566000) (3) Data frame handling\nI0502 14:13:41.077679 2604 log.go:172] (0xc00012adc0) Data frame received for 1\nI0502 14:13:41.077712 2604 log.go:172] (0xc0005666e0) (1) Data frame handling\nI0502 14:13:41.077743 2604 log.go:172] (0xc0005666e0) (1) Data frame sent\nI0502 14:13:41.077772 2604 log.go:172] (0xc00012adc0) (0xc0005666e0) Stream removed, broadcasting: 1\nI0502 14:13:41.077988 2604 log.go:172] (0xc00012adc0) Go away received\nI0502 14:13:41.078192 2604 log.go:172] (0xc00012adc0) (0xc0005666e0) Stream removed, broadcasting: 1\nI0502 14:13:41.078218 2604 log.go:172] (0xc00012adc0) (0xc000566000) Stream removed, broadcasting: 3\nI0502 14:13:41.078237 2604 log.go:172] (0xc00012adc0) (0xc0003e6280) Stream removed, broadcasting: 5\n" May 2 14:13:41.082: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 14:13:41.082: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 14:13:41.082: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 2 14:14:01.121: INFO: Deleting all statefulset in ns statefulset-9719 May 2 14:14:01.124: INFO: Scaling statefulset ss to 0 May 2 14:14:01.131: INFO: Waiting for statefulset status.replicas updated to 0 May 2 14:14:01.133: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:14:01.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9719" for this suite. May 2 14:14:07.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:14:07.243: INFO: namespace statefulset-9719 deletion completed in 6.093374909s • [SLOW TEST:88.468 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:14:07.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 14:14:07.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-93362cf1-c4a8-462e-97a3-17e6d60ceaad" in namespace "projected-8056" to be "success or failure" May 2 14:14:07.627: INFO: Pod "downwardapi-volume-93362cf1-c4a8-462e-97a3-17e6d60ceaad": Phase="Pending", Reason="", readiness=false. Elapsed: 264.42968ms May 2 14:14:09.791: INFO: Pod "downwardapi-volume-93362cf1-c4a8-462e-97a3-17e6d60ceaad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.428308417s May 2 14:14:11.795: INFO: Pod "downwardapi-volume-93362cf1-c4a8-462e-97a3-17e6d60ceaad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.432560932s STEP: Saw pod success May 2 14:14:11.795: INFO: Pod "downwardapi-volume-93362cf1-c4a8-462e-97a3-17e6d60ceaad" satisfied condition "success or failure" May 2 14:14:11.798: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-93362cf1-c4a8-462e-97a3-17e6d60ceaad container client-container: STEP: delete the pod May 2 14:14:11.816: INFO: Waiting for pod downwardapi-volume-93362cf1-c4a8-462e-97a3-17e6d60ceaad to disappear May 2 14:14:11.821: INFO: Pod downwardapi-volume-93362cf1-c4a8-462e-97a3-17e6d60ceaad no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:14:11.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8056" for this suite. May 2 14:14:17.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:14:18.047: INFO: namespace projected-8056 deletion completed in 6.222957193s • [SLOW TEST:10.803 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:14:18.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7331 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 2 14:14:18.167: INFO: Found 0 stateful pods, waiting for 3 May 2 14:14:28.173: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 2 14:14:28.173: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 2 14:14:28.173: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 2 14:14:38.172: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 2 14:14:38.173: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 2 14:14:38.173: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to 
docker.io/library/nginx:1.15-alpine May 2 14:14:38.200: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 2 14:14:48.254: INFO: Updating stateful set ss2 May 2 14:14:48.319: INFO: Waiting for Pod statefulset-7331/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 2 14:14:58.525: INFO: Found 2 stateful pods, waiting for 3 May 2 14:15:08.530: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 2 14:15:08.530: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 2 14:15:08.530: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 2 14:15:08.555: INFO: Updating stateful set ss2 May 2 14:15:08.573: INFO: Waiting for Pod statefulset-7331/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 2 14:15:18.582: INFO: Waiting for Pod statefulset-7331/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 2 14:15:28.599: INFO: Updating stateful set ss2 May 2 14:15:28.613: INFO: Waiting for StatefulSet statefulset-7331/ss2 to complete update May 2 14:15:28.613: INFO: Waiting for Pod statefulset-7331/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 2 14:15:38.621: INFO: Deleting all statefulset in ns statefulset-7331 May 2 14:15:38.624: INFO: Scaling statefulset ss2 to 0 May 2 14:15:58.637: INFO: Waiting for statefulset status.replicas updated to 0 May 2 14:15:58.640: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:15:58.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7331" for this suite. 
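Note on the spec above: the canary and phased updates are driven by the RollingUpdate partition. Pods with an ordinal greater than or equal to the partition move to the new revision, the rest stay on the old one, and a deleted pod is recreated at whichever revision its ordinal falls under — which is why ss2-2 updates first and the lower ordinals follow only as the partition drops. The same mechanics with kubectl (namespace and names mirror the log; the container name and patch values are illustrative):

kubectl -n statefulset-7331 patch statefulset/ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
kubectl -n statefulset-7331 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine   # partition 3 == replicas: nothing updates yet
kubectl -n statefulset-7331 patch statefulset/ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'   # canary: only ss2-2 moves to the new revision
kubectl -n statefulset-7331 patch statefulset/ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'   # phased: ss2-1 then ss2-0 follow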
May 2 14:16:04.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:16:04.785: INFO: namespace statefulset-7331 deletion completed in 6.108710351s • [SLOW TEST:106.737 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:16:04.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:16:08.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1090" for this suite. May 2 14:16:15.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:16:15.092: INFO: namespace emptydir-wrapper-1090 deletion completed in 6.104984426s • [SLOW TEST:10.306 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:16:15.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 2 14:16:15.184: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:16:15.190: INFO: Number of nodes with available pods: 0 May 2 14:16:15.190: INFO: Node iruya-worker is running more than one daemon pod May 2 14:16:16.196: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:16:16.200: INFO: Number of nodes with available pods: 0 May 2 14:16:16.200: INFO: Node iruya-worker is running more than one daemon pod May 2 14:16:17.356: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:16:17.358: INFO: Number of nodes with available pods: 0 May 2 14:16:17.358: INFO: Node iruya-worker is running more than one daemon pod May 2 14:16:18.272: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:16:18.276: INFO: Number of nodes with available pods: 0 May 2 14:16:18.276: INFO: Node iruya-worker is running more than one daemon pod May 2 14:16:19.195: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:16:19.199: INFO: Number of nodes with available pods: 1 May 2 14:16:19.199: INFO: Node iruya-worker2 is running more than one daemon pod May 2 14:16:20.195: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:16:20.199: INFO: Number of nodes with available pods: 2 May 2 14:16:20.199: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 2 14:16:20.215: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 14:16:20.221: INFO: Number of nodes with available pods: 2 May 2 14:16:20.221: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
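
Note: the repeated "DaemonSet pods can't tolerate node iruya-control-plane with taints" lines above are expected, not an error: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint and the test DaemonSet does not tolerate it, so only the two worker nodes are counted. A DaemonSet that should also land on such nodes would add a toleration; a sketch (names and image are hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata: {name: daemon-set-demo}
  spec:
    selector:
      matchLabels: {app: daemon-set-demo}
    template:
      metadata:
        labels: {app: daemon-set-demo}
      spec:
        tolerations:
        - key: node-role.kubernetes.io/master   # tolerate the control-plane taint
          operator: Exists
          effect: NoSchedule
        containers:
        - name: app
          image: docker.io/library/nginx:1.14-alpine
  EOF
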
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8178, will wait for the garbage collector to delete the pods May 2 14:16:21.345: INFO: Deleting DaemonSet.extensions daemon-set took: 7.2573ms May 2 14:16:21.645: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.238503ms May 2 14:16:31.966: INFO: Number of nodes with available pods: 0 May 2 14:16:31.966: INFO: Number of running nodes: 0, number of available pods: 0 May 2 14:16:31.968: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8178/daemonsets","resourceVersion":"8636486"},"items":null} May 2 14:16:31.971: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8178/pods","resourceVersion":"8636486"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:16:31.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8178" for this suite. May 2 14:16:38.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:16:38.077: INFO: namespace daemonsets-8178 deletion completed in 6.0936018s • [SLOW TEST:22.985 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:16:38.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-2527/configmap-test-53081f5d-e51f-4016-ac4e-aa6ca4050dc8 STEP: Creating a pod to test consume configMaps May 2 14:16:38.161: INFO: Waiting up to 5m0s for pod "pod-configmaps-48221502-322c-4b7d-9f96-b9cefca4ba4d" in namespace "configmap-2527" to be "success or failure" May 2 14:16:38.165: INFO: Pod "pod-configmaps-48221502-322c-4b7d-9f96-b9cefca4ba4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.388746ms May 2 14:16:40.169: INFO: Pod "pod-configmaps-48221502-322c-4b7d-9f96-b9cefca4ba4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008008103s May 2 14:16:42.173: INFO: Pod "pod-configmaps-48221502-322c-4b7d-9f96-b9cefca4ba4d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011992038s STEP: Saw pod success May 2 14:16:42.173: INFO: Pod "pod-configmaps-48221502-322c-4b7d-9f96-b9cefca4ba4d" satisfied condition "success or failure" May 2 14:16:42.176: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-48221502-322c-4b7d-9f96-b9cefca4ba4d container env-test: STEP: delete the pod May 2 14:16:42.326: INFO: Waiting for pod pod-configmaps-48221502-322c-4b7d-9f96-b9cefca4ba4d to disappear May 2 14:16:42.329: INFO: Pod pod-configmaps-48221502-322c-4b7d-9f96-b9cefca4ba4d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:16:42.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2527" for this suite. May 2 14:16:48.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:16:48.559: INFO: namespace configmap-2527 deletion completed in 6.22599453s • [SLOW TEST:10.481 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:16:48.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:16:48.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4633" for this suite. 
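
Note: the ConfigMap spec above injects a key into the container environment through env.valueFrom.configMapKeyRef and checks the result in the pod's logs. A self-contained sketch of the same mechanism (all names and the key/value are hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata: {name: configmap-test}
  data: {data-1: value-1}
  ---
  apiVersion: v1
  kind: Pod
  metadata: {name: pod-configmaps-demo}
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "env"]    # CONFIG_DATA_1=value-1 should appear in the output
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef: {name: configmap-test, key: data-1}
  EOF
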
May 2 14:16:54.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:16:54.805: INFO: namespace kubelet-test-4633 deletion completed in 6.085801755s • [SLOW TEST:6.246 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:16:54.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info May 2 14:16:54.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 2 14:16:54.964: INFO: stderr: "" May 2 14:16:54.964: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:16:54.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2913" for this suite. 
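
Note: the \x1b[...m sequences in the stdout above are ANSI color codes emitted by kubectl cluster-info; the spec only asserts that the master and KubeDNS endpoints are listed. The same check, and the fuller diagnostic dump the output hints at, by hand (output directory is arbitrary):

  kubectl cluster-info      # expect "Kubernetes master is running at https://<host>:<port>" plus KubeDNS
  kubectl cluster-info dump --output-directory=/tmp/cluster-state
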
May 2 14:17:00.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:17:01.064: INFO: namespace kubectl-2913 deletion completed in 6.09748858s • [SLOW TEST:6.259 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:17:01.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 2 14:17:09.230: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 14:17:09.246: INFO: Pod pod-with-prestop-exec-hook still exists May 2 14:17:11.247: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 14:17:11.251: INFO: Pod pod-with-prestop-exec-hook still exists May 2 14:17:13.247: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 14:17:13.251: INFO: Pod pod-with-prestop-exec-hook still exists May 2 14:17:15.247: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 14:17:15.250: INFO: Pod pod-with-prestop-exec-hook still exists May 2 14:17:17.247: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 14:17:17.250: INFO: Pod pod-with-prestop-exec-hook still exists May 2 14:17:19.247: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 14:17:19.251: INFO: Pod pod-with-prestop-exec-hook still exists May 2 14:17:21.247: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 14:17:21.252: INFO: Pod pod-with-prestop-exec-hook still exists May 2 14:17:23.247: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 14:17:23.250: INFO: Pod pod-with-prestop-exec-hook still exists May 2 14:17:25.247: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 14:17:25.251: INFO: Pod pod-with-prestop-exec-hook still exists May 2 14:17:27.247: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 14:17:27.251: INFO: Pod pod-with-prestop-exec-hook still exists May 2 14:17:29.247: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 14:17:29.251: INFO: Pod pod-with-prestop-exec-hook still exists May 2 14:17:31.247: INFO: 
Waiting for pod pod-with-prestop-exec-hook to disappear May 2 14:17:31.251: INFO: Pod pod-with-prestop-exec-hook still exists May 2 14:17:33.247: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 14:17:33.250: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:17:33.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5531" for this suite. May 2 14:17:55.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:17:55.356: INFO: namespace container-lifecycle-hook-5531 deletion completed in 22.09635247s • [SLOW TEST:54.291 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:17:55.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-86e74855-d747-4e40-90b5-e2d6a1238101 May 2 14:17:55.432: INFO: Pod name my-hostname-basic-86e74855-d747-4e40-90b5-e2d6a1238101: Found 0 pods out of 1 May 2 14:18:00.437: INFO: Pod name my-hostname-basic-86e74855-d747-4e40-90b5-e2d6a1238101: Found 1 pods out of 1 May 2 14:18:00.437: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-86e74855-d747-4e40-90b5-e2d6a1238101" are running May 2 14:18:00.440: INFO: Pod "my-hostname-basic-86e74855-d747-4e40-90b5-e2d6a1238101-pzd4g" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 14:17:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 14:17:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 14:17:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 14:17:55 +0000 UTC Reason: Message:}]) May 2 14:18:00.440: INFO: Trying to dial the pod May 2 14:18:05.454: INFO: Controller my-hostname-basic-86e74855-d747-4e40-90b5-e2d6a1238101: Got expected result from replica 1 [my-hostname-basic-86e74855-d747-4e40-90b5-e2d6a1238101-pzd4g]: 
"my-hostname-basic-86e74855-d747-4e40-90b5-e2d6a1238101-pzd4g", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:18:05.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1935" for this suite. May 2 14:18:11.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:18:11.555: INFO: namespace replication-controller-1935 deletion completed in 6.097210321s • [SLOW TEST:16.199 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:18:11.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9885, will wait for the garbage collector to delete the pods May 2 14:18:17.696: INFO: Deleting Job.batch foo took: 6.642712ms May 2 14:18:17.796: INFO: Terminating Job.batch foo pods took: 100.275992ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:18:52.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9885" for this suite. 
May 2 14:18:58.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:18:58.419: INFO: namespace job-9885 deletion completed in 6.097031518s • [SLOW TEST:46.864 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:18:58.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 2 14:18:58.521: INFO: Waiting up to 5m0s for pod "downward-api-e8422265-a287-4771-935d-db95b9a4f9fc" in namespace "downward-api-49" to be "success or failure" May 2 14:18:58.524: INFO: Pod "downward-api-e8422265-a287-4771-935d-db95b9a4f9fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.093117ms May 2 14:19:00.530: INFO: Pod "downward-api-e8422265-a287-4771-935d-db95b9a4f9fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008342887s May 2 14:19:02.534: INFO: Pod "downward-api-e8422265-a287-4771-935d-db95b9a4f9fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013132914s STEP: Saw pod success May 2 14:19:02.534: INFO: Pod "downward-api-e8422265-a287-4771-935d-db95b9a4f9fc" satisfied condition "success or failure" May 2 14:19:02.538: INFO: Trying to get logs from node iruya-worker pod downward-api-e8422265-a287-4771-935d-db95b9a4f9fc container dapi-container: STEP: delete the pod May 2 14:19:02.565: INFO: Waiting for pod downward-api-e8422265-a287-4771-935d-db95b9a4f9fc to disappear May 2 14:19:02.569: INFO: Pod downward-api-e8422265-a287-4771-935d-db95b9a4f9fc no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:19:02.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-49" for this suite. 
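
Note: the Downward API spec above exposes pod metadata to its own container as environment variables via fieldRef. A sketch of the pod shape being exercised (pod and variable names are hypothetical; the fieldPath values are the standard ones):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata: {name: downward-api-demo}
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
      env:
      - name: POD_NAME
        valueFrom: {fieldRef: {fieldPath: metadata.name}}
      - name: POD_NAMESPACE
        valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
      - name: POD_IP
        valueFrom: {fieldRef: {fieldPath: status.podIP}}
  EOF
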
May 2 14:19:08.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:19:08.707: INFO: namespace downward-api-49 deletion completed in 6.134365425s • [SLOW TEST:10.287 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:19:08.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 14:19:08.790: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21005219-50bf-49c3-8f4d-3dd8a43a35ba" in namespace "downward-api-3678" to be "success or failure" May 2 14:19:08.802: INFO: Pod "downwardapi-volume-21005219-50bf-49c3-8f4d-3dd8a43a35ba": Phase="Pending", Reason="", readiness=false. Elapsed: 11.812421ms May 2 14:19:10.807: INFO: Pod "downwardapi-volume-21005219-50bf-49c3-8f4d-3dd8a43a35ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016356706s May 2 14:19:12.812: INFO: Pod "downwardapi-volume-21005219-50bf-49c3-8f4d-3dd8a43a35ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021249546s STEP: Saw pod success May 2 14:19:12.812: INFO: Pod "downwardapi-volume-21005219-50bf-49c3-8f4d-3dd8a43a35ba" satisfied condition "success or failure" May 2 14:19:12.815: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-21005219-50bf-49c3-8f4d-3dd8a43a35ba container client-container: STEP: delete the pod May 2 14:19:12.857: INFO: Waiting for pod downwardapi-volume-21005219-50bf-49c3-8f4d-3dd8a43a35ba to disappear May 2 14:19:12.887: INFO: Pod downwardapi-volume-21005219-50bf-49c3-8f4d-3dd8a43a35ba no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:19:12.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3678" for this suite. 
May 2 14:19:18.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:19:18.999: INFO: namespace downward-api-3678 deletion completed in 6.108259264s • [SLOW TEST:10.292 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:19:18.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 2 14:19:23.128: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:19:23.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7843" for this suite. 
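
Note: with TerminationMessagePolicy FallbackToLogsOnError, container logs are copied into the termination message only when the container fails; a clean exit that writes nothing to the termination path yields the empty message ("Expected: &{} to match ...") asserted above. A sketch (names are hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata: {name: termination-demo}
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["true"]                 # succeeds and writes no termination message
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  # After the pod exits, the terminated message should be empty:
  kubectl get pod termination-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
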
May 2 14:19:29.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:19:29.248: INFO: namespace container-runtime-7843 deletion completed in 6.085952563s • [SLOW TEST:10.249 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:19:29.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions May 2 14:19:29.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 2 14:19:29.536: INFO: stderr: "" May 2 14:19:29.536: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:19:29.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4759" for this suite. 
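
Note: the stdout above lists every group/version the server advertises, with the core group reported as the bare "v1" the spec looks for. The same check from a shell, plus the companion per-group resource view:

  kubectl api-versions | grep -x v1     # exact-match the core group
  kubectl api-resources --api-group=apps
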
May 2 14:19:35.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:19:35.641: INFO: namespace kubectl-4759 deletion completed in 6.101008856s • [SLOW TEST:6.393 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:19:35.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 14:19:35.849: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1665e51e-4aeb-40e2-ace7-470c97cb21ac", Controller:(*bool)(0xc001c43262), BlockOwnerDeletion:(*bool)(0xc001c43263)}} May 2 14:19:35.922: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ff6486cf-1b5a-46a5-b9a2-3fa88a674962", Controller:(*bool)(0xc002cc22a2), BlockOwnerDeletion:(*bool)(0xc002cc22a3)}} May 2 14:19:35.932: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"decd402e-cd93-481d-bd0a-706fbf66936c", Controller:(*bool)(0xc002cc243a), BlockOwnerDeletion:(*bool)(0xc002cc243b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:19:40.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9166" for this suite. 
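
Note: the garbage-collector spec above deliberately wires a cycle of controller ownerReferences (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, matching the UIDs logged) and asserts the collector still makes progress instead of deadlocking. While the pods existed, the cycle could be inspected like this:

  # Each pod names the next as its controller owner (namespace from this run):
  kubectl -n gc-9166 get pod pod1 -o jsonpath='{.metadata.ownerReferences[0].name}'   # -> pod3
  kubectl -n gc-9166 get pod pod2 -o jsonpath='{.metadata.ownerReferences[0].name}'   # -> pod1
  kubectl -n gc-9166 get pod pod3 -o jsonpath='{.metadata.ownerReferences[0].name}'   # -> pod2
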
May 2 14:19:46.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:19:47.070: INFO: namespace gc-9166 deletion completed in 6.092968294s • [SLOW TEST:11.429 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:19:47.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-44be8d36-2d18-402f-b38f-21eb6dbcebe2 STEP: Creating a pod to test consume secrets May 2 14:19:47.165: INFO: Waiting up to 5m0s for pod "pod-secrets-daf41596-0910-4863-9ccd-292ab379c63c" in namespace "secrets-4641" to be "success or failure" May 2 14:19:47.186: INFO: Pod "pod-secrets-daf41596-0910-4863-9ccd-292ab379c63c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.727756ms May 2 14:19:49.190: INFO: Pod "pod-secrets-daf41596-0910-4863-9ccd-292ab379c63c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025032073s May 2 14:19:51.194: INFO: Pod "pod-secrets-daf41596-0910-4863-9ccd-292ab379c63c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029320347s STEP: Saw pod success May 2 14:19:51.195: INFO: Pod "pod-secrets-daf41596-0910-4863-9ccd-292ab379c63c" satisfied condition "success or failure" May 2 14:19:51.198: INFO: Trying to get logs from node iruya-worker pod pod-secrets-daf41596-0910-4863-9ccd-292ab379c63c container secret-volume-test: STEP: delete the pod May 2 14:19:51.236: INFO: Waiting for pod pod-secrets-daf41596-0910-4863-9ccd-292ab379c63c to disappear May 2 14:19:51.262: INFO: Pod pod-secrets-daf41596-0910-4863-9ccd-292ab379c63c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:19:51.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4641" for this suite. 
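
Note: "with mappings" in the Secrets spec above refers to the items list of a secret volume, which remounts a chosen key under a chosen path instead of using the key name. A sketch (all names, the key, and the path are hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata: {name: secret-test-map}
  stringData: {data-1: value-1}
  ---
  apiVersion: v1
  kind: Pod
  metadata: {name: pod-secrets-demo}
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - {name: secret-volume, mountPath: /etc/secret-volume, readOnly: true}
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map
        items:
        - {key: data-1, path: new-path-data-1}   # the mapping: key projected under a chosen path
  EOF
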
May 2 14:19:57.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:19:57.392: INFO: namespace secrets-4641 deletion completed in 6.126428822s • [SLOW TEST:10.321 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:19:57.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-l2d6 STEP: Creating a pod to test atomic-volume-subpath May 2 14:19:57.503: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-l2d6" in namespace "subpath-9639" to be "success or failure" May 2 14:19:57.520: INFO: Pod "pod-subpath-test-configmap-l2d6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.585449ms May 2 14:19:59.598: INFO: Pod "pod-subpath-test-configmap-l2d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094905057s May 2 14:20:01.603: INFO: Pod "pod-subpath-test-configmap-l2d6": Phase="Running", Reason="", readiness=true. Elapsed: 4.099644566s May 2 14:20:03.606: INFO: Pod "pod-subpath-test-configmap-l2d6": Phase="Running", Reason="", readiness=true. Elapsed: 6.103066681s May 2 14:20:05.611: INFO: Pod "pod-subpath-test-configmap-l2d6": Phase="Running", Reason="", readiness=true. Elapsed: 8.107586692s May 2 14:20:07.615: INFO: Pod "pod-subpath-test-configmap-l2d6": Phase="Running", Reason="", readiness=true. Elapsed: 10.11190054s May 2 14:20:09.620: INFO: Pod "pod-subpath-test-configmap-l2d6": Phase="Running", Reason="", readiness=true. Elapsed: 12.116439308s May 2 14:20:11.624: INFO: Pod "pod-subpath-test-configmap-l2d6": Phase="Running", Reason="", readiness=true. Elapsed: 14.121087277s May 2 14:20:13.629: INFO: Pod "pod-subpath-test-configmap-l2d6": Phase="Running", Reason="", readiness=true. Elapsed: 16.125643211s May 2 14:20:15.633: INFO: Pod "pod-subpath-test-configmap-l2d6": Phase="Running", Reason="", readiness=true. Elapsed: 18.13007384s May 2 14:20:17.637: INFO: Pod "pod-subpath-test-configmap-l2d6": Phase="Running", Reason="", readiness=true. Elapsed: 20.13389642s May 2 14:20:19.641: INFO: Pod "pod-subpath-test-configmap-l2d6": Phase="Running", Reason="", readiness=true. Elapsed: 22.137503145s May 2 14:20:21.658: INFO: Pod "pod-subpath-test-configmap-l2d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.155008212s STEP: Saw pod success May 2 14:20:21.658: INFO: Pod "pod-subpath-test-configmap-l2d6" satisfied condition "success or failure" May 2 14:20:21.755: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-l2d6 container test-container-subpath-configmap-l2d6: STEP: delete the pod May 2 14:20:21.822: INFO: Waiting for pod pod-subpath-test-configmap-l2d6 to disappear May 2 14:20:21.939: INFO: Pod pod-subpath-test-configmap-l2d6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-l2d6 May 2 14:20:21.939: INFO: Deleting pod "pod-subpath-test-configmap-l2d6" in namespace "subpath-9639" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:20:21.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9639" for this suite. May 2 14:20:27.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:20:28.033: INFO: namespace subpath-9639 deletion completed in 6.086951405s • [SLOW TEST:30.641 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:20:28.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 2 14:20:34.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-b28e8a94-3c62-4502-b5e7-df083f35dba6 -c busybox-main-container --namespace=emptydir-1387 -- cat /usr/share/volumeshare/shareddata.txt' May 2 14:20:36.832: INFO: stderr: "I0502 14:20:36.737918 2661 log.go:172] (0xc000180420) (0xc0005dcbe0) Create stream\nI0502 14:20:36.737959 2661 log.go:172] (0xc000180420) (0xc0005dcbe0) Stream added, broadcasting: 1\nI0502 14:20:36.740571 2661 log.go:172] (0xc000180420) Reply frame received for 1\nI0502 14:20:36.740618 2661 log.go:172] (0xc000180420) (0xc000cc2000) Create stream\nI0502 14:20:36.740632 2661 log.go:172] (0xc000180420) (0xc000cc2000) Stream added, broadcasting: 3\nI0502 14:20:36.742389 2661 log.go:172] (0xc000180420) Reply frame received for 3\nI0502 14:20:36.742453 2661 log.go:172] (0xc000180420) (0xc00067c000) Create stream\nI0502 14:20:36.742472 2661 log.go:172] (0xc000180420) (0xc00067c000) Stream added, broadcasting: 5\nI0502 14:20:36.743587 2661 
log.go:172] (0xc000180420) Reply frame received for 5\nI0502 14:20:36.823305 2661 log.go:172] (0xc000180420) Data frame received for 5\nI0502 14:20:36.823366 2661 log.go:172] (0xc00067c000) (5) Data frame handling\nI0502 14:20:36.823404 2661 log.go:172] (0xc000180420) Data frame received for 3\nI0502 14:20:36.823423 2661 log.go:172] (0xc000cc2000) (3) Data frame handling\nI0502 14:20:36.823441 2661 log.go:172] (0xc000cc2000) (3) Data frame sent\nI0502 14:20:36.823460 2661 log.go:172] (0xc000180420) Data frame received for 3\nI0502 14:20:36.823482 2661 log.go:172] (0xc000cc2000) (3) Data frame handling\nI0502 14:20:36.825488 2661 log.go:172] (0xc000180420) Data frame received for 1\nI0502 14:20:36.825527 2661 log.go:172] (0xc0005dcbe0) (1) Data frame handling\nI0502 14:20:36.825549 2661 log.go:172] (0xc0005dcbe0) (1) Data frame sent\nI0502 14:20:36.825575 2661 log.go:172] (0xc000180420) (0xc0005dcbe0) Stream removed, broadcasting: 1\nI0502 14:20:36.825601 2661 log.go:172] (0xc000180420) Go away received\nI0502 14:20:36.826033 2661 log.go:172] (0xc000180420) (0xc0005dcbe0) Stream removed, broadcasting: 1\nI0502 14:20:36.826061 2661 log.go:172] (0xc000180420) (0xc000cc2000) Stream removed, broadcasting: 3\nI0502 14:20:36.826081 2661 log.go:172] (0xc000180420) (0xc00067c000) Stream removed, broadcasting: 5\n" May 2 14:20:36.832: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:20:36.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1387" for this suite. May 2 14:20:42.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:20:42.939: INFO: namespace emptydir-1387 deletion completed in 6.102715953s • [SLOW TEST:14.906 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:20:42.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 2 14:20:47.613: INFO: Successfully updated pod "annotationupdated50e823c-9ecf-4de6-beaf-30ab722b9598" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:20:51.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
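
Note: the exec output just above ("Hello from the busy-box sub-container") comes from the EmptyDir spec, where one container reads a file that a sibling container wrote into a shared emptyDir mounted in both. The shape of that pod, sketched with the paths and container name seen in this run (everything else is an assumption):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata: {name: pod-sharedvolume-demo}
  spec:
    volumes:
    - {name: shared-data, emptyDir: {}}
    containers:
    - name: busybox-main-container
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - {name: shared-data, mountPath: /usr/share/volumeshare}
    - name: busybox-sub-container
      image: busybox
      command: ["sh", "-c",
        "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
      volumeMounts:
      - {name: shared-data, mountPath: /usr/share/volumeshare}
  EOF
  kubectl exec pod-sharedvolume-demo -c busybox-main-container -- \
    cat /usr/share/volumeshare/shareddata.txt
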
STEP: Destroying namespace "downward-api-3475" for this suite. May 2 14:21:13.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:21:13.829: INFO: namespace downward-api-3475 deletion completed in 22.123348795s • [SLOW TEST:30.889 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:21:13.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 2 14:21:13.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8667' May 2 14:21:13.999: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 2 14:21:13.999: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 2 14:21:14.011: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 2 14:21:14.026: INFO: scanned /root for discovery docs: May 2 14:21:14.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8667' May 2 14:21:29.868: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 2 14:21:29.868: INFO: stdout: "Created e2e-test-nginx-rc-db1def138d1006a558886c0c0e5842ba\nScaling up e2e-test-nginx-rc-db1def138d1006a558886c0c0e5842ba from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-db1def138d1006a558886c0c0e5842ba up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-db1def138d1006a558886c0c0e5842ba to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" May 2 14:21:29.868: INFO: stdout: "Created e2e-test-nginx-rc-db1def138d1006a558886c0c0e5842ba\nScaling up e2e-test-nginx-rc-db1def138d1006a558886c0c0e5842ba from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-db1def138d1006a558886c0c0e5842ba up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-db1def138d1006a558886c0c0e5842ba to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 2 14:21:29.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8667' May 2 14:21:29.966: INFO: stderr: "" May 2 14:21:29.966: INFO: stdout: "e2e-test-nginx-rc-db1def138d1006a558886c0c0e5842ba-fxf9x " May 2 14:21:29.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-db1def138d1006a558886c0c0e5842ba-fxf9x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8667' May 2 14:21:30.067: INFO: stderr: "" May 2 14:21:30.068: INFO: stdout: "true" May 2 14:21:30.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-db1def138d1006a558886c0c0e5842ba-fxf9x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8667' May 2 14:21:30.157: INFO: stderr: "" May 2 14:21:30.157: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 2 14:21:30.157: INFO: e2e-test-nginx-rc-db1def138d1006a558886c0c0e5842ba-fxf9x is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 May 2 14:21:30.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8667' May 2 14:21:30.275: INFO: stderr: "" May 2 14:21:30.275: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:21:30.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8667" for this suite. 
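
Note: both deprecation warnings above are accurate for this release, and rolling-update plus the run/v1 generator were removed in later kubectl versions. The modern equivalent of a same-image roll is a Deployment restart; a sketch (deployment name is hypothetical, and rollout restart is available from kubectl 1.15 onward):

  kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
  kubectl rollout restart deployment/e2e-test-nginx   # recreate pods without changing the image
  kubectl rollout status deployment/e2e-test-nginx
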
May 2 14:21:36.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:21:36.472: INFO: namespace kubectl-8667 deletion completed in 6.135144412s • [SLOW TEST:22.643 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:21:36.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 2 14:21:41.093: INFO: Successfully updated pod "annotationupdate8b03d6fe-9e56-4bc8-a2be-ebdbd9749c37" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:21:43.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3107" for this suite. 
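
Note: annotations (like labels) are exposed through the downward API only as volume files, not as environment variables, which is why the projected downwardAPI spec above watches a mounted file for the update rather than an env var. A sketch of the pod and the update that triggers re-projection (names are hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: annotationupdate-demo
    annotations: {build: "1"}
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - {name: podinfo, mountPath: /etc/podinfo}
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: annotations
              fieldRef: {fieldPath: metadata.annotations}
  EOF
  # The mounted file reflects the change after the kubelet's next sync:
  kubectl annotate pod annotationupdate-demo build=2 --overwrite
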
May 2 14:22:05.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:22:05.255: INFO: namespace projected-3107 deletion completed in 22.095246404s • [SLOW TEST:28.782 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:22:05.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 14:22:05.389: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9cf01dd1-e59c-47b7-b44a-4d80b8f20830" in namespace "downward-api-570" to be "success or failure" May 2 14:22:05.393: INFO: Pod "downwardapi-volume-9cf01dd1-e59c-47b7-b44a-4d80b8f20830": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092319ms May 2 14:22:07.397: INFO: Pod "downwardapi-volume-9cf01dd1-e59c-47b7-b44a-4d80b8f20830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007729484s May 2 14:22:09.421: INFO: Pod "downwardapi-volume-9cf01dd1-e59c-47b7-b44a-4d80b8f20830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032697894s STEP: Saw pod success May 2 14:22:09.422: INFO: Pod "downwardapi-volume-9cf01dd1-e59c-47b7-b44a-4d80b8f20830" satisfied condition "success or failure" May 2 14:22:09.426: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9cf01dd1-e59c-47b7-b44a-4d80b8f20830 container client-container: STEP: delete the pod May 2 14:22:09.455: INFO: Waiting for pod downwardapi-volume-9cf01dd1-e59c-47b7-b44a-4d80b8f20830 to disappear May 2 14:22:09.471: INFO: Pod downwardapi-volume-9cf01dd1-e59c-47b7-b44a-4d80b8f20830 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:22:09.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-570" for this suite. 
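
Note: "should provide container's cpu request" uses a resourceFieldRef item in a downwardAPI volume; the divisor scales the value written to the file, so a 250m request with a 1m divisor projects as the integer 250. A sketch (names are hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata: {name: downwardapi-cpu-demo}
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]   # prints 250
      resources:
        requests: {cpu: 250m}
      volumeMounts:
      - {name: podinfo, mountPath: /etc/podinfo}
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
            divisor: 1m       # 250m / 1m = 250
  EOF
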
May 2 14:22:15.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:22:15.567: INFO: namespace downward-api-570 deletion completed in 6.09297229s • [SLOW TEST:10.312 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:22:15.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-dda1c0f4-3419-4e7d-8949-f8f7a2a55aa0 STEP: Creating secret with name s-test-opt-upd-3f33d1e2-b013-41c1-934a-3bd4844110a7 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-dda1c0f4-3419-4e7d-8949-f8f7a2a55aa0 STEP: Updating secret s-test-opt-upd-3f33d1e2-b013-41c1-934a-3bd4844110a7 STEP: Creating secret with name s-test-opt-create-0f4139e0-ff3b-4ff8-8a13-3e56ffb174da STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:23:48.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3414" for this suite. 
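The three secret names in this test correspond to three cases: one secret deleted after the pod starts, one updated in place, and one created only later. Marking the projected sources optional is what lets the pod run through all three. A sketch of that volume shape (illustrative names):

$ kubectl create secret generic s-test-opt-del --from-literal=data-1=value-1
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo   # hypothetical name
spec:
  containers:
  - name: creates-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del      # will be deleted; optional keeps the pod healthy
          optional: true
      - secret:
          name: s-test-opt-create   # does not exist yet at pod creation
          optional: true
EOF
$ kubectl delete secret s-test-opt-del
$ kubectl create secret generic s-test-opt-create --from-literal=data-1=value-1
# the kubelet removes the first secret's files and materializes the new one
# in the mounted volume without restarting the pod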
May 2 14:24:10.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:24:10.504: INFO: namespace projected-3414 deletion completed in 22.115949987s • [SLOW TEST:114.936 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:24:10.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 2 14:24:10.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-797' May 2 14:24:10.661: INFO: stderr: "" May 2 14:24:10.661: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 2 14:24:15.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-797 -o json' May 2 14:24:15.810: INFO: stderr: "" May 2 14:24:15.810: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-02T14:24:10Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-797\",\n \"resourceVersion\": \"8637999\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-797/pods/e2e-test-nginx-pod\",\n \"uid\": \"cf35ca87-8ce5-41cf-9550-046ea98564f3\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-lvcgp\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 
30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-lvcgp\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-lvcgp\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-02T14:24:10Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-02T14:24:13Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-02T14:24:13Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-02T14:24:10Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://e19e5c4d1dfd1f275683889efb421711604cc67284eaf01c9c6eeae7561e0b8a\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-02T14:24:13Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.183\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-02T14:24:10Z\"\n }\n}\n" STEP: replace the image in the pod May 2 14:24:15.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-797' May 2 14:24:16.158: INFO: stderr: "" May 2 14:24:16.158: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 May 2 14:24:16.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-797' May 2 14:24:21.881: INFO: stderr: "" May 2 14:24:21.881: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:24:21.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-797" for this suite. 
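The replace step above fetches the pod as JSON, swaps the image, and pipes the result back. An equivalent sequence with a current kubectl (the --generator flag seen in the log has since been removed, and plain `kubectl run` now creates a bare pod; the sed substitution is only a stand-in for however the test rewrites the image field):

$ kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine \
    --labels=run=e2e-test-nginx-pod
$ kubectl get pod e2e-test-nginx-pod -o json \
    | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
    | kubectl replace -f -
$ kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'
# expect: docker.io/library/busybox:1.29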
May 2 14:24:27.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:24:28.006: INFO: namespace kubectl-797 deletion completed in 6.092180845s • [SLOW TEST:17.502 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:24:28.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 2 14:24:28.134: INFO: Pod name pod-release: Found 0 pods out of 1 May 2 14:24:33.139: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:24:34.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6466" for this suite. 
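"Released" here means the ReplicationController drops its ownerReference from a pod once the pod's labels stop matching the selector, then creates a replacement to restore the replica count. A sketch under those assumptions (names and image are illustrative):

$ kubectl create -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
$ POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
$ kubectl label pod "$POD" name=not-matching --overwrite
$ kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences}'
# expect empty output: the orphaned pod keeps running but is no longer owned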
May 2 14:24:40.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:24:40.314: INFO: namespace replication-controller-6466 deletion completed in 6.128178224s • [SLOW TEST:12.308 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:24:40.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:25:11.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2252" for this suite. 
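The rpa/rpof/rpn suffixes appear to encode the restart policy under test (Always, OnFailure, Never); for each, the container exits with a chosen code and the test asserts the resulting RestartCount, Phase, Ready condition, and State. The Never case can be reproduced directly (illustrative names):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]
EOF
$ kubectl get pod terminate-cmd-demo \
    -o jsonpath='{.status.phase} {.status.containerStatuses[0].state.terminated.exitCode}'
# expect: Failed 1   (with restartPolicy Never, a non-zero exit fails the pod
# and RestartCount stays at 0)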
May 2 14:25:17.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:25:17.362: INFO: namespace container-runtime-2252 deletion completed in 6.09766983s • [SLOW TEST:37.048 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:25:17.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 2 14:25:24.325: INFO: 10 pods remaining May 2 14:25:24.325: INFO: 10 pods has nil DeletionTimestamp May 2 14:25:24.325: INFO: May 2 14:25:25.942: INFO: 9 pods remaining May 2 14:25:25.942: INFO: 0 pods has nil DeletionTimestamp May 2 14:25:25.942: INFO: May 2 14:25:27.246: INFO: 0 pods remaining May 2 14:25:27.246: INFO: 0 pods has nil DeletionTimestamp May 2 14:25:27.246: INFO: STEP: Gathering metrics W0502 14:25:28.296204 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 2 14:25:28.296: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:25:28.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6643" for this suite. 
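"If the deleteOptions says so" refers to propagationPolicy: Foreground: the API server parks a foregroundDeletion finalizer on the RC, and the garbage collector only removes the RC itself after every owned pod is gone, which is the countdown visible above (10 → 9 → 0 pods remaining). A sketch against the raw API through kubectl proxy (the RC name my-rc is illustrative):

$ kubectl proxy --port=8001 &
$ curl -X DELETE http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
$ kubectl get rc my-rc -o jsonpath='{.metadata.finalizers}'
# expect ["foregroundDeletion"] while the pods drain; the RC disappears afterwards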
May 2 14:25:34.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:25:34.532: INFO: namespace gc-6643 deletion completed in 6.188447902s • [SLOW TEST:17.169 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:25:34.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 2 14:25:34.631: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d70d9ba5-e561-43f8-9f14-067759ca03bb" in namespace "downward-api-8099" to be "success or failure" May 2 14:25:34.659: INFO: Pod "downwardapi-volume-d70d9ba5-e561-43f8-9f14-067759ca03bb": Phase="Pending", Reason="", readiness=false. Elapsed: 27.547176ms May 2 14:25:36.664: INFO: Pod "downwardapi-volume-d70d9ba5-e561-43f8-9f14-067759ca03bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032626295s May 2 14:25:38.668: INFO: Pod "downwardapi-volume-d70d9ba5-e561-43f8-9f14-067759ca03bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036618964s STEP: Saw pod success May 2 14:25:38.668: INFO: Pod "downwardapi-volume-d70d9ba5-e561-43f8-9f14-067759ca03bb" satisfied condition "success or failure" May 2 14:25:38.670: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d70d9ba5-e561-43f8-9f14-067759ca03bb container client-container: STEP: delete the pod May 2 14:25:38.692: INFO: Waiting for pod downwardapi-volume-d70d9ba5-e561-43f8-9f14-067759ca03bb to disappear May 2 14:25:38.712: INFO: Pod downwardapi-volume-d70d9ba5-e561-43f8-9f14-067759ca03bb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:25:38.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8099" for this suite. 
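When a container declares no memory limit, the downward API resolves limits.memory to the node's allocatable memory, which is exactly what this test asserts. A sketch (names illustrative; note the container deliberately sets no resources):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
$ kubectl logs downwardapi-memlimit-demo
# prints the node's allocatable memory in bytes (compare, modulo units,
# kubectl get node <node> -o jsonpath='{.status.allocatable.memory}')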
May 2 14:25:44.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:25:44.851: INFO: namespace downward-api-8099 deletion completed in 6.13423528s • [SLOW TEST:10.319 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:25:44.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 14:25:44.910: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 2 14:25:49.915: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 2 14:25:49.915: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 2 14:25:51.922: INFO: Creating deployment "test-rollover-deployment" May 2 14:25:51.936: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 2 14:25:53.942: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 2 14:25:53.947: INFO: Ensure that both replica sets have 1 created replica May 2 14:25:53.952: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 2 14:25:53.958: INFO: Updating deployment test-rollover-deployment May 2 14:25:53.958: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 2 14:25:56.014: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 2 14:25:56.020: INFO: Make sure deployment "test-rollover-deployment" is complete May 2 14:25:56.025: INFO: all replica sets need to contain the pod-template-hash label May 2 14:25:56.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026354, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026351, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 14:25:58.036: INFO: all replica sets need to contain the pod-template-hash label May 2 14:25:58.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026357, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026351, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 14:26:00.034: INFO: all replica sets need to contain the pod-template-hash label May 2 14:26:00.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026357, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026351, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 14:26:02.032: INFO: all replica sets need to contain the pod-template-hash label May 2 14:26:02.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026357, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026351, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 14:26:04.036: INFO: all replica sets need to contain the pod-template-hash label May 2 14:26:04.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026357, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026351, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 14:26:06.035: INFO: all replica sets need to contain the pod-template-hash label May 2 14:26:06.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026357, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026351, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 14:26:08.075: INFO: May 2 14:26:08.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026352, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026367, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724026351, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 14:26:10.034: INFO: May 2 14:26:10.034: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 2 14:26:10.043: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3126,SelfLink:/apis/apps/v1/namespaces/deployment-3126/deployments/test-rollover-deployment,UID:0d50e1ed-b201-4772-985b-2c90e0ce4024,ResourceVersion:8638620,Generation:2,CreationTimestamp:2020-05-02 14:25:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-02 14:25:52 +0000 UTC 2020-05-02 14:25:52 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-02 14:26:08 +0000 UTC 2020-05-02 14:25:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 2 14:26:10.047: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3126,SelfLink:/apis/apps/v1/namespaces/deployment-3126/replicasets/test-rollover-deployment-854595fc44,UID:ea205c4c-9066-4d2e-a77b-06db23d0635d,ResourceVersion:8638609,Generation:2,CreationTimestamp:2020-05-02 14:25:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 0d50e1ed-b201-4772-985b-2c90e0ce4024 0xc00309d197 
0xc00309d198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 2 14:26:10.047: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 2 14:26:10.047: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3126,SelfLink:/apis/apps/v1/namespaces/deployment-3126/replicasets/test-rollover-controller,UID:08886770-b809-41f2-88c8-c64322a10874,ResourceVersion:8638619,Generation:2,CreationTimestamp:2020-05-02 14:25:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 0d50e1ed-b201-4772-985b-2c90e0ce4024 0xc00309d0c7 0xc00309d0c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: 
nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 2 14:26:10.047: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3126,SelfLink:/apis/apps/v1/namespaces/deployment-3126/replicasets/test-rollover-deployment-9b8b997cf,UID:433ec605-2356-4a3c-a239-ac88571e3449,ResourceVersion:8638565,Generation:2,CreationTimestamp:2020-05-02 14:25:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 0d50e1ed-b201-4772-985b-2c90e0ce4024 0xc00309d260 0xc00309d261}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 2 14:26:10.051: INFO: Pod "test-rollover-deployment-854595fc44-pnldh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-pnldh,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3126,SelfLink:/api/v1/namespaces/deployment-3126/pods/test-rollover-deployment-854595fc44-pnldh,UID:096ea56d-bc68-47ca-8864-87846fb611e2,ResourceVersion:8638587,Generation:0,CreationTimestamp:2020-05-02 14:25:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 ea205c4c-9066-4d2e-a77b-06db23d0635d 0xc0037767b7 0xc0037767b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zrl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zrl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-69zrl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003776830} {node.kubernetes.io/unreachable Exists NoExecute 0xc003776850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:25:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:25:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 2020-05-02 14:25:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:25:54 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.108,StartTime:2020-05-02 14:25:54 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-02 14:25:57 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://d0c0f79c47bea17d0a693c244cc044219716f5731c3181ae827ce372a1c9d032}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:26:10.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3126" for this suite. May 2 14:26:18.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:26:18.143: INFO: namespace deployment-3126 deletion completed in 8.088377821s • [SLOW TEST:33.291 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:26:18.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5062.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5062.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5062.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5062.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5062.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5062.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5062.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5062.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5062.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5062.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5062.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 223.39.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.39.223_udp@PTR;check="$$(dig +tcp +noall +answer +search 223.39.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.39.223_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5062.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5062.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5062.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5062.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5062.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5062.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5062.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5062.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5062.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5062.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5062.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 223.39.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.39.223_udp@PTR;check="$$(dig +tcp +noall +answer +search 223.39.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.39.223_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 2 14:26:24.426: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:24.429: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:24.431: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:24.434: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:24.450: INFO: Unable to read jessie_udp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:24.452: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:24.454: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:24.455: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:24.468: INFO: Lookups using dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c failed for: [wheezy_udp@dns-test-service.dns-5062.svc.cluster.local wheezy_tcp@dns-test-service.dns-5062.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local jessie_udp@dns-test-service.dns-5062.svc.cluster.local jessie_tcp@dns-test-service.dns-5062.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local] May 2 14:26:29.473: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:29.476: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods 
dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:29.480: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:29.484: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:29.508: INFO: Unable to read jessie_udp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:29.511: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:29.514: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:29.517: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:29.539: INFO: Lookups using dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c failed for: [wheezy_udp@dns-test-service.dns-5062.svc.cluster.local wheezy_tcp@dns-test-service.dns-5062.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local jessie_udp@dns-test-service.dns-5062.svc.cluster.local jessie_tcp@dns-test-service.dns-5062.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local] May 2 14:26:34.473: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:34.477: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:34.480: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:34.484: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:34.504: INFO: Unable to read jessie_udp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could 
not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:34.506: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:34.509: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:34.512: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:34.531: INFO: Lookups using dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c failed for: [wheezy_udp@dns-test-service.dns-5062.svc.cluster.local wheezy_tcp@dns-test-service.dns-5062.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local jessie_udp@dns-test-service.dns-5062.svc.cluster.local jessie_tcp@dns-test-service.dns-5062.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local] May 2 14:26:39.474: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:39.477: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:39.480: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:39.484: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:39.506: INFO: Unable to read jessie_udp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:39.509: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:39.512: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:39.515: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod 
dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:39.534: INFO: Lookups using dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c failed for: [wheezy_udp@dns-test-service.dns-5062.svc.cluster.local wheezy_tcp@dns-test-service.dns-5062.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local jessie_udp@dns-test-service.dns-5062.svc.cluster.local jessie_tcp@dns-test-service.dns-5062.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local] May 2 14:26:44.473: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:44.477: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:44.480: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:44.483: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:44.507: INFO: Unable to read jessie_udp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:44.510: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:44.513: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:44.516: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:44.536: INFO: Lookups using dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c failed for: [wheezy_udp@dns-test-service.dns-5062.svc.cluster.local wheezy_tcp@dns-test-service.dns-5062.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local jessie_udp@dns-test-service.dns-5062.svc.cluster.local jessie_tcp@dns-test-service.dns-5062.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local] May 2 14:26:49.473: INFO: 
Unable to read wheezy_udp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:49.477: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:49.480: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:49.483: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:49.505: INFO: Unable to read jessie_udp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:49.508: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:49.510: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:49.515: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local from pod dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c: the server could not find the requested resource (get pods dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c) May 2 14:26:49.531: INFO: Lookups using dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c failed for: [wheezy_udp@dns-test-service.dns-5062.svc.cluster.local wheezy_tcp@dns-test-service.dns-5062.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local jessie_udp@dns-test-service.dns-5062.svc.cluster.local jessie_tcp@dns-test-service.dns-5062.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc.cluster.local] May 2 14:26:54.518: INFO: DNS probes using dns-5062/dns-test-7e6bd451-0384-469d-b074-c1315c2fd29c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:26:55.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5062" for this suite. 
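The block above is the test's poll loop: the same eight lookups (A and SRV records, over UDP and TCP, from both a wheezy and a jessie utility container) are retried every five seconds until all probe files become readable, after which the suite logs success at 14:26:54. A minimal way to run the equivalent lookups by hand from inside the cluster, assuming a service named dns-test-service in namespace dns-5062 and debug images that ship nslookup/dig (the probe pod names below are illustrative):

# A-record lookup, as exercised by the wheezy_udp@... / jessie_udp@... probes
kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup dns-test-service.dns-5062.svc.cluster.local

# SRV lookup for the named port, as exercised by the _http._tcp.... probes;
# +tcp mirrors the *_tcp@ variants of the same query
kubectl run dns-probe-srv --rm -it --restart=Never --image=tutum/dnsutils -- \
  dig +tcp SRV _http._tcp.dns-test-service.dns-5062.svc.cluster.local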
May 2 14:27:01.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:27:01.319: INFO: namespace dns-5062 deletion completed in 6.175125857s • [SLOW TEST:43.176 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:27:01.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 14:27:01.404: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:27:02.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9914" for this suite. 
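The CustomResourceDefinition test above creates and deletes only the definition object itself, which is why it finishes in about a second of actual work. A rough hand-run equivalent against the apiextensions.k8s.io/v1beta1 API this v1.15 server exposes (the foos.example.com group/kind is made up for illustration):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
# the definition is established asynchronously; verify, then clean up
kubectl get crd foos.example.com
kubectl delete crd foos.example.com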
May 2 14:27:08.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:27:08.593: INFO: namespace custom-resource-definition-9914 deletion completed in 6.100287968s • [SLOW TEST:7.274 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:27:08.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 14:27:08.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 2 14:27:08.779: INFO: stderr: "" May 2 14:27:08.779: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T18:07:33Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:27:08.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4612" for this suite. 
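The assertion behind "check is all data is printed" is simply that both version.Info stanzas appear on stdout, as seen in the captured output above. A quick manual check under the same kubeconfig (a sketch, not the test's actual matcher):

# expect 2: one "Client Version" line and one "Server Version" line
kubectl version | grep -c 'Version: version.Info'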
May 2 14:27:14.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:27:14.874: INFO: namespace kubectl-4612 deletion completed in 6.089857399s • [SLOW TEST:6.279 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:27:14.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-9jc2 STEP: Creating a pod to test atomic-volume-subpath May 2 14:27:14.982: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9jc2" in namespace "subpath-4201" to be "success or failure" May 2 14:27:14.992: INFO: Pod "pod-subpath-test-configmap-9jc2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.623946ms May 2 14:27:16.996: INFO: Pod "pod-subpath-test-configmap-9jc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013631783s May 2 14:27:19.001: INFO: Pod "pod-subpath-test-configmap-9jc2": Phase="Running", Reason="", readiness=true. Elapsed: 4.018887507s May 2 14:27:21.005: INFO: Pod "pod-subpath-test-configmap-9jc2": Phase="Running", Reason="", readiness=true. Elapsed: 6.02322895s May 2 14:27:23.009: INFO: Pod "pod-subpath-test-configmap-9jc2": Phase="Running", Reason="", readiness=true. Elapsed: 8.026998916s May 2 14:27:25.013: INFO: Pod "pod-subpath-test-configmap-9jc2": Phase="Running", Reason="", readiness=true. Elapsed: 10.031190238s May 2 14:27:27.018: INFO: Pod "pod-subpath-test-configmap-9jc2": Phase="Running", Reason="", readiness=true. Elapsed: 12.035587861s May 2 14:27:29.022: INFO: Pod "pod-subpath-test-configmap-9jc2": Phase="Running", Reason="", readiness=true. Elapsed: 14.040066452s May 2 14:27:31.027: INFO: Pod "pod-subpath-test-configmap-9jc2": Phase="Running", Reason="", readiness=true. Elapsed: 16.044429168s May 2 14:27:33.031: INFO: Pod "pod-subpath-test-configmap-9jc2": Phase="Running", Reason="", readiness=true. Elapsed: 18.048905023s May 2 14:27:35.036: INFO: Pod "pod-subpath-test-configmap-9jc2": Phase="Running", Reason="", readiness=true. Elapsed: 20.053788584s May 2 14:27:37.040: INFO: Pod "pod-subpath-test-configmap-9jc2": Phase="Running", Reason="", readiness=true. Elapsed: 22.057742999s May 2 14:27:39.044: INFO: Pod "pod-subpath-test-configmap-9jc2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.062205729s STEP: Saw pod success May 2 14:27:39.044: INFO: Pod "pod-subpath-test-configmap-9jc2" satisfied condition "success or failure" May 2 14:27:39.048: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-9jc2 container test-container-subpath-configmap-9jc2: STEP: delete the pod May 2 14:27:39.072: INFO: Waiting for pod pod-subpath-test-configmap-9jc2 to disappear May 2 14:27:39.076: INFO: Pod pod-subpath-test-configmap-9jc2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-9jc2 May 2 14:27:39.076: INFO: Deleting pod "pod-subpath-test-configmap-9jc2" in namespace "subpath-4201" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:27:39.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4201" for this suite. May 2 14:27:45.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:27:45.174: INFO: namespace subpath-4201 deletion completed in 6.094084822s • [SLOW TEST:30.299 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:27:45.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-71b8ab09-99f5-4406-8b4c-e788251952bb STEP: Creating a pod to test consume configMaps May 2 14:27:45.247: INFO: Waiting up to 5m0s for pod "pod-configmaps-17314b57-7881-47d9-9121-6c73d431b5d8" in namespace "configmap-6612" to be "success or failure" May 2 14:27:45.250: INFO: Pod "pod-configmaps-17314b57-7881-47d9-9121-6c73d431b5d8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.28738ms May 2 14:27:47.254: INFO: Pod "pod-configmaps-17314b57-7881-47d9-9121-6c73d431b5d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007101289s May 2 14:27:49.257: INFO: Pod "pod-configmaps-17314b57-7881-47d9-9121-6c73d431b5d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010340308s May 2 14:27:51.261: INFO: Pod "pod-configmaps-17314b57-7881-47d9-9121-6c73d431b5d8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014770754s STEP: Saw pod success May 2 14:27:51.261: INFO: Pod "pod-configmaps-17314b57-7881-47d9-9121-6c73d431b5d8" satisfied condition "success or failure" May 2 14:27:51.265: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-17314b57-7881-47d9-9121-6c73d431b5d8 container configmap-volume-test: STEP: delete the pod May 2 14:27:51.288: INFO: Waiting for pod pod-configmaps-17314b57-7881-47d9-9121-6c73d431b5d8 to disappear May 2 14:27:51.292: INFO: Pod pod-configmaps-17314b57-7881-47d9-9121-6c73d431b5d8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:27:51.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6612" for this suite. May 2 14:27:57.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:27:57.406: INFO: namespace configmap-6612 deletion completed in 6.110592219s • [SLOW TEST:12.231 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:27:57.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-7beadff6-06d8-4c21-850a-693c51eacab1 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-7beadff6-06d8-4c21-850a-693c51eacab1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:28:03.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-886" for this suite. 
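The ~22 s the projected-configMap test spends in "waiting to observe update in volume" is expected behavior: after the ConfigMap object changes, the kubelet rewrites the projected volume atomically only on a later sync period, so the new value reaches the mounted file with some delay. A hand-run sketch of the same behavior, assuming a pod that mounts ConfigMap demo as a volume at /etc/demo (the names, path, and pod placeholder are illustrative):

kubectl create configmap demo --from-literal=key=v1
# ...create a pod that mounts ConfigMap demo at /etc/demo...
kubectl create configmap demo --from-literal=key=v2 -o yaml --dry-run | kubectl replace -f -
# poll until the kubelet syncs the volume; usually well under a minute
kubectl exec <pod-name> -- cat /etc/demo/key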
May 2 14:28:25.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:28:25.727: INFO: namespace projected-886 deletion completed in 22.121277502s • [SLOW TEST:28.321 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:28:25.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 2 14:28:33.868: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 2 14:28:33.882: INFO: Pod pod-with-poststart-http-hook still exists May 2 14:28:35.882: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 2 14:28:35.887: INFO: Pod pod-with-poststart-http-hook still exists May 2 14:28:37.882: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 2 14:28:37.886: INFO: Pod pod-with-poststart-http-hook still exists May 2 14:28:39.882: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 2 14:28:39.887: INFO: Pod pod-with-poststart-http-hook still exists May 2 14:28:41.882: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 2 14:28:41.886: INFO: Pod pod-with-poststart-http-hook still exists May 2 14:28:43.882: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 2 14:28:43.887: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:28:43.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4948" for this suite. 
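The poststart-http sequence above (handler container first, then a pod whose hook calls it, then verification and deletion) corresponds to an ordinary pod spec with a lifecycle stanza. A minimal sketch, assuming an HTTP handler reachable from the kubelet at the address below (host, port, and path are placeholders, not the values the suite used):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: poststart-http-demo
spec:
  containers:
  - name: main
    image: nginx:1.17
    lifecycle:
      postStart:
        httpGet:
          # the kubelet issues this GET right after the container starts;
          # if the call fails, the container is killed per its restart policy
          host: 10.244.1.99
          port: 8080
          path: /echo?msg=poststart
EOF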
May 2 14:29:05.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:29:06.007: INFO: namespace container-lifecycle-hook-4948 deletion completed in 22.116296549s • [SLOW TEST:40.279 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:29:06.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 2 14:29:06.114: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.744903ms) May 2 14:29:06.117: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.285723ms) May 2 14:29:06.120: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.543992ms) May 2 14:29:06.122: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.508857ms) May 2 14:29:06.125: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.686025ms) May 2 14:29:06.128: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.151314ms) May 2 14:29:06.132: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.747995ms) May 2 14:29:06.136: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.953021ms) May 2 14:29:06.139: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.147544ms) May 2 14:29:06.142: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.949723ms) May 2 14:29:06.145: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.514852ms) May 2 14:29:06.147: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.404175ms) May 2 14:29:06.150: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.51915ms) May 2 14:29:06.152: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.239605ms) May 2 14:29:06.154: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.345524ms) May 2 14:29:06.157: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.341158ms) May 2 14:29:06.160: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.17991ms) May 2 14:29:06.181: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 20.800997ms) May 2 14:29:06.183: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.851457ms) May 2 14:29:06.186: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.908044ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:29:06.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8817" for this suite. May 2 14:29:12.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:29:12.323: INFO: namespace proxy-8817 deletion completed in 6.133475323s • [SLOW TEST:6.315 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:29:12.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5991 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-5991 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5991 May 2 14:29:12.444: INFO: Found 0 stateful pods, waiting for 1 May 2 14:29:22.448: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 2 14:29:22.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5991 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 14:29:22.736: INFO: stderr: "I0502 14:29:22.599938 2920 log.go:172] (0xc000ab2420) (0xc0002d2820) Create stream\nI0502 14:29:22.599994 2920 log.go:172] (0xc000ab2420) (0xc0002d2820) Stream added, broadcasting: 1\nI0502 14:29:22.602435 2920 log.go:172] (0xc000ab2420) Reply frame received for 1\nI0502 14:29:22.602479 2920 log.go:172] (0xc000ab2420) (0xc0002d28c0) Create stream\nI0502 14:29:22.602493 2920 log.go:172] (0xc000ab2420) (0xc0002d28c0) Stream added, broadcasting: 3\nI0502 14:29:22.603626 2920 log.go:172] (0xc000ab2420) Reply frame received for 3\nI0502 14:29:22.603667 2920 log.go:172] (0xc000ab2420) (0xc000892000) Create stream\nI0502 14:29:22.603679 2920 log.go:172] (0xc000ab2420) (0xc000892000) Stream added, broadcasting: 5\nI0502 14:29:22.605642 2920 log.go:172] (0xc000ab2420) 
Reply frame received for 5\nI0502 14:29:22.687177 2920 log.go:172] (0xc000ab2420) Data frame received for 5\nI0502 14:29:22.687201 2920 log.go:172] (0xc000892000) (5) Data frame handling\nI0502 14:29:22.687215 2920 log.go:172] (0xc000892000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0502 14:29:22.728336 2920 log.go:172] (0xc000ab2420) Data frame received for 5\nI0502 14:29:22.728388 2920 log.go:172] (0xc000892000) (5) Data frame handling\nI0502 14:29:22.728413 2920 log.go:172] (0xc000ab2420) Data frame received for 3\nI0502 14:29:22.728424 2920 log.go:172] (0xc0002d28c0) (3) Data frame handling\nI0502 14:29:22.728435 2920 log.go:172] (0xc0002d28c0) (3) Data frame sent\nI0502 14:29:22.728731 2920 log.go:172] (0xc000ab2420) Data frame received for 3\nI0502 14:29:22.728753 2920 log.go:172] (0xc0002d28c0) (3) Data frame handling\nI0502 14:29:22.730689 2920 log.go:172] (0xc000ab2420) Data frame received for 1\nI0502 14:29:22.730709 2920 log.go:172] (0xc0002d2820) (1) Data frame handling\nI0502 14:29:22.730722 2920 log.go:172] (0xc0002d2820) (1) Data frame sent\nI0502 14:29:22.730732 2920 log.go:172] (0xc000ab2420) (0xc0002d2820) Stream removed, broadcasting: 1\nI0502 14:29:22.730894 2920 log.go:172] (0xc000ab2420) Go away received\nI0502 14:29:22.731183 2920 log.go:172] (0xc000ab2420) (0xc0002d2820) Stream removed, broadcasting: 1\nI0502 14:29:22.731208 2920 log.go:172] (0xc000ab2420) (0xc0002d28c0) Stream removed, broadcasting: 3\nI0502 14:29:22.731219 2920 log.go:172] (0xc000ab2420) (0xc000892000) Stream removed, broadcasting: 5\n" May 2 14:29:22.736: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 14:29:22.736: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 14:29:22.741: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 2 14:29:32.746: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 2 14:29:32.746: INFO: Waiting for statefulset status.replicas updated to 0 May 2 14:29:32.766: INFO: POD NODE PHASE GRACE CONDITIONS May 2 14:29:32.767: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC }] May 2 14:29:32.767: INFO: May 2 14:29:32.767: INFO: StatefulSet ss has not reached scale 3, at 1 May 2 14:29:33.772: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989336015s May 2 14:29:34.776: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984203619s May 2 14:29:35.782: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980055976s May 2 14:29:36.787: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.974313565s May 2 14:29:37.792: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.968901132s May 2 14:29:38.797: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.963957389s May 2 14:29:39.803: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.958490993s May 2 14:29:40.808: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.952715106s 
May 2 14:29:41.813: INFO: Verifying statefulset ss doesn't scale past 3 for another 947.492318ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5991 May 2 14:29:42.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5991 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 14:29:43.141: INFO: stderr: "I0502 14:29:43.059449 2941 log.go:172] (0xc00071aa50) (0xc000302aa0) Create stream\nI0502 14:29:43.059485 2941 log.go:172] (0xc00071aa50) (0xc000302aa0) Stream added, broadcasting: 1\nI0502 14:29:43.062917 2941 log.go:172] (0xc00071aa50) Reply frame received for 1\nI0502 14:29:43.062970 2941 log.go:172] (0xc00071aa50) (0xc00097e000) Create stream\nI0502 14:29:43.063863 2941 log.go:172] (0xc00071aa50) (0xc00097e000) Stream added, broadcasting: 3\nI0502 14:29:43.066217 2941 log.go:172] (0xc00071aa50) Reply frame received for 3\nI0502 14:29:43.066282 2941 log.go:172] (0xc00071aa50) (0xc000457ae0) Create stream\nI0502 14:29:43.066313 2941 log.go:172] (0xc00071aa50) (0xc000457ae0) Stream added, broadcasting: 5\nI0502 14:29:43.067134 2941 log.go:172] (0xc00071aa50) Reply frame received for 5\nI0502 14:29:43.132787 2941 log.go:172] (0xc00071aa50) Data frame received for 5\nI0502 14:29:43.132820 2941 log.go:172] (0xc000457ae0) (5) Data frame handling\nI0502 14:29:43.132834 2941 log.go:172] (0xc000457ae0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0502 14:29:43.132855 2941 log.go:172] (0xc00071aa50) Data frame received for 5\nI0502 14:29:43.132911 2941 log.go:172] (0xc000457ae0) (5) Data frame handling\nI0502 14:29:43.132959 2941 log.go:172] (0xc00071aa50) Data frame received for 3\nI0502 14:29:43.132975 2941 log.go:172] (0xc00097e000) (3) Data frame handling\nI0502 14:29:43.132993 2941 log.go:172] (0xc00097e000) (3) Data frame sent\nI0502 14:29:43.133016 2941 log.go:172] (0xc00071aa50) Data frame received for 3\nI0502 14:29:43.133031 2941 log.go:172] (0xc00097e000) (3) Data frame handling\nI0502 14:29:43.135003 2941 log.go:172] (0xc00071aa50) Data frame received for 1\nI0502 14:29:43.135123 2941 log.go:172] (0xc000302aa0) (1) Data frame handling\nI0502 14:29:43.135155 2941 log.go:172] (0xc000302aa0) (1) Data frame sent\nI0502 14:29:43.135180 2941 log.go:172] (0xc00071aa50) (0xc000302aa0) Stream removed, broadcasting: 1\nI0502 14:29:43.135216 2941 log.go:172] (0xc00071aa50) Go away received\nI0502 14:29:43.135625 2941 log.go:172] (0xc00071aa50) (0xc000302aa0) Stream removed, broadcasting: 1\nI0502 14:29:43.135659 2941 log.go:172] (0xc00071aa50) (0xc00097e000) Stream removed, broadcasting: 3\nI0502 14:29:43.135673 2941 log.go:172] (0xc00071aa50) (0xc000457ae0) Stream removed, broadcasting: 5\n" May 2 14:29:43.141: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 14:29:43.141: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 14:29:43.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5991 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 14:29:43.397: INFO: stderr: "I0502 14:29:43.307363 2964 log.go:172] (0xc000a14420) (0xc000320820) Create stream\nI0502 14:29:43.307434 2964 log.go:172] (0xc000a14420) (0xc000320820) Stream added, broadcasting: 1\nI0502 14:29:43.310035 2964 log.go:172] (0xc000a14420) Reply frame received for 
1\nI0502 14:29:43.310103 2964 log.go:172] (0xc000a14420) (0xc0007fe000) Create stream\nI0502 14:29:43.310126 2964 log.go:172] (0xc000a14420) (0xc0007fe000) Stream added, broadcasting: 3\nI0502 14:29:43.311202 2964 log.go:172] (0xc000a14420) Reply frame received for 3\nI0502 14:29:43.311247 2964 log.go:172] (0xc000a14420) (0xc0003208c0) Create stream\nI0502 14:29:43.311277 2964 log.go:172] (0xc000a14420) (0xc0003208c0) Stream added, broadcasting: 5\nI0502 14:29:43.312924 2964 log.go:172] (0xc000a14420) Reply frame received for 5\nI0502 14:29:43.388375 2964 log.go:172] (0xc000a14420) Data frame received for 5\nI0502 14:29:43.388406 2964 log.go:172] (0xc0003208c0) (5) Data frame handling\nI0502 14:29:43.388417 2964 log.go:172] (0xc0003208c0) (5) Data frame sent\nI0502 14:29:43.388426 2964 log.go:172] (0xc000a14420) Data frame received for 5\nI0502 14:29:43.388433 2964 log.go:172] (0xc0003208c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0502 14:29:43.388470 2964 log.go:172] (0xc000a14420) Data frame received for 3\nI0502 14:29:43.388551 2964 log.go:172] (0xc0007fe000) (3) Data frame handling\nI0502 14:29:43.388585 2964 log.go:172] (0xc0007fe000) (3) Data frame sent\nI0502 14:29:43.388599 2964 log.go:172] (0xc000a14420) Data frame received for 3\nI0502 14:29:43.388609 2964 log.go:172] (0xc0007fe000) (3) Data frame handling\nI0502 14:29:43.390668 2964 log.go:172] (0xc000a14420) Data frame received for 1\nI0502 14:29:43.390698 2964 log.go:172] (0xc000320820) (1) Data frame handling\nI0502 14:29:43.390731 2964 log.go:172] (0xc000320820) (1) Data frame sent\nI0502 14:29:43.390752 2964 log.go:172] (0xc000a14420) (0xc000320820) Stream removed, broadcasting: 1\nI0502 14:29:43.390777 2964 log.go:172] (0xc000a14420) Go away received\nI0502 14:29:43.391261 2964 log.go:172] (0xc000a14420) (0xc000320820) Stream removed, broadcasting: 1\nI0502 14:29:43.391285 2964 log.go:172] (0xc000a14420) (0xc0007fe000) Stream removed, broadcasting: 3\nI0502 14:29:43.391304 2964 log.go:172] (0xc000a14420) (0xc0003208c0) Stream removed, broadcasting: 5\n" May 2 14:29:43.398: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 14:29:43.398: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 14:29:43.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5991 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 14:29:43.593: INFO: stderr: "I0502 14:29:43.506737 2986 log.go:172] (0xc00087e0b0) (0xc00079e640) Create stream\nI0502 14:29:43.506787 2986 log.go:172] (0xc00087e0b0) (0xc00079e640) Stream added, broadcasting: 1\nI0502 14:29:43.517062 2986 log.go:172] (0xc00087e0b0) Reply frame received for 1\nI0502 14:29:43.517368 2986 log.go:172] (0xc00087e0b0) (0xc00079e6e0) Create stream\nI0502 14:29:43.517401 2986 log.go:172] (0xc00087e0b0) (0xc00079e6e0) Stream added, broadcasting: 3\nI0502 14:29:43.518796 2986 log.go:172] (0xc00087e0b0) Reply frame received for 3\nI0502 14:29:43.518825 2986 log.go:172] (0xc00087e0b0) (0xc0008ea000) Create stream\nI0502 14:29:43.518832 2986 log.go:172] (0xc00087e0b0) (0xc0008ea000) Stream added, broadcasting: 5\nI0502 14:29:43.519507 2986 log.go:172] (0xc00087e0b0) Reply frame received for 5\nI0502 14:29:43.584809 2986 log.go:172] (0xc00087e0b0) Data frame received for 3\nI0502 14:29:43.584837 2986 
log.go:172] (0xc00079e6e0) (3) Data frame handling\nI0502 14:29:43.584849 2986 log.go:172] (0xc00079e6e0) (3) Data frame sent\nI0502 14:29:43.584855 2986 log.go:172] (0xc00087e0b0) Data frame received for 3\nI0502 14:29:43.584862 2986 log.go:172] (0xc00079e6e0) (3) Data frame handling\nI0502 14:29:43.584992 2986 log.go:172] (0xc00087e0b0) Data frame received for 5\nI0502 14:29:43.585018 2986 log.go:172] (0xc0008ea000) (5) Data frame handling\nI0502 14:29:43.585043 2986 log.go:172] (0xc0008ea000) (5) Data frame sent\nI0502 14:29:43.585064 2986 log.go:172] (0xc00087e0b0) Data frame received for 5\nI0502 14:29:43.585083 2986 log.go:172] (0xc0008ea000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0502 14:29:43.587539 2986 log.go:172] (0xc00087e0b0) Data frame received for 1\nI0502 14:29:43.587559 2986 log.go:172] (0xc00079e640) (1) Data frame handling\nI0502 14:29:43.587566 2986 log.go:172] (0xc00079e640) (1) Data frame sent\nI0502 14:29:43.587573 2986 log.go:172] (0xc00087e0b0) (0xc00079e640) Stream removed, broadcasting: 1\nI0502 14:29:43.587580 2986 log.go:172] (0xc00087e0b0) Go away received\nI0502 14:29:43.588035 2986 log.go:172] (0xc00087e0b0) (0xc00079e640) Stream removed, broadcasting: 1\nI0502 14:29:43.588059 2986 log.go:172] (0xc00087e0b0) (0xc00079e6e0) Stream removed, broadcasting: 3\nI0502 14:29:43.588071 2986 log.go:172] (0xc00087e0b0) (0xc0008ea000) Stream removed, broadcasting: 5\n" May 2 14:29:43.593: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 14:29:43.593: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 14:29:43.598: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 2 14:29:43.598: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 2 14:29:43.598: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 2 14:29:43.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5991 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 14:29:43.807: INFO: stderr: "I0502 14:29:43.732006 3005 log.go:172] (0xc000118dc0) (0xc0002f8820) Create stream\nI0502 14:29:43.732063 3005 log.go:172] (0xc000118dc0) (0xc0002f8820) Stream added, broadcasting: 1\nI0502 14:29:43.737390 3005 log.go:172] (0xc000118dc0) Reply frame received for 1\nI0502 14:29:43.737630 3005 log.go:172] (0xc000118dc0) (0xc00058a000) Create stream\nI0502 14:29:43.737880 3005 log.go:172] (0xc000118dc0) (0xc00058a000) Stream added, broadcasting: 3\nI0502 14:29:43.738714 3005 log.go:172] (0xc000118dc0) Reply frame received for 3\nI0502 14:29:43.738740 3005 log.go:172] (0xc000118dc0) (0xc00058a0a0) Create stream\nI0502 14:29:43.738749 3005 log.go:172] (0xc000118dc0) (0xc00058a0a0) Stream added, broadcasting: 5\nI0502 14:29:43.739429 3005 log.go:172] (0xc000118dc0) Reply frame received for 5\nI0502 14:29:43.801073 3005 log.go:172] (0xc000118dc0) Data frame received for 5\nI0502 14:29:43.801104 3005 log.go:172] (0xc00058a0a0) (5) Data frame handling\nI0502 14:29:43.801320 3005 log.go:172] (0xc000118dc0) Data frame received for 3\nI0502 14:29:43.801350 3005 log.go:172] (0xc00058a000) (3) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html 
/tmp/\nI0502 14:29:43.801364 3005 log.go:172] (0xc00058a000) (3) Data frame sent\nI0502 14:29:43.801376 3005 log.go:172] (0xc00058a0a0) (5) Data frame sent\nI0502 14:29:43.801393 3005 log.go:172] (0xc000118dc0) Data frame received for 5\nI0502 14:29:43.801402 3005 log.go:172] (0xc00058a0a0) (5) Data frame handling\nI0502 14:29:43.801427 3005 log.go:172] (0xc000118dc0) Data frame received for 3\nI0502 14:29:43.801478 3005 log.go:172] (0xc00058a000) (3) Data frame handling\nI0502 14:29:43.802952 3005 log.go:172] (0xc000118dc0) Data frame received for 1\nI0502 14:29:43.802989 3005 log.go:172] (0xc0002f8820) (1) Data frame handling\nI0502 14:29:43.803009 3005 log.go:172] (0xc0002f8820) (1) Data frame sent\nI0502 14:29:43.803030 3005 log.go:172] (0xc000118dc0) (0xc0002f8820) Stream removed, broadcasting: 1\nI0502 14:29:43.803051 3005 log.go:172] (0xc000118dc0) Go away received\nI0502 14:29:43.803521 3005 log.go:172] (0xc000118dc0) (0xc0002f8820) Stream removed, broadcasting: 1\nI0502 14:29:43.803546 3005 log.go:172] (0xc000118dc0) (0xc00058a000) Stream removed, broadcasting: 3\nI0502 14:29:43.803558 3005 log.go:172] (0xc000118dc0) (0xc00058a0a0) Stream removed, broadcasting: 5\n" May 2 14:29:43.808: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 14:29:43.808: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 14:29:43.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5991 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 14:29:44.063: INFO: stderr: "I0502 14:29:43.946580 3025 log.go:172] (0xc000a74420) (0xc000540820) Create stream\nI0502 14:29:43.946652 3025 log.go:172] (0xc000a74420) (0xc000540820) Stream added, broadcasting: 1\nI0502 14:29:43.949970 3025 log.go:172] (0xc000a74420) Reply frame received for 1\nI0502 14:29:43.950049 3025 log.go:172] (0xc000a74420) (0xc00087a000) Create stream\nI0502 14:29:43.950093 3025 log.go:172] (0xc000a74420) (0xc00087a000) Stream added, broadcasting: 3\nI0502 14:29:43.952267 3025 log.go:172] (0xc000a74420) Reply frame received for 3\nI0502 14:29:43.952323 3025 log.go:172] (0xc000a74420) (0xc00087a0a0) Create stream\nI0502 14:29:43.952344 3025 log.go:172] (0xc000a74420) (0xc00087a0a0) Stream added, broadcasting: 5\nI0502 14:29:43.953659 3025 log.go:172] (0xc000a74420) Reply frame received for 5\nI0502 14:29:44.020905 3025 log.go:172] (0xc000a74420) Data frame received for 5\nI0502 14:29:44.020934 3025 log.go:172] (0xc00087a0a0) (5) Data frame handling\nI0502 14:29:44.020956 3025 log.go:172] (0xc00087a0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0502 14:29:44.055252 3025 log.go:172] (0xc000a74420) Data frame received for 5\nI0502 14:29:44.055357 3025 log.go:172] (0xc00087a0a0) (5) Data frame handling\nI0502 14:29:44.055410 3025 log.go:172] (0xc000a74420) Data frame received for 3\nI0502 14:29:44.055429 3025 log.go:172] (0xc00087a000) (3) Data frame handling\nI0502 14:29:44.055441 3025 log.go:172] (0xc00087a000) (3) Data frame sent\nI0502 14:29:44.055633 3025 log.go:172] (0xc000a74420) Data frame received for 3\nI0502 14:29:44.055676 3025 log.go:172] (0xc00087a000) (3) Data frame handling\nI0502 14:29:44.057517 3025 log.go:172] (0xc000a74420) Data frame received for 1\nI0502 14:29:44.057530 3025 log.go:172] (0xc000540820) (1) Data frame handling\nI0502 14:29:44.057535 3025 log.go:172] (0xc000540820) (1) Data frame sent\nI0502 
14:29:44.057542 3025 log.go:172] (0xc000a74420) (0xc000540820) Stream removed, broadcasting: 1\nI0502 14:29:44.057678 3025 log.go:172] (0xc000a74420) Go away received\nI0502 14:29:44.057826 3025 log.go:172] (0xc000a74420) (0xc000540820) Stream removed, broadcasting: 1\nI0502 14:29:44.057839 3025 log.go:172] (0xc000a74420) (0xc00087a000) Stream removed, broadcasting: 3\nI0502 14:29:44.057843 3025 log.go:172] (0xc000a74420) (0xc00087a0a0) Stream removed, broadcasting: 5\n" May 2 14:29:44.063: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 14:29:44.063: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 14:29:44.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5991 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 14:29:44.405: INFO: stderr: "I0502 14:29:44.302989 3045 log.go:172] (0xc00094e370) (0xc00080c640) Create stream\nI0502 14:29:44.303053 3045 log.go:172] (0xc00094e370) (0xc00080c640) Stream added, broadcasting: 1\nI0502 14:29:44.304959 3045 log.go:172] (0xc00094e370) Reply frame received for 1\nI0502 14:29:44.305005 3045 log.go:172] (0xc00094e370) (0xc0007f4000) Create stream\nI0502 14:29:44.305025 3045 log.go:172] (0xc00094e370) (0xc0007f4000) Stream added, broadcasting: 3\nI0502 14:29:44.306015 3045 log.go:172] (0xc00094e370) Reply frame received for 3\nI0502 14:29:44.306048 3045 log.go:172] (0xc00094e370) (0xc00080c6e0) Create stream\nI0502 14:29:44.306062 3045 log.go:172] (0xc00094e370) (0xc00080c6e0) Stream added, broadcasting: 5\nI0502 14:29:44.306788 3045 log.go:172] (0xc00094e370) Reply frame received for 5\nI0502 14:29:44.370886 3045 log.go:172] (0xc00094e370) Data frame received for 5\nI0502 14:29:44.370917 3045 log.go:172] (0xc00080c6e0) (5) Data frame handling\nI0502 14:29:44.370938 3045 log.go:172] (0xc00080c6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0502 14:29:44.398192 3045 log.go:172] (0xc00094e370) Data frame received for 5\nI0502 14:29:44.398228 3045 log.go:172] (0xc00080c6e0) (5) Data frame handling\nI0502 14:29:44.398289 3045 log.go:172] (0xc00094e370) Data frame received for 3\nI0502 14:29:44.398325 3045 log.go:172] (0xc0007f4000) (3) Data frame handling\nI0502 14:29:44.398363 3045 log.go:172] (0xc0007f4000) (3) Data frame sent\nI0502 14:29:44.398383 3045 log.go:172] (0xc00094e370) Data frame received for 3\nI0502 14:29:44.398406 3045 log.go:172] (0xc0007f4000) (3) Data frame handling\nI0502 14:29:44.400009 3045 log.go:172] (0xc00094e370) Data frame received for 1\nI0502 14:29:44.400053 3045 log.go:172] (0xc00080c640) (1) Data frame handling\nI0502 14:29:44.400079 3045 log.go:172] (0xc00080c640) (1) Data frame sent\nI0502 14:29:44.400125 3045 log.go:172] (0xc00094e370) (0xc00080c640) Stream removed, broadcasting: 1\nI0502 14:29:44.400157 3045 log.go:172] (0xc00094e370) Go away received\nI0502 14:29:44.400625 3045 log.go:172] (0xc00094e370) (0xc00080c640) Stream removed, broadcasting: 1\nI0502 14:29:44.400649 3045 log.go:172] (0xc00094e370) (0xc0007f4000) Stream removed, broadcasting: 3\nI0502 14:29:44.400661 3045 log.go:172] (0xc00094e370) (0xc00080c6e0) Stream removed, broadcasting: 5\n" May 2 14:29:44.405: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 14:29:44.405: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> 
'/tmp/index.html' May 2 14:29:44.405: INFO: Waiting for statefulset status.replicas updated to 0 May 2 14:29:44.409: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 2 14:29:54.418: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 2 14:29:54.418: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 2 14:29:54.418: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 2 14:29:54.445: INFO: POD NODE PHASE GRACE CONDITIONS May 2 14:29:54.446: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC }] May 2 14:29:54.446: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:29:54.446: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:29:54.446: INFO: May 2 14:29:54.446: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 14:29:55.450: INFO: POD NODE PHASE GRACE CONDITIONS May 2 14:29:55.450: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC }] May 2 14:29:55.450: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:29:55.450: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 
+0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:29:55.450: INFO: May 2 14:29:55.450: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 14:29:56.474: INFO: POD NODE PHASE GRACE CONDITIONS May 2 14:29:56.474: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC }] May 2 14:29:56.474: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:29:56.474: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:29:56.474: INFO: May 2 14:29:56.474: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 14:29:57.477: INFO: POD NODE PHASE GRACE CONDITIONS May 2 14:29:57.477: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC }] May 2 14:29:57.477: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:29:57.477: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:29:57.477: INFO: May 2 14:29:57.477: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 14:29:58.494: 
INFO: POD NODE PHASE GRACE CONDITIONS May 2 14:29:58.494: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC }] May 2 14:29:58.494: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:29:58.494: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:29:58.494: INFO: May 2 14:29:58.494: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 14:29:59.499: INFO: POD NODE PHASE GRACE CONDITIONS May 2 14:29:59.499: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC }] May 2 14:29:59.499: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:29:59.499: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:29:59.500: INFO: May 2 14:29:59.500: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 14:30:00.503: INFO: POD NODE PHASE GRACE CONDITIONS May 2 14:30:00.503: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC }] May 2 14:30:00.503: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:30:00.503: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:30:00.503: INFO: May 2 14:30:00.503: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 14:30:01.509: INFO: POD NODE PHASE GRACE CONDITIONS May 2 14:30:01.509: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:12 +0000 UTC }] May 2 14:30:01.509: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:30:01.509: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 14:29:32 +0000 UTC }] May 2 14:30:01.509: INFO: May 2 14:30:01.509: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 14:30:02.512: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.92963156s May 2 14:30:03.517: INFO: Verifying statefulset ss doesn't scale past 0 for another 926.528121ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5991 May 2 14:30:04.521: INFO: Scaling statefulset ss to 0 May 2 14:30:04.531: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 2 14:30:04.534: INFO: Deleting all statefulset in ns statefulset-5991 May 2 14:30:04.536: INFO: Scaling statefulset ss to 0 May 2 14:30:04.545: INFO: Waiting for statefulset status.replicas updated to 0 May 2 14:30:04.547: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:30:04.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5991" for this suite. May 2 14:30:10.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:30:10.673: INFO: namespace statefulset-5991 deletion completed in 6.110859733s • [SLOW TEST:58.349 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:30:10.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-3e161dc7-d9ca-4bb0-8bff-3cc4a0642a80 STEP: Creating a pod to test consume configMaps May 2 14:30:10.807: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a7fe75e0-8537-4b9a-ad97-710fc867e486" in namespace "projected-9534" to be "success or failure" May 2 14:30:10.810: INFO: Pod "pod-projected-configmaps-a7fe75e0-8537-4b9a-ad97-710fc867e486": Phase="Pending", Reason="", readiness=false. Elapsed: 3.612118ms May 2 14:30:12.846: INFO: Pod "pod-projected-configmaps-a7fe75e0-8537-4b9a-ad97-710fc867e486": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039449678s May 2 14:30:14.849: INFO: Pod "pod-projected-configmaps-a7fe75e0-8537-4b9a-ad97-710fc867e486": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04236976s STEP: Saw pod success May 2 14:30:14.849: INFO: Pod "pod-projected-configmaps-a7fe75e0-8537-4b9a-ad97-710fc867e486" satisfied condition "success or failure" May 2 14:30:14.851: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-a7fe75e0-8537-4b9a-ad97-710fc867e486 container projected-configmap-volume-test: STEP: delete the pod May 2 14:30:14.883: INFO: Waiting for pod pod-projected-configmaps-a7fe75e0-8537-4b9a-ad97-710fc867e486 to disappear May 2 14:30:14.918: INFO: Pod pod-projected-configmaps-a7fe75e0-8537-4b9a-ad97-710fc867e486 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:30:14.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9534" for this suite. May 2 14:30:20.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:30:21.043: INFO: namespace projected-9534 deletion completed in 6.121474159s • [SLOW TEST:10.369 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:30:21.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 2 14:30:21.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8415' May 2 14:30:21.424: INFO: stderr: "" May 2 14:30:21.424: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 2 14:30:22.500: INFO: Selector matched 1 pods for map[app:redis] May 2 14:30:22.500: INFO: Found 0 / 1 May 2 14:30:23.469: INFO: Selector matched 1 pods for map[app:redis] May 2 14:30:23.469: INFO: Found 0 / 1 May 2 14:30:24.434: INFO: Selector matched 1 pods for map[app:redis] May 2 14:30:24.434: INFO: Found 0 / 1 May 2 14:30:25.429: INFO: Selector matched 1 pods for map[app:redis] May 2 14:30:25.429: INFO: Found 1 / 1 May 2 14:30:25.429: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 2 14:30:25.433: INFO: Selector matched 1 pods for map[app:redis] May 2 14:30:25.433: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
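
The patch step that follows is a strategic merge patch: only the fields carried in the payload change, here a single annotation x=y on the pod that the app=redis selector matched. A minimal client-go sketch of the same call, assuming a v1.15-era client-go (where Patch takes no context.Context; later releases add one plus a PatchOptions argument) and reusing the namespace and pod name from the log:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig this run uses.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // The exact payload kubectl passes via -p below; strategic merge
        // means everything else on the pod is left untouched.
        patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)

        pod, err := clientset.CoreV1().Pods("kubectl-8415").Patch(
            "redis-master-cq9xf", types.StrategicMergePatchType, patch)
        if err != nil {
            panic(err)
        }
        fmt.Println("patched:", pod.Name, "x =", pod.Annotations["x"])
    }

The spec then re-lists the matching pods and asserts the annotation is present, which is the "checking annotations" step below.
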
May 2 14:30:25.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-cq9xf --namespace=kubectl-8415 -p {"metadata":{"annotations":{"x":"y"}}}' May 2 14:30:25.536: INFO: stderr: "" May 2 14:30:25.536: INFO: stdout: "pod/redis-master-cq9xf patched\n" STEP: checking annotations May 2 14:30:25.555: INFO: Selector matched 1 pods for map[app:redis] May 2 14:30:25.555: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:30:25.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8415" for this suite. May 2 14:30:47.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:30:47.695: INFO: namespace kubectl-8415 deletion completed in 22.13637705s • [SLOW TEST:26.651 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:30:47.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 2 14:30:47.856: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3724,SelfLink:/api/v1/namespaces/watch-3724/configmaps/e2e-watch-test-resource-version,UID:bbb3f476-2f70-4cce-924f-f0c5e9a8f442,ResourceVersion:8639651,Generation:0,CreationTimestamp:2020-05-02 14:30:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 2 14:30:47.856: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3724,SelfLink:/api/v1/namespaces/watch-3724/configmaps/e2e-watch-test-resource-version,UID:bbb3f476-2f70-4cce-924f-f0c5e9a8f442,ResourceVersion:8639652,Generation:0,CreationTimestamp:2020-05-02 14:30:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:30:47.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3724" for this suite. May 2 14:30:53.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:30:53.972: INFO: namespace watch-3724 deletion completed in 6.112024873s • [SLOW TEST:6.277 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:30:53.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-1b5f323e-ef56-4efa-be93-2bc523b152b3 STEP: Creating secret with name secret-projected-all-test-volume-aab59bc0-568d-4e33-ac49-65160f83310f STEP: Creating a pod to test Check all projections for projected volume plugin May 2 14:30:54.094: INFO: Waiting up to 5m0s for pod "projected-volume-9821e186-0367-4384-acac-90c032554605" in namespace "projected-2688" to be "success or failure" May 2 14:30:54.097: INFO: Pod "projected-volume-9821e186-0367-4384-acac-90c032554605": Phase="Pending", Reason="", readiness=false. Elapsed: 2.639945ms May 2 14:30:56.100: INFO: Pod "projected-volume-9821e186-0367-4384-acac-90c032554605": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005746076s May 2 14:30:58.104: INFO: Pod "projected-volume-9821e186-0367-4384-acac-90c032554605": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010058432s STEP: Saw pod success May 2 14:30:58.104: INFO: Pod "projected-volume-9821e186-0367-4384-acac-90c032554605" satisfied condition "success or failure" May 2 14:30:58.107: INFO: Trying to get logs from node iruya-worker pod projected-volume-9821e186-0367-4384-acac-90c032554605 container projected-all-volume-test: STEP: delete the pod May 2 14:30:58.130: INFO: Waiting for pod projected-volume-9821e186-0367-4384-acac-90c032554605 to disappear May 2 14:30:58.139: INFO: Pod projected-volume-9821e186-0367-4384-acac-90c032554605 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:30:58.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2688" for this suite. May 2 14:31:04.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:31:04.257: INFO: namespace projected-2688 deletion completed in 6.114764351s • [SLOW TEST:10.285 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:31:04.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:31:09.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9729" for this suite. 
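
Both Watchers specs above lean on the same guarantee: a watch opened with a ListOptions.ResourceVersion replays every change committed after that version, in commit order, which is why two watches started from the same version must observe identical event sequences. A minimal sketch, assuming a v1.15-era client-go (Watch takes only ListOptions; newer releases also take a context.Context); the resource version below is an illustrative placeholder, not a value recovered from this run:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Start the watch at a known resource version; the API server replays
        // everything after it, so MODIFIED and DELETED events that happened
        // before the watch was opened still arrive, in order.
        w, err := clientset.CoreV1().ConfigMaps("watch-3724").Watch(metav1.ListOptions{
            LabelSelector:   "watch-this-configmap=from-resource-version",
            ResourceVersion: "8639650", // placeholder: the version the first update returned
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        for ev := range w.ResultChan() {
            cm := ev.Object.(*corev1.ConfigMap)
            fmt.Println(ev.Type, cm.Name, cm.ResourceVersion)
        }
    }
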
May 2 14:31:15.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:31:15.940: INFO: namespace watch-9729 deletion completed in 6.170985777s • [SLOW TEST:11.683 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:31:15.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-62c00c59-1cef-4860-b454-876b1d8a29af STEP: Creating a pod to test consume configMaps May 2 14:31:16.090: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-79617e76-b1b2-40b9-9930-632f51ae590b" in namespace "projected-3053" to be "success or failure" May 2 14:31:16.107: INFO: Pod "pod-projected-configmaps-79617e76-b1b2-40b9-9930-632f51ae590b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.48217ms May 2 14:31:18.111: INFO: Pod "pod-projected-configmaps-79617e76-b1b2-40b9-9930-632f51ae590b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020508937s May 2 14:31:20.115: INFO: Pod "pod-projected-configmaps-79617e76-b1b2-40b9-9930-632f51ae590b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024575443s STEP: Saw pod success May 2 14:31:20.115: INFO: Pod "pod-projected-configmaps-79617e76-b1b2-40b9-9930-632f51ae590b" satisfied condition "success or failure" May 2 14:31:20.118: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-79617e76-b1b2-40b9-9930-632f51ae590b container projected-configmap-volume-test: STEP: delete the pod May 2 14:31:20.137: INFO: Waiting for pod pod-projected-configmaps-79617e76-b1b2-40b9-9930-632f51ae590b to disappear May 2 14:31:20.168: INFO: Pod pod-projected-configmaps-79617e76-b1b2-40b9-9930-632f51ae590b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:31:20.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3053" for this suite. 
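
The projected-configmap spec finishing here mounts a configMap through the projected volume plugin and runs the consuming container as a non-root user, which is what the [LinuxOnly] non-root qualifier asserts. A minimal sketch of such a pod object, assuming v1.15-era k8s.io/api types; the UID, image, command, and mount path are illustrative, not the test's actual values:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func nonRootProjectedPod() *corev1.Pod {
        uid := int64(1000) // illustrative non-root UID
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-nonroot"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                // Running as non-root is what distinguishes this spec from
                // the plain "consumable from pods in volume" variant.
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                Volumes: []corev1.Volume{{
                    Name: "projected-configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                ConfigMap: &corev1.ConfigMapProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{
                                        Name: "projected-configmap-test-volume", // illustrative
                                    },
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "projected-configmap-volume-test",
                    Image:   "busybox", // illustrative
                    Command: []string{"cat", "/etc/projected/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-configmap-volume",
                        MountPath: "/etc/projected",
                    }},
                }},
            },
        }
    }

    func main() { _ = nonRootProjectedPod() }
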
May 2 14:31:26.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:31:26.270: INFO: namespace projected-3053 deletion completed in 6.099014618s • [SLOW TEST:10.330 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:31:26.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:32:26.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2777" for this suite. 
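
The probe spec finishing here pins down an asymmetry worth remembering: a failing readiness probe only keeps the pod out of Ready (and out of Service endpoints), while restarts are the business of liveness probes, so the pod stays Running, never becomes Ready, and its restart count stays at zero for the whole observation window. A minimal sketch of a container with an always-failing exec readiness probe, assuming v1.15-era types (where the probe handler struct is named Handler; it was later renamed ProbeHandler); the image and timings are illustrative:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func neverReadyPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "never-ready"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "never-ready",
                    Image:   "busybox",                // illustrative
                    Command: []string{"sleep", "3600"}, // keep the container alive
                    ReadinessProbe: &corev1.Probe{
                        // /bin/false always exits non-zero, so every probe
                        // fails: the pod stays Running with Ready=false, and
                        // the kubelet never restarts it, because readiness
                        // gates traffic, not container lifetime.
                        Handler: corev1.Handler{
                            Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
                        },
                        InitialDelaySeconds: 5,
                        PeriodSeconds:       5,
                    },
                }},
            },
        }
    }

    func main() { _ = neverReadyPod() }
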
May 2 14:32:48.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:32:48.441: INFO: namespace container-probe-2777 deletion completed in 22.105285412s • [SLOW TEST:82.168 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:32:48.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-9711c5ba-6447-4ebe-aa8b-a40da36a16db STEP: Creating a pod to test consume configMaps May 2 14:32:48.508: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3a3a3d3f-3dea-409a-8e3a-1d07b8d96c19" in namespace "projected-9508" to be "success or failure" May 2 14:32:48.512: INFO: Pod "pod-projected-configmaps-3a3a3d3f-3dea-409a-8e3a-1d07b8d96c19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034073ms May 2 14:32:50.633: INFO: Pod "pod-projected-configmaps-3a3a3d3f-3dea-409a-8e3a-1d07b8d96c19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124681639s May 2 14:32:52.637: INFO: Pod "pod-projected-configmaps-3a3a3d3f-3dea-409a-8e3a-1d07b8d96c19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129013212s STEP: Saw pod success May 2 14:32:52.638: INFO: Pod "pod-projected-configmaps-3a3a3d3f-3dea-409a-8e3a-1d07b8d96c19" satisfied condition "success or failure" May 2 14:32:52.640: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-3a3a3d3f-3dea-409a-8e3a-1d07b8d96c19 container projected-configmap-volume-test: STEP: delete the pod May 2 14:32:52.664: INFO: Waiting for pod pod-projected-configmaps-3a3a3d3f-3dea-409a-8e3a-1d07b8d96c19 to disappear May 2 14:32:52.683: INFO: Pod pod-projected-configmaps-3a3a3d3f-3dea-409a-8e3a-1d07b8d96c19 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:32:52.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9508" for this suite. 
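
"Mappings and Item mode set" in the spec name means the configMap keys are not dumped 1:1 into the volume: each items entry remaps a key to a chosen file path and gives that file its own mode, overriding the volume-wide defaultMode. A minimal sketch of such a volume source, assuming v1.15-era types; the key, path, and mode are illustrative:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // projectedConfigMapWithItemMode builds the kind of volume source the
    // spec above exercises: the key "data-1" surfaces as the file
    // "path/to/data-2" with mode 0400 instead of the volume default.
    func projectedConfigMapWithItemMode() corev1.VolumeSource {
        mode := int32(0400) // illustrative per-item mode
        return corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "projected-configmap-test-volume-map",
                        },
                        Items: []corev1.KeyToPath{{
                            Key:  "data-1",         // illustrative key
                            Path: "path/to/data-2", // remapped file name
                            Mode: &mode,
                        }},
                    },
                }},
            },
        }
    }

    func main() { _ = projectedConfigMapWithItemMode() }

Note that once items is set, only the listed keys are projected; anything not remapped simply does not appear in the volume.
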
May 2 14:32:58.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:32:58.799: INFO: namespace projected-9508 deletion completed in 6.111895029s • [SLOW TEST:10.357 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 2 14:32:58.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs May 2 14:32:58.861: INFO: Waiting up to 5m0s for pod "pod-0e43ef44-c3dd-4bb2-bcc6-cbf80920a8bd" in namespace "emptydir-1867" to be "success or failure" May 2 14:32:58.874: INFO: Pod "pod-0e43ef44-c3dd-4bb2-bcc6-cbf80920a8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.028214ms May 2 14:33:00.879: INFO: Pod "pod-0e43ef44-c3dd-4bb2-bcc6-cbf80920a8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017430062s May 2 14:33:02.883: INFO: Pod "pod-0e43ef44-c3dd-4bb2-bcc6-cbf80920a8bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022095173s STEP: Saw pod success May 2 14:33:02.883: INFO: Pod "pod-0e43ef44-c3dd-4bb2-bcc6-cbf80920a8bd" satisfied condition "success or failure" May 2 14:33:02.887: INFO: Trying to get logs from node iruya-worker2 pod pod-0e43ef44-c3dd-4bb2-bcc6-cbf80920a8bd container test-container: STEP: delete the pod May 2 14:33:02.905: INFO: Waiting for pod pod-0e43ef44-c3dd-4bb2-bcc6-cbf80920a8bd to disappear May 2 14:33:02.925: INFO: Pod pod-0e43ef44-c3dd-4bb2-bcc6-cbf80920a8bd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 2 14:33:02.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1867" for this suite. 
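
An emptyDir with medium: Memory is backed by tmpfs rather than node disk, and the emptyDir spec wrapping up here asserts that the mount still shows the volume type and default mode bits the API promises. A minimal sketch of a pod that makes the same two observations, assuming v1.15-era types; the image and stat invocations are illustrative (the suite uses its own mounttest image for this):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func tmpfsEmptyDirPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // Medium: Memory backs the volume with tmpfs instead
                        // of node disk; contents still vanish with the pod.
                        EmptyDir: &corev1.EmptyDirVolumeSource{
                            Medium: corev1.StorageMediumMemory,
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox", // illustrative
                    // Print the mount's filesystem type and permission bits,
                    // the two things the spec checks ("tmpfs", expected mode).
                    Command: []string{"sh", "-c",
                        "stat -f -c %T /test-volume && stat -c %a /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                    }},
                }},
            },
        }
    }

    func main() { _ = tmpfsEmptyDirPod() }
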
May 2 14:33:08.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 14:33:09.077: INFO: namespace emptydir-1867 deletion completed in 6.148318832s • [SLOW TEST:10.277 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS
May 2 14:33:09.077: INFO: Running AfterSuite actions on all nodes May 2 14:33:09.077: INFO: Running AfterSuite actions on node 1 May 2 14:33:09.077: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 5844.517 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS