I0512 12:55:56.439277 6 e2e.go:243] Starting e2e run "ccb131b2-2f45-424e-9856-67b9421c922f" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589288155 - Will randomize all specs
Will run 215 of 4412 specs

May 12 12:55:56.641: INFO: >>> kubeConfig: /root/.kube/config
May 12 12:55:56.644: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 12 12:55:56.667: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 12 12:55:56.700: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 12 12:55:56.700: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 12 12:55:56.700: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 12 12:55:56.708: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 12 12:55:56.708: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 12 12:55:56.708: INFO: e2e test version: v1.15.11
May 12 12:55:56.709: INFO: kube-apiserver version: v1.15.7
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 12:55:56.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
May 12 12:55:56.768: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-mmlk
STEP: Creating a pod to test atomic-volume-subpath
May 12 12:55:56.782: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mmlk" in namespace "subpath-236" to be "success or failure"
May 12 12:55:56.786: INFO: Pod "pod-subpath-test-downwardapi-mmlk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287081ms
May 12 12:55:58.858: INFO: Pod "pod-subpath-test-downwardapi-mmlk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076496192s
May 12 12:56:00.863: INFO: Pod "pod-subpath-test-downwardapi-mmlk": Phase="Running", Reason="", readiness=true. Elapsed: 4.080890856s
May 12 12:56:02.867: INFO: Pod "pod-subpath-test-downwardapi-mmlk": Phase="Running", Reason="", readiness=true. Elapsed: 6.084827612s
May 12 12:56:04.870: INFO: Pod "pod-subpath-test-downwardapi-mmlk": Phase="Running", Reason="", readiness=true. Elapsed: 8.087843319s
May 12 12:56:06.883: INFO: Pod "pod-subpath-test-downwardapi-mmlk": Phase="Running", Reason="", readiness=true. Elapsed: 10.100938633s
May 12 12:56:08.887: INFO: Pod "pod-subpath-test-downwardapi-mmlk": Phase="Running", Reason="", readiness=true. Elapsed: 12.105056064s
May 12 12:56:10.891: INFO: Pod "pod-subpath-test-downwardapi-mmlk": Phase="Running", Reason="", readiness=true. Elapsed: 14.108939586s
May 12 12:56:13.092: INFO: Pod "pod-subpath-test-downwardapi-mmlk": Phase="Running", Reason="", readiness=true. Elapsed: 16.310139564s
May 12 12:56:15.096: INFO: Pod "pod-subpath-test-downwardapi-mmlk": Phase="Running", Reason="", readiness=true. Elapsed: 18.313957516s
May 12 12:56:17.099: INFO: Pod "pod-subpath-test-downwardapi-mmlk": Phase="Running", Reason="", readiness=true. Elapsed: 20.317672196s
May 12 12:56:19.104: INFO: Pod "pod-subpath-test-downwardapi-mmlk": Phase="Running", Reason="", readiness=true. Elapsed: 22.322030715s
May 12 12:56:21.107: INFO: Pod "pod-subpath-test-downwardapi-mmlk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.325651027s
STEP: Saw pod success
May 12 12:56:21.107: INFO: Pod "pod-subpath-test-downwardapi-mmlk" satisfied condition "success or failure"
May 12 12:56:21.110: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-mmlk container test-container-subpath-downwardapi-mmlk:
STEP: delete the pod
May 12 12:56:21.159: INFO: Waiting for pod pod-subpath-test-downwardapi-mmlk to disappear
May 12 12:56:21.174: INFO: Pod pod-subpath-test-downwardapi-mmlk no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-mmlk
May 12 12:56:21.174: INFO: Deleting pod "pod-subpath-test-downwardapi-mmlk" in namespace "subpath-236"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 12:56:21.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-236" for this suite.
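As an editorial aside: the subpath test above creates a pod that mounts a single downward API file through a `subPath` volume mount. A minimal sketch of that kind of manifest, built as a plain dict (the pod, volume, and mount names here are invented for illustration; the test generates its own names such as pod-subpath-test-downwardapi-mmlk):

```python
# Sketch of a pod mounting one downward API file via subPath.
# Field names follow the Kubernetes Pod API; concrete names are hypothetical.
def downward_subpath_pod(name="subpath-demo"):
    volume = {
        "name": "podinfo",
        "downwardAPI": {
            "items": [
                # Project the pod's own name into a file called "podname".
                {"path": "podname", "fieldRef": {"fieldPath": "metadata.name"}}
            ]
        },
    }
    container = {
        "name": "test-container",
        "image": "busybox",
        # With subPath, the mountPath is the single projected file itself,
        # not a directory containing the whole volume.
        "command": ["cat", "/volume-subpath"],
        "volumeMounts": [
            {"name": "podinfo", "mountPath": "/volume-subpath", "subPath": "podname"}
        ],
    }
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {"restartPolicy": "Never", "containers": [container], "volumes": [volume]},
    }

pod = downward_subpath_pod()
```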
May 12 12:56:27.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:56:27.252: INFO: namespace subpath-236 deletion completed in 6.071263368s

• [SLOW TEST:30.543 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 12:56:27.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 12 12:56:27.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4145'
May 12 12:56:30.840: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 12 12:56:30.841: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
May 12 12:56:30.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4145'
May 12 12:56:30.993: INFO: stderr: ""
May 12 12:56:30.993: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 12:56:30.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4145" for this suite.
May 12 12:56:37.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:56:37.079: INFO: namespace kubectl-4145 deletion completed in 6.083344619s

• [SLOW TEST:9.827 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 12:56:37.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
May 12 12:56:37.166: INFO: Waiting up to 5m0s for pod "client-containers-799cb7da-4811-4066-b1e4-20ea5ce8ba16" in namespace "containers-1018" to be "success or failure"
May 12 12:56:37.169: INFO: Pod "client-containers-799cb7da-4811-4066-b1e4-20ea5ce8ba16": Phase="Pending", Reason="", readiness=false. Elapsed: 3.40455ms
May 12 12:56:39.174: INFO: Pod "client-containers-799cb7da-4811-4066-b1e4-20ea5ce8ba16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00783897s
May 12 12:56:41.178: INFO: Pod "client-containers-799cb7da-4811-4066-b1e4-20ea5ce8ba16": Phase="Running", Reason="", readiness=true. Elapsed: 4.011655081s
May 12 12:56:43.181: INFO: Pod "client-containers-799cb7da-4811-4066-b1e4-20ea5ce8ba16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015005938s
STEP: Saw pod success
May 12 12:56:43.181: INFO: Pod "client-containers-799cb7da-4811-4066-b1e4-20ea5ce8ba16" satisfied condition "success or failure"
May 12 12:56:43.184: INFO: Trying to get logs from node iruya-worker2 pod client-containers-799cb7da-4811-4066-b1e4-20ea5ce8ba16 container test-container:
STEP: delete the pod
May 12 12:56:43.215: INFO: Waiting for pod client-containers-799cb7da-4811-4066-b1e4-20ea5ce8ba16 to disappear
May 12 12:56:43.236: INFO: Pod client-containers-799cb7da-4811-4066-b1e4-20ea5ce8ba16 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 12:56:43.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1018" for this suite.
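The Docker Containers test above verifies that setting `command` on a container replaces the image's built-in ENTRYPOINT. A rough sketch of such a pod spec (image, names, and the echoed string are illustrative, not the test's actual values):

```python
# Sketch: overriding an image's default command (ENTRYPOINT) in the Pod API.
# `command` replaces the image ENTRYPOINT; `args` would replace the image CMD.
# All concrete names here are hypothetical.
container = {
    "name": "test-container",
    "image": "busybox",
    # This list replaces whatever ENTRYPOINT the image was built with.
    "command": ["/bin/echo", "hello from the overridden entrypoint"],
}
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-demo"},
    "spec": {"restartPolicy": "Never", "containers": [container]},
}
```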
May 12 12:56:49.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:56:49.404: INFO: namespace containers-1018 deletion completed in 6.164800784s

• [SLOW TEST:12.325 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 12:56:49.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 12:56:50.267: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 12 12:56:50.416: INFO: Number of nodes with available pods: 0
May 12 12:56:50.416: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 12 12:56:50.637: INFO: Number of nodes with available pods: 0
May 12 12:56:50.637: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:56:51.641: INFO: Number of nodes with available pods: 0
May 12 12:56:51.641: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:56:52.799: INFO: Number of nodes with available pods: 0
May 12 12:56:52.799: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:56:53.640: INFO: Number of nodes with available pods: 0
May 12 12:56:53.640: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:56:54.727: INFO: Number of nodes with available pods: 0
May 12 12:56:54.727: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:56:55.641: INFO: Number of nodes with available pods: 0
May 12 12:56:55.641: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:56:56.641: INFO: Number of nodes with available pods: 1
May 12 12:56:56.641: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 12 12:56:56.734: INFO: Number of nodes with available pods: 1
May 12 12:56:56.734: INFO: Number of running nodes: 0, number of available pods: 1
May 12 12:56:57.738: INFO: Number of nodes with available pods: 0
May 12 12:56:57.739: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 12 12:56:57.777: INFO: Number of nodes with available pods: 0
May 12 12:56:57.777: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:56:59.039: INFO: Number of nodes with available pods: 0
May 12 12:56:59.039: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:56:59.780: INFO: Number of nodes with available pods: 0
May 12 12:56:59.780: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:57:00.781: INFO: Number of nodes with available pods: 0
May 12 12:57:00.781: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:57:01.780: INFO: Number of nodes with available pods: 0
May 12 12:57:01.780: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:57:02.884: INFO: Number of nodes with available pods: 0
May 12 12:57:02.884: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:57:03.781: INFO: Number of nodes with available pods: 0
May 12 12:57:03.781: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:57:04.811: INFO: Number of nodes with available pods: 0
May 12 12:57:04.811: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:57:05.781: INFO: Number of nodes with available pods: 0
May 12 12:57:05.781: INFO: Node iruya-worker is running more than one daemon pod
May 12 12:57:06.781: INFO: Number of nodes with available pods: 1
May 12 12:57:06.781: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8168, will wait for the garbage collector to delete the pods
May 12 12:57:06.991: INFO: Deleting DaemonSet.extensions daemon-set took: 153.611667ms
May 12 12:57:07.291: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.247965ms
May 12 12:57:10.994: INFO: Number of nodes with available pods: 0
May 12 12:57:10.994: INFO: Number of running nodes: 0, number of available pods: 0
May 12 12:57:10.999: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8168/daemonsets","resourceVersion":"10480834"},"items":null}
May 12 12:57:11.001: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8168/pods","resourceVersion":"10480834"},"items":null}
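The DaemonSet test above drives scheduling entirely through a node selector on the pod template: relabeling a node to match attracts a daemon pod, relabeling it away evicts the pod, and partway through the test the update strategy is switched to RollingUpdate. A sketch of the shape of such a DaemonSet (the label key/value and image are invented; the real test generates its own labels):

```python
# Sketch of a DaemonSet whose pods only schedule onto labeled nodes.
# "color": "blue" mirrors the blue/green relabeling in the log; names and the
# image are hypothetical, not the test's actual spec.
daemon_set = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "daemon-set"},
    "spec": {
        "selector": {"matchLabels": {"daemonset-name": "daemon-set"}},
        # The strategy change exercised midway through the test.
        "updateStrategy": {"type": "RollingUpdate"},
        "template": {
            "metadata": {"labels": {"daemonset-name": "daemon-set"}},
            "spec": {
                # Daemon pods are only created on nodes matching this selector,
                # so relabeling a node to "green" unschedules the "blue" pod.
                "nodeSelector": {"color": "blue"},
                "containers": [{"name": "app", "image": "nginx:1.14-alpine"}],
            },
        },
    },
}
```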
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 12:57:11.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8168" for this suite.
May 12 12:57:17.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:57:17.178: INFO: namespace daemonsets-8168 deletion completed in 6.1015759s

• [SLOW TEST:27.774 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 12:57:17.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-6aee6bb5-fa21-467e-bb27-41445a658178
STEP: Creating a pod to test consume configMaps
May 12 12:57:17.328: INFO: Waiting up to 5m0s for pod "pod-configmaps-2ca7fc6a-b388-4486-9f17-bb46bed3528b" in namespace "configmap-5773" to be "success or failure"
May 12 12:57:17.348: INFO: Pod "pod-configmaps-2ca7fc6a-b388-4486-9f17-bb46bed3528b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.337996ms
May 12 12:57:19.561: INFO: Pod "pod-configmaps-2ca7fc6a-b388-4486-9f17-bb46bed3528b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233256504s
May 12 12:57:21.564: INFO: Pod "pod-configmaps-2ca7fc6a-b388-4486-9f17-bb46bed3528b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23619271s
May 12 12:57:23.567: INFO: Pod "pod-configmaps-2ca7fc6a-b388-4486-9f17-bb46bed3528b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.239733223s
STEP: Saw pod success
May 12 12:57:23.567: INFO: Pod "pod-configmaps-2ca7fc6a-b388-4486-9f17-bb46bed3528b" satisfied condition "success or failure"
May 12 12:57:23.570: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-2ca7fc6a-b388-4486-9f17-bb46bed3528b container configmap-volume-test:
STEP: delete the pod
May 12 12:57:23.682: INFO: Waiting for pod pod-configmaps-2ca7fc6a-b388-4486-9f17-bb46bed3528b to disappear
May 12 12:57:23.686: INFO: Pod pod-configmaps-2ca7fc6a-b388-4486-9f17-bb46bed3528b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 12:57:23.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5773" for this suite.
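The ConfigMap test above exercises "mappings and Item mode": instead of projecting every key of the ConfigMap, the volume lists specific `items`, each mapped to a chosen relative path with an explicit file mode. A sketch of that volume stanza (key names and paths are invented for illustration):

```python
# Sketch of a ConfigMap volume with item mappings and an explicit file mode.
# Key/path names are hypothetical; the test generates its own ConfigMap name.
volume = {
    "name": "configmap-volume",
    "configMap": {
        "name": "configmap-test-volume-map",
        "items": [
            # Only listed keys are projected, at the given relative paths.
            # 0o400 (decimal 256) makes the projected file owner-read-only.
            {"key": "data-1", "path": "path/to/data-2", "mode": 0o400}
        ],
    },
}
```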
May 12 12:57:29.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:57:29.774: INFO: namespace configmap-5773 deletion completed in 6.083931521s

• [SLOW TEST:12.596 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 12:57:29.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 12 12:57:30.107: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 12:57:42.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2152" for this suite.
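The InitContainer test above ("initContainers in spec.initContainers") relies on the ordering guarantee of init containers: they run one at a time, in list order, and each must exit successfully before the next starts; only then do the regular containers run. A sketch of a RestartNever pod with that shape (names and images are invented):

```python
# Sketch of a RestartNever pod with ordered init containers.
# Names/images are hypothetical; the guarantee shown is that initContainers
# complete sequentially before any entry in `containers` starts.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-init-demo"},
    "spec": {
        "restartPolicy": "Never",
        "initContainers": [
            {"name": "init1", "image": "busybox", "command": ["/bin/true"]},
            {"name": "init2", "image": "busybox", "command": ["/bin/true"]},
        ],
        "containers": [
            {"name": "run1", "image": "busybox", "command": ["/bin/true"]},
        ],
    },
}
```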
May 12 12:57:48.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:57:49.004: INFO: namespace init-container-2152 deletion completed in 6.113936491s

• [SLOW TEST:19.230 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 12:57:49.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-9d6e53bb-50d9-4b98-ac37-a133534b7632
STEP: Creating secret with name secret-projected-all-test-volume-3c743e08-4c41-4604-807e-8ac91a313fa0
STEP: Creating a pod to test Check all projections for projected volume plugin
May 12 12:57:49.120: INFO: Waiting up to 5m0s for pod "projected-volume-a1296b59-f3b6-4367-bd23-9317f7415457" in namespace "projected-1087" to be "success or failure"
May 12 12:57:49.124: INFO: Pod "projected-volume-a1296b59-f3b6-4367-bd23-9317f7415457": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328106ms
May 12 12:57:51.129: INFO: Pod "projected-volume-a1296b59-f3b6-4367-bd23-9317f7415457": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009047099s
May 12 12:57:53.132: INFO: Pod "projected-volume-a1296b59-f3b6-4367-bd23-9317f7415457": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012677935s
STEP: Saw pod success
May 12 12:57:53.132: INFO: Pod "projected-volume-a1296b59-f3b6-4367-bd23-9317f7415457" satisfied condition "success or failure"
May 12 12:57:53.135: INFO: Trying to get logs from node iruya-worker pod projected-volume-a1296b59-f3b6-4367-bd23-9317f7415457 container projected-all-volume-test:
STEP: delete the pod
May 12 12:57:53.402: INFO: Waiting for pod projected-volume-a1296b59-f3b6-4367-bd23-9317f7415457 to disappear
May 12 12:57:53.723: INFO: Pod projected-volume-a1296b59-f3b6-4367-bd23-9317f7415457 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 12:57:53.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1087" for this suite.
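The "Projected combined" test above creates a ConfigMap, a Secret, and a downward API projection, then mounts all three through a single projected volume. A sketch of a volume combining those sources (all names and paths are invented; the test generates UUID-suffixed names):

```python
# Sketch of a projected volume combining the three sources the test covers
# in one mount: configMap, secret, and downward API. Names are hypothetical.
volume = {
    "name": "projected-all",
    "projected": {
        "sources": [
            {"configMap": {"name": "cm-demo",
                           "items": [{"key": "cfg", "path": "cm/cfg"}]}},
            {"secret": {"name": "secret-demo",
                        "items": [{"key": "pw", "path": "secret/pw"}]}},
            {"downwardAPI": {"items": [
                {"path": "podname",
                 "fieldRef": {"fieldPath": "metadata.name"}}]}},
        ]
    },
}
```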
May 12 12:57:59.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:57:59.880: INFO: namespace projected-1087 deletion completed in 6.154068219s

• [SLOW TEST:10.876 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 12:57:59.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0512 12:58:09.984610 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 12:58:09.984: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 12:58:09.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8063" for this suite.
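The garbage-collector test above depends on the ownership link between a ReplicationController and its pods: each pod carries an `ownerReferences` entry pointing back at the RC, so deleting the RC without orphaning lets the garbage collector delete every dependent. A sketch of that metadata link (names and the UID are placeholders, not values from this run):

```python
# Sketch of the ownerReference a controller-created pod carries. When the
# owner is deleted without orphaning, the GC removes all objects whose
# ownerReferences match the owner's UID. Name/UID below are placeholders.
owner_ref = {
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "name": "simpletest-rc",
    "uid": "00000000-0000-0000-0000-000000000000",  # placeholder UID
    # controller=True marks the managing controller; blockOwnerDeletion
    # delays foreground deletion of the owner until this dependent is gone.
    "controller": True,
    "blockOwnerDeletion": True,
}
pod_metadata = {
    "name": "simpletest-rc-abcde",
    "ownerReferences": [owner_ref],
}
```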
May 12 12:58:16.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:58:16.136: INFO: namespace gc-8063 deletion completed in 6.147878417s

• [SLOW TEST:16.255 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 12:58:16.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 12 12:58:21.614: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 12:58:21.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6069" for this suite.
May 12 12:58:27.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:58:27.725: INFO: namespace container-runtime-6069 deletion completed in 6.09501524s

• [SLOW TEST:11.588 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 12:58:27.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 12 12:58:33.932: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 12:58:33.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1238" for this suite.
May 12 12:58:39.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 12:58:40.041: INFO: namespace container-runtime-1238 deletion completed in 6.087087471s

• [SLOW TEST:12.316 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 12:58:40.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 12:58:40.320: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 5.310275ms)
May 12 12:58:40.348: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 28.839711ms)
May 12 12:58:40.384: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 35.900012ms)
May 12 12:58:40.388: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.687396ms)
May 12 12:58:40.391: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.121339ms)
May 12 12:58:40.394: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.110072ms)
May 12 12:58:40.398: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.35159ms)
May 12 12:58:40.400: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.648231ms)
May 12 12:58:40.403: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.988046ms)
May 12 12:58:40.406: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.942909ms)
May 12 12:58:40.409: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.803703ms)
May 12 12:58:40.443: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 33.372998ms)
May 12 12:58:40.446: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.410633ms)
May 12 12:58:40.449: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.013229ms)
May 12 12:58:40.452: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.855003ms)
May 12 12:58:40.455: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.101286ms)
May 12 12:58:40.458: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.906417ms)
May 12 12:58:40.461: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.642381ms)
May 12 12:58:40.463: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.641297ms)
May 12 12:58:40.466: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.637171ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 12:58:40.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1408" for this suite. May 12 12:58:46.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 12:58:46.580: INFO: namespace proxy-1408 deletion completed in 6.110457141s • [SLOW TEST:6.538 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 12:58:46.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted May 12 12:59:01.122: INFO: 5 pods remaining May 12 12:59:01.122: INFO: 5 pods has nil DeletionTimestamp May 12 12:59:01.122: INFO: STEP: Gathering metrics W0512 12:59:05.321840 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 12:59:05.321: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 12:59:05.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4534" for this suite. 
May 12 12:59:19.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 12:59:19.666: INFO: namespace gc-4534 deletion completed in 14.341818166s • [SLOW TEST:33.086 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 12:59:19.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 12:59:20.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7093" for this suite. 
May 12 12:59:43.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 12:59:43.178: INFO: namespace pods-7093 deletion completed in 22.308798164s • [SLOW TEST:23.512 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 12:59:43.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 12 12:59:58.494: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 12:59:58.803: INFO: Pod pod-with-poststart-exec-hook still exists May 12 13:00:00.803: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 13:00:00.807: INFO: Pod pod-with-poststart-exec-hook still exists May 12 13:00:02.803: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 13:00:02.806: INFO: Pod pod-with-poststart-exec-hook still exists May 12 13:00:04.803: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 13:00:04.807: INFO: Pod pod-with-poststart-exec-hook still exists May 12 13:00:06.803: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 13:00:06.807: INFO: Pod pod-with-poststart-exec-hook still exists May 12 13:00:08.803: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 13:00:08.808: INFO: Pod pod-with-poststart-exec-hook still exists May 12 13:00:10.803: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 13:00:10.807: INFO: Pod pod-with-poststart-exec-hook still exists May 12 13:00:12.803: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 13:00:12.807: INFO: Pod pod-with-poststart-exec-hook still exists May 12 13:00:14.803: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 13:00:14.808: INFO: Pod pod-with-poststart-exec-hook still exists May 12 13:00:16.803: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 13:00:16.806: INFO: Pod pod-with-poststart-exec-hook still exists May 12 13:00:18.803: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 13:00:18.806: INFO: Pod 
pod-with-poststart-exec-hook still exists May 12 13:00:20.803: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 13:00:20.807: INFO: Pod pod-with-poststart-exec-hook still exists May 12 13:00:22.803: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 13:00:22.807: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:00:22.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2582" for this suite. May 12 13:00:48.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:00:48.892: INFO: namespace container-lifecycle-hook-2582 deletion completed in 26.081385774s • [SLOW TEST:65.713 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:00:48.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned 
in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 12 13:00:53.143: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-1d7592dc-e200-42ef-9034-549578639342,GenerateName:,Namespace:events-8511,SelfLink:/api/v1/namespaces/events-8511/pods/send-events-1d7592dc-e200-42ef-9034-549578639342,UID:14ae9934-ef1b-4785-b201-92e6f2ea182b,ResourceVersion:10481727,Generation:0,CreationTimestamp:2020-05-12 13:00:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 92964749,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6g8xr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6g8xr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-6g8xr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023d4760} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023d4780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:00:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:00:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:00:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:00:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.11,StartTime:2020-05-12 13:00:49 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-12 13:00:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://5348f34b23ab31d1bd0a66e76513a281721da2c216b6a9d3f03b9e3d10efce5e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 12 13:00:55.148: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 12 13:00:57.152: INFO: Saw kubelet event for our pod. 
STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:00:57.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8511" for this suite. May 12 13:01:37.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:01:37.289: INFO: namespace events-8511 deletion completed in 40.117637282s • [SLOW TEST:48.397 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:01:37.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:02:37.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1538" for this suite. May 12 13:02:59.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:02:59.568: INFO: namespace container-probe-1538 deletion completed in 22.154108209s • [SLOW TEST:82.278 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:02:59.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container May 12 13:03:09.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-68e679a8-8bca-463e-a5df-bf8fd99452ca -c busybox-main-container --namespace=emptydir-8831 -- cat
/usr/share/volumeshare/shareddata.txt' May 12 13:03:10.161: INFO: stderr: "I0512 13:03:10.104801 84 log.go:172] (0xc000922370) (0xc0005cc8c0) Create stream\nI0512 13:03:10.104868 84 log.go:172] (0xc000922370) (0xc0005cc8c0) Stream added, broadcasting: 1\nI0512 13:03:10.107289 84 log.go:172] (0xc000922370) Reply frame received for 1\nI0512 13:03:10.107332 84 log.go:172] (0xc000922370) (0xc000808000) Create stream\nI0512 13:03:10.107347 84 log.go:172] (0xc000922370) (0xc000808000) Stream added, broadcasting: 3\nI0512 13:03:10.108091 84 log.go:172] (0xc000922370) Reply frame received for 3\nI0512 13:03:10.108117 84 log.go:172] (0xc000922370) (0xc0003f4000) Create stream\nI0512 13:03:10.108123 84 log.go:172] (0xc000922370) (0xc0003f4000) Stream added, broadcasting: 5\nI0512 13:03:10.108790 84 log.go:172] (0xc000922370) Reply frame received for 5\nI0512 13:03:10.156754 84 log.go:172] (0xc000922370) Data frame received for 3\nI0512 13:03:10.156772 84 log.go:172] (0xc000808000) (3) Data frame handling\nI0512 13:03:10.156791 84 log.go:172] (0xc000922370) Data frame received for 5\nI0512 13:03:10.156815 84 log.go:172] (0xc0003f4000) (5) Data frame handling\nI0512 13:03:10.156831 84 log.go:172] (0xc000808000) (3) Data frame sent\nI0512 13:03:10.156837 84 log.go:172] (0xc000922370) Data frame received for 3\nI0512 13:03:10.156840 84 log.go:172] (0xc000808000) (3) Data frame handling\nI0512 13:03:10.157785 84 log.go:172] (0xc000922370) Data frame received for 1\nI0512 13:03:10.157834 84 log.go:172] (0xc0005cc8c0) (1) Data frame handling\nI0512 13:03:10.157848 84 log.go:172] (0xc0005cc8c0) (1) Data frame sent\nI0512 13:03:10.157860 84 log.go:172] (0xc000922370) (0xc0005cc8c0) Stream removed, broadcasting: 1\nI0512 13:03:10.157872 84 log.go:172] (0xc000922370) Go away received\nI0512 13:03:10.158142 84 log.go:172] (0xc000922370) (0xc0005cc8c0) Stream removed, broadcasting: 1\nI0512 13:03:10.158158 84 log.go:172] (0xc000922370) (0xc000808000) Stream removed, broadcasting: 
3\nI0512 13:03:10.158170 84 log.go:172] (0xc000922370) (0xc0003f4000) Stream removed, broadcasting: 5\n" May 12 13:03:10.161: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:03:10.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8831" for this suite. May 12 13:03:18.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:03:18.649: INFO: namespace emptydir-8831 deletion completed in 8.350634592s • [SLOW TEST:19.081 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:03:18.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 13:03:19.566: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38204689-fc1a-4430-90d4-bf9af650b54d" in namespace "downward-api-6803" to be "success or failure" May 12 13:03:19.607: INFO: Pod "downwardapi-volume-38204689-fc1a-4430-90d4-bf9af650b54d": Phase="Pending", Reason="", readiness=false. Elapsed: 40.576262ms May 12 13:03:22.232: INFO: Pod "downwardapi-volume-38204689-fc1a-4430-90d4-bf9af650b54d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.665602024s May 12 13:03:24.236: INFO: Pod "downwardapi-volume-38204689-fc1a-4430-90d4-bf9af650b54d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.669493836s May 12 13:03:26.239: INFO: Pod "downwardapi-volume-38204689-fc1a-4430-90d4-bf9af650b54d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.672687006s STEP: Saw pod success May 12 13:03:26.239: INFO: Pod "downwardapi-volume-38204689-fc1a-4430-90d4-bf9af650b54d" satisfied condition "success or failure" May 12 13:03:26.241: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-38204689-fc1a-4430-90d4-bf9af650b54d container client-container: STEP: delete the pod May 12 13:03:26.407: INFO: Waiting for pod downwardapi-volume-38204689-fc1a-4430-90d4-bf9af650b54d to disappear May 12 13:03:26.433: INFO: Pod downwardapi-volume-38204689-fc1a-4430-90d4-bf9af650b54d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:03:26.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6803" for this suite. 
May 12 13:03:32.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:03:32.633: INFO: namespace downward-api-6803 deletion completed in 6.196699893s
• [SLOW TEST:13.983 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:03:32.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-194a8a1b-dd5a-4bd7-acbc-177274baccd7
STEP: Creating secret with name s-test-opt-upd-45a9f904-e9c5-401f-baf7-a2a8b52b399f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-194a8a1b-dd5a-4bd7-acbc-177274baccd7
STEP: Updating secret s-test-opt-upd-45a9f904-e9c5-401f-baf7-a2a8b52b399f
STEP: Creating secret with name s-test-opt-create-1dc21e81-2d56-4f45-8a85-afc02a6a93ba
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:04:55.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6441" for this suite.
May 12 13:05:17.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:05:17.335: INFO: namespace secrets-6441 deletion completed in 22.153489019s
• [SLOW TEST:104.701 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:05:17.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:06:00.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5244" for this suite.
May 12 13:06:06.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:06:06.873: INFO: namespace container-runtime-5244 deletion completed in 6.662098601s
• [SLOW TEST:49.537 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:06:06.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 12 13:06:14.695: INFO: Successfully updated pod "labelsupdate053ffda8-220e-4e8c-b37d-2b116fcbd274"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:06:16.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8104" for this suite.
May 12 13:06:38.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:06:38.943: INFO: namespace projected-8104 deletion completed in 22.11602541s
• [SLOW TEST:32.070 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:06:38.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 12 13:06:44.570: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:06:44.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2642" for this suite.
May 12 13:06:50.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:06:50.728: INFO: namespace container-runtime-2642 deletion completed in 6.089063638s
• [SLOW TEST:11.785 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:06:50.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-917
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-917 to expose endpoints map[]
May 12 13:06:51.041: INFO: Get endpoints failed (27.078138ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 12 13:06:52.044: INFO: successfully validated that service multi-endpoint-test in namespace services-917 exposes endpoints map[] (1.029890015s elapsed)
STEP: Creating pod pod1 in namespace services-917
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-917 to expose endpoints map[pod1:[100]]
May 12 13:06:56.551: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.5018887s elapsed, will retry)
May 12 13:06:57.603: INFO: successfully validated that service multi-endpoint-test in namespace services-917 exposes endpoints map[pod1:[100]] (5.553453312s elapsed)
STEP: Creating pod pod2 in namespace services-917
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-917 to expose endpoints map[pod1:[100] pod2:[101]]
May 12 13:07:01.993: INFO: successfully validated that service multi-endpoint-test in namespace services-917 exposes endpoints map[pod1:[100] pod2:[101]] (4.387114376s elapsed)
STEP: Deleting pod pod1 in namespace services-917
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-917 to expose endpoints map[pod2:[101]]
May 12 13:07:03.018: INFO: successfully validated that service multi-endpoint-test in namespace services-917 exposes endpoints map[pod2:[101]] (1.021395331s elapsed)
STEP: Deleting pod pod2 in namespace services-917
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-917 to expose endpoints map[]
May 12 13:07:04.110: INFO: successfully validated that service multi-endpoint-test in namespace services-917 exposes endpoints map[] (1.087509428s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:07:04.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-917" for this suite.
May 12 13:07:26.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:07:27.001: INFO: namespace services-917 deletion completed in 22.376066108s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:36.273 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:07:27.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-3d8d6eeb-5f6e-4262-a32f-c143e2ce023c
STEP: Creating a pod to test consume secrets
May 12 13:07:27.432: INFO: Waiting up to 5m0s for pod "pod-secrets-15a100e0-b718-4d02-9e9d-d8ccc7185135" in namespace "secrets-709" to be "success or failure"
May 12 13:07:27.470: INFO: Pod "pod-secrets-15a100e0-b718-4d02-9e9d-d8ccc7185135": Phase="Pending", Reason="", readiness=false. Elapsed: 38.192637ms
May 12 13:07:29.472: INFO: Pod "pod-secrets-15a100e0-b718-4d02-9e9d-d8ccc7185135": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040566868s
May 12 13:07:31.476: INFO: Pod "pod-secrets-15a100e0-b718-4d02-9e9d-d8ccc7185135": Phase="Running", Reason="", readiness=true. Elapsed: 4.044713259s
May 12 13:07:33.481: INFO: Pod "pod-secrets-15a100e0-b718-4d02-9e9d-d8ccc7185135": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049144937s
STEP: Saw pod success
May 12 13:07:33.481: INFO: Pod "pod-secrets-15a100e0-b718-4d02-9e9d-d8ccc7185135" satisfied condition "success or failure"
May 12 13:07:33.483: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-15a100e0-b718-4d02-9e9d-d8ccc7185135 container secret-volume-test:
STEP: delete the pod
May 12 13:07:33.520: INFO: Waiting for pod pod-secrets-15a100e0-b718-4d02-9e9d-d8ccc7185135 to disappear
May 12 13:07:33.535: INFO: Pod pod-secrets-15a100e0-b718-4d02-9e9d-d8ccc7185135 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:07:33.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-709" for this suite.
May 12 13:07:39.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:07:39.622: INFO: namespace secrets-709 deletion completed in 6.082432741s
• [SLOW TEST:12.620 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:07:39.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3120/configmap-test-62d1062a-6e7b-48c3-9c9e-537635086d12
STEP: Creating a pod to test consume configMaps
May 12 13:07:39.730: INFO: Waiting up to 5m0s for pod "pod-configmaps-95881ec3-e974-4835-8a65-293b956bc770" in namespace "configmap-3120" to be "success or failure"
May 12 13:07:39.745: INFO: Pod "pod-configmaps-95881ec3-e974-4835-8a65-293b956bc770": Phase="Pending", Reason="", readiness=false. Elapsed: 14.847261ms
May 12 13:07:41.748: INFO: Pod "pod-configmaps-95881ec3-e974-4835-8a65-293b956bc770": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018410411s
May 12 13:07:43.769: INFO: Pod "pod-configmaps-95881ec3-e974-4835-8a65-293b956bc770": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038809984s
STEP: Saw pod success
May 12 13:07:43.769: INFO: Pod "pod-configmaps-95881ec3-e974-4835-8a65-293b956bc770" satisfied condition "success or failure"
May 12 13:07:43.771: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-95881ec3-e974-4835-8a65-293b956bc770 container env-test:
STEP: delete the pod
May 12 13:07:43.815: INFO: Waiting for pod pod-configmaps-95881ec3-e974-4835-8a65-293b956bc770 to disappear
May 12 13:07:44.002: INFO: Pod pod-configmaps-95881ec3-e974-4835-8a65-293b956bc770 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:07:44.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3120" for this suite.
May 12 13:07:50.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:07:50.182: INFO: namespace configmap-3120 deletion completed in 6.176279866s
• [SLOW TEST:10.560 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:07:50.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-b9c2aa49-a32f-4127-994c-10fc32df43d9
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-b9c2aa49-a32f-4127-994c-10fc32df43d9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:07:56.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4988" for this suite.
May 12 13:08:21.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:08:21.270: INFO: namespace configmap-4988 deletion completed in 24.625615411s
• [SLOW TEST:31.087 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Deployment
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:08:21.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 13:08:21.441: INFO: Pod name cleanup-pod: Found 0 pods out of 1
May 12 13:08:26.464: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 12 13:08:26.464: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 12 13:08:26.550: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7142,SelfLink:/apis/apps/v1/namespaces/deployment-7142/deployments/test-cleanup-deployment,UID:c2aefe5f-e24f-4eb1-ad3b-d236aec4f997,ResourceVersion:10483017,Generation:1,CreationTimestamp:2020-05-12 13:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
May 12 13:08:26.615: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-7142,SelfLink:/apis/apps/v1/namespaces/deployment-7142/replicasets/test-cleanup-deployment-55bbcbc84c,UID:b8363323-ca1d-4083-bbc3-cbd78a6bfae4,ResourceVersion:10483019,Generation:1,CreationTimestamp:2020-05-12 13:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c2aefe5f-e24f-4eb1-ad3b-d236aec4f997 0xc0030196c7 0xc0030196c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
May 12 13:08:26.615: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
May 12 13:08:26.615: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7142,SelfLink:/apis/apps/v1/namespaces/deployment-7142/replicasets/test-cleanup-controller,UID:0e9b3939-7141-4092-821b-a00d870b1da1,ResourceVersion:10483018,Generation:1,CreationTimestamp:2020-05-12 13:08:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c2aefe5f-e24f-4eb1-ad3b-d236aec4f997 0xc0030195f7 0xc0030195f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
May 12 13:08:26.760: INFO: Pod "test-cleanup-controller-kr427" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-kr427,GenerateName:test-cleanup-controller-,Namespace:deployment-7142,SelfLink:/api/v1/namespaces/deployment-7142/pods/test-cleanup-controller-kr427,UID:72470a2e-2e04-4e96-b178-23c5a43a11bb,ResourceVersion:10483014,Generation:0,CreationTimestamp:2020-05-12 13:08:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 0e9b3939-7141-4092-821b-a00d870b1da1 0xc003019f97 0xc003019f98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dlgvc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dlgvc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dlgvc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00289e010} {node.kubernetes.io/unreachable Exists NoExecute 0xc00289e030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:08:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:08:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:08:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:08:21 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.13,StartTime:2020-05-12 13:08:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 13:08:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6a3f39620035e3460e3aa15213f5a444f86e47acaa0aafa202bb71a2cbedeccd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 12 13:08:26.761: INFO: Pod "test-cleanup-deployment-55bbcbc84c-nx47b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-nx47b,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-7142,SelfLink:/api/v1/namespaces/deployment-7142/pods/test-cleanup-deployment-55bbcbc84c-nx47b,UID:0788e7db-67cb-4777-bb53-e21b49c9b76b,ResourceVersion:10483024,Generation:0,CreationTimestamp:2020-05-12 13:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c b8363323-ca1d-4083-bbc3-cbd78a6bfae4 0xc00289e117 0xc00289e118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dlgvc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dlgvc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-dlgvc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00289e190} {node.kubernetes.io/unreachable Exists NoExecute 0xc00289e1b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:08:26 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:08:26.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7142" for this suite.
May 12 13:08:37.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:08:37.179: INFO: namespace deployment-7142 deletion completed in 10.376758061s • [SLOW TEST:15.909 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:08:37.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2769 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 13:08:38.077: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 13:09:08.364: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.15 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2769 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 
13:09:08.364: INFO: >>> kubeConfig: /root/.kube/config I0512 13:09:08.395080 6 log.go:172] (0xc0005b8c60) (0xc0019123c0) Create stream I0512 13:09:08.395106 6 log.go:172] (0xc0005b8c60) (0xc0019123c0) Stream added, broadcasting: 1 I0512 13:09:08.398542 6 log.go:172] (0xc0005b8c60) Reply frame received for 1 I0512 13:09:08.398577 6 log.go:172] (0xc0005b8c60) (0xc0021788c0) Create stream I0512 13:09:08.398588 6 log.go:172] (0xc0005b8c60) (0xc0021788c0) Stream added, broadcasting: 3 I0512 13:09:08.399323 6 log.go:172] (0xc0005b8c60) Reply frame received for 3 I0512 13:09:08.399356 6 log.go:172] (0xc0005b8c60) (0xc001bb2140) Create stream I0512 13:09:08.399370 6 log.go:172] (0xc0005b8c60) (0xc001bb2140) Stream added, broadcasting: 5 I0512 13:09:08.399996 6 log.go:172] (0xc0005b8c60) Reply frame received for 5 I0512 13:09:09.454163 6 log.go:172] (0xc0005b8c60) Data frame received for 5 I0512 13:09:09.454202 6 log.go:172] (0xc001bb2140) (5) Data frame handling I0512 13:09:09.454225 6 log.go:172] (0xc0005b8c60) Data frame received for 3 I0512 13:09:09.454238 6 log.go:172] (0xc0021788c0) (3) Data frame handling I0512 13:09:09.454253 6 log.go:172] (0xc0021788c0) (3) Data frame sent I0512 13:09:09.454394 6 log.go:172] (0xc0005b8c60) Data frame received for 3 I0512 13:09:09.454410 6 log.go:172] (0xc0021788c0) (3) Data frame handling I0512 13:09:09.456376 6 log.go:172] (0xc0005b8c60) Data frame received for 1 I0512 13:09:09.456396 6 log.go:172] (0xc0019123c0) (1) Data frame handling I0512 13:09:09.456417 6 log.go:172] (0xc0019123c0) (1) Data frame sent I0512 13:09:09.456434 6 log.go:172] (0xc0005b8c60) (0xc0019123c0) Stream removed, broadcasting: 1 I0512 13:09:09.456518 6 log.go:172] (0xc0005b8c60) Go away received I0512 13:09:09.456578 6 log.go:172] (0xc0005b8c60) (0xc0019123c0) Stream removed, broadcasting: 1 I0512 13:09:09.456614 6 log.go:172] (0xc0005b8c60) (0xc0021788c0) Stream removed, broadcasting: 3 I0512 13:09:09.456629 6 log.go:172] (0xc0005b8c60) (0xc001bb2140) 
Stream removed, broadcasting: 5 May 12 13:09:09.456: INFO: Found all expected endpoints: [netserver-0] May 12 13:09:09.459: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.18 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2769 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 13:09:09.459: INFO: >>> kubeConfig: /root/.kube/config I0512 13:09:09.487678 6 log.go:172] (0xc0005b9d90) (0xc0019126e0) Create stream I0512 13:09:09.487722 6 log.go:172] (0xc0005b9d90) (0xc0019126e0) Stream added, broadcasting: 1 I0512 13:09:09.489875 6 log.go:172] (0xc0005b9d90) Reply frame received for 1 I0512 13:09:09.489917 6 log.go:172] (0xc0005b9d90) (0xc002b46320) Create stream I0512 13:09:09.489929 6 log.go:172] (0xc0005b9d90) (0xc002b46320) Stream added, broadcasting: 3 I0512 13:09:09.490726 6 log.go:172] (0xc0005b9d90) Reply frame received for 3 I0512 13:09:09.490748 6 log.go:172] (0xc0005b9d90) (0xc002178b40) Create stream I0512 13:09:09.490759 6 log.go:172] (0xc0005b9d90) (0xc002178b40) Stream added, broadcasting: 5 I0512 13:09:09.491376 6 log.go:172] (0xc0005b9d90) Reply frame received for 5 I0512 13:09:10.551368 6 log.go:172] (0xc0005b9d90) Data frame received for 3 I0512 13:09:10.551406 6 log.go:172] (0xc002b46320) (3) Data frame handling I0512 13:09:10.551420 6 log.go:172] (0xc002b46320) (3) Data frame sent I0512 13:09:10.551431 6 log.go:172] (0xc0005b9d90) Data frame received for 3 I0512 13:09:10.551441 6 log.go:172] (0xc002b46320) (3) Data frame handling I0512 13:09:10.551459 6 log.go:172] (0xc0005b9d90) Data frame received for 5 I0512 13:09:10.551472 6 log.go:172] (0xc002178b40) (5) Data frame handling I0512 13:09:10.552967 6 log.go:172] (0xc0005b9d90) Data frame received for 1 I0512 13:09:10.552986 6 log.go:172] (0xc0019126e0) (1) Data frame handling I0512 13:09:10.553005 6 log.go:172] (0xc0019126e0) (1) Data frame sent I0512 13:09:10.553023 6 log.go:172] 
(0xc0005b9d90) (0xc0019126e0) Stream removed, broadcasting: 1 I0512 13:09:10.553082 6 log.go:172] (0xc0005b9d90) Go away received I0512 13:09:10.553293 6 log.go:172] (0xc0005b9d90) (0xc0019126e0) Stream removed, broadcasting: 1 I0512 13:09:10.553328 6 log.go:172] (0xc0005b9d90) (0xc002b46320) Stream removed, broadcasting: 3 I0512 13:09:10.553350 6 log.go:172] (0xc0005b9d90) (0xc002178b40) Stream removed, broadcasting: 5 May 12 13:09:10.553: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:09:10.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2769" for this suite. May 12 13:09:34.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:09:34.661: INFO: namespace pod-network-test-2769 deletion completed in 24.103354549s • [SLOW TEST:57.480 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:09:34.662: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 13:09:35.430: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ce89902-d715-4637-be6f-dd4aa9349e9c" in namespace "projected-1757" to be "success or failure" May 12 13:09:35.434: INFO: Pod "downwardapi-volume-9ce89902-d715-4637-be6f-dd4aa9349e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.935818ms May 12 13:09:37.437: INFO: Pod "downwardapi-volume-9ce89902-d715-4637-be6f-dd4aa9349e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006935275s May 12 13:09:39.441: INFO: Pod "downwardapi-volume-9ce89902-d715-4637-be6f-dd4aa9349e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010275573s May 12 13:09:41.445: INFO: Pod "downwardapi-volume-9ce89902-d715-4637-be6f-dd4aa9349e9c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014341763s STEP: Saw pod success May 12 13:09:41.445: INFO: Pod "downwardapi-volume-9ce89902-d715-4637-be6f-dd4aa9349e9c" satisfied condition "success or failure" May 12 13:09:41.447: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9ce89902-d715-4637-be6f-dd4aa9349e9c container client-container: STEP: delete the pod May 12 13:09:41.478: INFO: Waiting for pod downwardapi-volume-9ce89902-d715-4637-be6f-dd4aa9349e9c to disappear May 12 13:09:41.493: INFO: Pod downwardapi-volume-9ce89902-d715-4637-be6f-dd4aa9349e9c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:09:41.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1757" for this suite. May 12 13:09:47.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:09:48.089: INFO: namespace projected-1757 deletion completed in 6.593269624s • [SLOW TEST:13.427 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:09:48.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: 
Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-06e3dd9a-5e63-4a0e-95ed-2c8618269e02 STEP: Creating a pod to test consume configMaps May 12 13:09:48.458: INFO: Waiting up to 5m0s for pod "pod-configmaps-4f24d1f8-618c-49f7-a7db-63bc105b9f08" in namespace "configmap-9711" to be "success or failure" May 12 13:09:48.795: INFO: Pod "pod-configmaps-4f24d1f8-618c-49f7-a7db-63bc105b9f08": Phase="Pending", Reason="", readiness=false. Elapsed: 337.220461ms May 12 13:09:50.800: INFO: Pod "pod-configmaps-4f24d1f8-618c-49f7-a7db-63bc105b9f08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342204722s May 12 13:09:52.903: INFO: Pod "pod-configmaps-4f24d1f8-618c-49f7-a7db-63bc105b9f08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445021022s May 12 13:09:54.907: INFO: Pod "pod-configmaps-4f24d1f8-618c-49f7-a7db-63bc105b9f08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.448857322s STEP: Saw pod success May 12 13:09:54.907: INFO: Pod "pod-configmaps-4f24d1f8-618c-49f7-a7db-63bc105b9f08" satisfied condition "success or failure" May 12 13:09:54.910: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4f24d1f8-618c-49f7-a7db-63bc105b9f08 container configmap-volume-test: STEP: delete the pod May 12 13:09:54.946: INFO: Waiting for pod pod-configmaps-4f24d1f8-618c-49f7-a7db-63bc105b9f08 to disappear May 12 13:09:54.952: INFO: Pod pod-configmaps-4f24d1f8-618c-49f7-a7db-63bc105b9f08 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:09:54.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9711" for this suite. 
May 12 13:10:02.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:10:03.052: INFO: namespace configmap-9711 deletion completed in 8.096044467s • [SLOW TEST:14.962 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:10:03.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-1609e261-e392-4945-83e0-bc3a1f11b3a5 STEP: Creating a pod to test consume configMaps May 12 13:10:03.223: INFO: Waiting up to 5m0s for pod "pod-configmaps-d850e4ff-7cad-410e-894a-b8b713ad0ecc" in namespace "configmap-6689" to be "success or failure" May 12 13:10:03.400: INFO: Pod "pod-configmaps-d850e4ff-7cad-410e-894a-b8b713ad0ecc": Phase="Pending", Reason="", readiness=false. Elapsed: 176.943143ms May 12 13:10:05.404: INFO: Pod "pod-configmaps-d850e4ff-7cad-410e-894a-b8b713ad0ecc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.180292007s May 12 13:10:07.407: INFO: Pod "pod-configmaps-d850e4ff-7cad-410e-894a-b8b713ad0ecc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183998346s STEP: Saw pod success May 12 13:10:07.407: INFO: Pod "pod-configmaps-d850e4ff-7cad-410e-894a-b8b713ad0ecc" satisfied condition "success or failure" May 12 13:10:07.410: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d850e4ff-7cad-410e-894a-b8b713ad0ecc container configmap-volume-test: STEP: delete the pod May 12 13:10:07.427: INFO: Waiting for pod pod-configmaps-d850e4ff-7cad-410e-894a-b8b713ad0ecc to disappear May 12 13:10:07.431: INFO: Pod pod-configmaps-d850e4ff-7cad-410e-894a-b8b713ad0ecc no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:10:07.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6689" for this suite. May 12 13:10:13.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:10:13.562: INFO: namespace configmap-6689 deletion completed in 6.127482s • [SLOW TEST:10.509 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 
13:10:13.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2738 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-2738 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2738 May 12 13:10:13.672: INFO: Found 0 stateful pods, waiting for 1 May 12 13:10:23.678: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 12 13:10:23.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 13:10:27.990: INFO: stderr: "I0512 13:10:27.878849 107 log.go:172] (0xc0009a2370) (0xc0006eab40) Create stream\nI0512 13:10:27.878890 107 log.go:172] (0xc0009a2370) (0xc0006eab40) Stream added, broadcasting: 1\nI0512 13:10:27.880859 107 log.go:172] (0xc0009a2370) Reply frame received for 1\nI0512 13:10:27.880910 107 log.go:172] (0xc0009a2370) (0xc000a6c000) Create stream\nI0512 13:10:27.880927 107 log.go:172] (0xc0009a2370) (0xc000a6c000) Stream added, broadcasting: 3\nI0512 13:10:27.881861 107 log.go:172] (0xc0009a2370) Reply frame received for 3\nI0512 13:10:27.881915 107 log.go:172] (0xc0009a2370) (0xc0002a0000) Create stream\nI0512 
13:10:27.881938 107 log.go:172] (0xc0009a2370) (0xc0002a0000) Stream added, broadcasting: 5\nI0512 13:10:27.882870 107 log.go:172] (0xc0009a2370) Reply frame received for 5\nI0512 13:10:27.933516 107 log.go:172] (0xc0009a2370) Data frame received for 5\nI0512 13:10:27.933546 107 log.go:172] (0xc0002a0000) (5) Data frame handling\nI0512 13:10:27.933566 107 log.go:172] (0xc0002a0000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 13:10:27.984556 107 log.go:172] (0xc0009a2370) Data frame received for 3\nI0512 13:10:27.984604 107 log.go:172] (0xc0009a2370) Data frame received for 5\nI0512 13:10:27.984661 107 log.go:172] (0xc0002a0000) (5) Data frame handling\nI0512 13:10:27.984699 107 log.go:172] (0xc000a6c000) (3) Data frame handling\nI0512 13:10:27.984734 107 log.go:172] (0xc000a6c000) (3) Data frame sent\nI0512 13:10:27.984763 107 log.go:172] (0xc0009a2370) Data frame received for 3\nI0512 13:10:27.984773 107 log.go:172] (0xc000a6c000) (3) Data frame handling\nI0512 13:10:27.986613 107 log.go:172] (0xc0009a2370) Data frame received for 1\nI0512 13:10:27.986628 107 log.go:172] (0xc0006eab40) (1) Data frame handling\nI0512 13:10:27.986649 107 log.go:172] (0xc0006eab40) (1) Data frame sent\nI0512 13:10:27.986662 107 log.go:172] (0xc0009a2370) (0xc0006eab40) Stream removed, broadcasting: 1\nI0512 13:10:27.986790 107 log.go:172] (0xc0009a2370) Go away received\nI0512 13:10:27.986942 107 log.go:172] (0xc0009a2370) (0xc0006eab40) Stream removed, broadcasting: 1\nI0512 13:10:27.986959 107 log.go:172] (0xc0009a2370) (0xc000a6c000) Stream removed, broadcasting: 3\nI0512 13:10:27.986969 107 log.go:172] (0xc0009a2370) (0xc0002a0000) Stream removed, broadcasting: 5\n" May 12 13:10:27.990: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 13:10:27.990: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 13:10:27.999: INFO: Waiting for 
pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 12 13:10:38.003: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 13:10:38.003: INFO: Waiting for statefulset status.replicas updated to 0 May 12 13:10:38.019: INFO: POD NODE PHASE GRACE CONDITIONS May 12 13:10:38.019: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC }] May 12 13:10:38.019: INFO: May 12 13:10:38.019: INFO: StatefulSet ss has not reached scale 3, at 1 May 12 13:10:39.024: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990971045s May 12 13:10:40.119: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986395649s May 12 13:10:41.123: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.890722899s May 12 13:10:42.127: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.887719821s May 12 13:10:43.133: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.883269804s May 12 13:10:44.137: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.877077407s May 12 13:10:45.142: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.872834894s May 12 13:10:46.185: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.868220543s May 12 13:10:47.188: INFO: Verifying statefulset ss doesn't scale past 3 for another 825.38509ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2738 May 12 13:10:48.192: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:10:48.422: INFO: stderr: "I0512 13:10:48.355537 139 log.go:172] (0xc000a18420) (0xc0006c46e0) Create stream\nI0512 13:10:48.355582 139 log.go:172] (0xc000a18420) (0xc0006c46e0) Stream added, broadcasting: 1\nI0512 13:10:48.360219 139 log.go:172] (0xc000a18420) Reply frame received for 1\nI0512 13:10:48.360252 139 log.go:172] (0xc000a18420) (0xc0006c4000) Create stream\nI0512 13:10:48.360264 139 log.go:172] (0xc000a18420) (0xc0006c4000) Stream added, broadcasting: 3\nI0512 13:10:48.361396 139 log.go:172] (0xc000a18420) Reply frame received for 3\nI0512 13:10:48.361420 139 log.go:172] (0xc000a18420) (0xc0005ec1e0) Create stream\nI0512 13:10:48.361430 139 log.go:172] (0xc000a18420) (0xc0005ec1e0) Stream added, broadcasting: 5\nI0512 13:10:48.362317 139 log.go:172] (0xc000a18420) Reply frame received for 5\nI0512 13:10:48.417621 139 log.go:172] (0xc000a18420) Data frame received for 3\nI0512 13:10:48.417653 139 log.go:172] (0xc0006c4000) (3) Data frame handling\nI0512 13:10:48.417662 139 log.go:172] (0xc0006c4000) (3) Data frame sent\nI0512 13:10:48.417669 139 log.go:172] (0xc000a18420) Data frame received for 3\nI0512 13:10:48.417674 139 log.go:172] (0xc0006c4000) (3) Data frame handling\nI0512 13:10:48.417696 139 log.go:172] (0xc000a18420) Data frame received for 5\nI0512 13:10:48.417709 139 log.go:172] (0xc0005ec1e0) (5) Data frame handling\nI0512 13:10:48.417728 139 log.go:172] (0xc0005ec1e0) (5) Data frame sent\nI0512 13:10:48.417737 139 log.go:172] (0xc000a18420) Data frame received for 5\nI0512 13:10:48.417743 139 log.go:172] (0xc0005ec1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0512 13:10:48.418457 139 log.go:172] (0xc000a18420) Data frame received for 1\nI0512 13:10:48.418500 139 log.go:172] (0xc0006c46e0) (1) Data frame handling\nI0512 13:10:48.418516 139 log.go:172] 
(0xc0006c46e0) (1) Data frame sent\nI0512 13:10:48.418526 139 log.go:172] (0xc000a18420) (0xc0006c46e0) Stream removed, broadcasting: 1\nI0512 13:10:48.418614 139 log.go:172] (0xc000a18420) Go away received\nI0512 13:10:48.418747 139 log.go:172] (0xc000a18420) (0xc0006c46e0) Stream removed, broadcasting: 1\nI0512 13:10:48.418758 139 log.go:172] (0xc000a18420) (0xc0006c4000) Stream removed, broadcasting: 3\nI0512 13:10:48.418765 139 log.go:172] (0xc000a18420) (0xc0005ec1e0) Stream removed, broadcasting: 5\n" May 12 13:10:48.422: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 13:10:48.422: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 13:10:48.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:10:48.608: INFO: stderr: "I0512 13:10:48.531650 159 log.go:172] (0xc000104790) (0xc0002ae140) Create stream\nI0512 13:10:48.531697 159 log.go:172] (0xc000104790) (0xc0002ae140) Stream added, broadcasting: 1\nI0512 13:10:48.533660 159 log.go:172] (0xc000104790) Reply frame received for 1\nI0512 13:10:48.533693 159 log.go:172] (0xc000104790) (0xc0005b4280) Create stream\nI0512 13:10:48.533702 159 log.go:172] (0xc000104790) (0xc0005b4280) Stream added, broadcasting: 3\nI0512 13:10:48.534456 159 log.go:172] (0xc000104790) Reply frame received for 3\nI0512 13:10:48.534497 159 log.go:172] (0xc000104790) (0xc0002ae280) Create stream\nI0512 13:10:48.534513 159 log.go:172] (0xc000104790) (0xc0002ae280) Stream added, broadcasting: 5\nI0512 13:10:48.535132 159 log.go:172] (0xc000104790) Reply frame received for 5\nI0512 13:10:48.602522 159 log.go:172] (0xc000104790) Data frame received for 5\nI0512 13:10:48.602562 159 log.go:172] (0xc0002ae280) (5) Data frame handling\nI0512 13:10:48.602576 159 log.go:172] (0xc0002ae280) 
(5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0512 13:10:48.602597 159 log.go:172] (0xc000104790) Data frame received for 3\nI0512 13:10:48.602624 159 log.go:172] (0xc0005b4280) (3) Data frame handling\nI0512 13:10:48.602683 159 log.go:172] (0xc0005b4280) (3) Data frame sent\nI0512 13:10:48.602709 159 log.go:172] (0xc000104790) Data frame received for 3\nI0512 13:10:48.602726 159 log.go:172] (0xc0005b4280) (3) Data frame handling\nI0512 13:10:48.602744 159 log.go:172] (0xc000104790) Data frame received for 5\nI0512 13:10:48.602756 159 log.go:172] (0xc0002ae280) (5) Data frame handling\nI0512 13:10:48.604060 159 log.go:172] (0xc000104790) Data frame received for 1\nI0512 13:10:48.604086 159 log.go:172] (0xc0002ae140) (1) Data frame handling\nI0512 13:10:48.604107 159 log.go:172] (0xc0002ae140) (1) Data frame sent\nI0512 13:10:48.604134 159 log.go:172] (0xc000104790) (0xc0002ae140) Stream removed, broadcasting: 1\nI0512 13:10:48.604171 159 log.go:172] (0xc000104790) Go away received\nI0512 13:10:48.604531 159 log.go:172] (0xc000104790) (0xc0002ae140) Stream removed, broadcasting: 1\nI0512 13:10:48.604560 159 log.go:172] (0xc000104790) (0xc0005b4280) Stream removed, broadcasting: 3\nI0512 13:10:48.604576 159 log.go:172] (0xc000104790) (0xc0002ae280) Stream removed, broadcasting: 5\n" May 12 13:10:48.609: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 13:10:48.609: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 13:10:48.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:10:48.815: INFO: stderr: "I0512 13:10:48.730820 180 log.go:172] (0xc000970840) (0xc00080c460) Create stream\nI0512 13:10:48.730863 180 
log.go:172] (0xc000970840) (0xc00080c460) Stream added, broadcasting: 1\nI0512 13:10:48.738139 180 log.go:172] (0xc000970840) Reply frame received for 1\nI0512 13:10:48.738179 180 log.go:172] (0xc000970840) (0xc0002c40a0) Create stream\nI0512 13:10:48.738191 180 log.go:172] (0xc000970840) (0xc0002c40a0) Stream added, broadcasting: 3\nI0512 13:10:48.738999 180 log.go:172] (0xc000970840) Reply frame received for 3\nI0512 13:10:48.739028 180 log.go:172] (0xc000970840) (0xc0002c4140) Create stream\nI0512 13:10:48.739037 180 log.go:172] (0xc000970840) (0xc0002c4140) Stream added, broadcasting: 5\nI0512 13:10:48.740011 180 log.go:172] (0xc000970840) Reply frame received for 5\nI0512 13:10:48.809970 180 log.go:172] (0xc000970840) Data frame received for 3\nI0512 13:10:48.809998 180 log.go:172] (0xc0002c40a0) (3) Data frame handling\nI0512 13:10:48.810006 180 log.go:172] (0xc0002c40a0) (3) Data frame sent\nI0512 13:10:48.810021 180 log.go:172] (0xc000970840) Data frame received for 5\nI0512 13:10:48.810031 180 log.go:172] (0xc0002c4140) (5) Data frame handling\nI0512 13:10:48.810038 180 log.go:172] (0xc0002c4140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0512 13:10:48.810136 180 log.go:172] (0xc000970840) Data frame received for 3\nI0512 13:10:48.810170 180 log.go:172] (0xc0002c40a0) (3) Data frame handling\nI0512 13:10:48.810232 180 log.go:172] (0xc000970840) Data frame received for 5\nI0512 13:10:48.810262 180 log.go:172] (0xc0002c4140) (5) Data frame handling\nI0512 13:10:48.811781 180 log.go:172] (0xc000970840) Data frame received for 1\nI0512 13:10:48.811797 180 log.go:172] (0xc00080c460) (1) Data frame handling\nI0512 13:10:48.811806 180 log.go:172] (0xc00080c460) (1) Data frame sent\nI0512 13:10:48.811817 180 log.go:172] (0xc000970840) (0xc00080c460) Stream removed, broadcasting: 1\nI0512 13:10:48.811835 180 log.go:172] (0xc000970840) Go away received\nI0512 
13:10:48.812194 180 log.go:172] (0xc000970840) (0xc00080c460) Stream removed, broadcasting: 1\nI0512 13:10:48.812222 180 log.go:172] (0xc000970840) (0xc0002c40a0) Stream removed, broadcasting: 3\nI0512 13:10:48.812234 180 log.go:172] (0xc000970840) (0xc0002c4140) Stream removed, broadcasting: 5\n" May 12 13:10:48.815: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 13:10:48.815: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 13:10:48.847: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 12 13:10:58.872: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 13:10:58.872: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 13:10:58.872: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 12 13:10:58.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 13:10:59.088: INFO: stderr: "I0512 13:10:58.994692 199 log.go:172] (0xc0009ca420) (0xc000818640) Create stream\nI0512 13:10:58.994735 199 log.go:172] (0xc0009ca420) (0xc000818640) Stream added, broadcasting: 1\nI0512 13:10:58.996597 199 log.go:172] (0xc0009ca420) Reply frame received for 1\nI0512 13:10:58.996635 199 log.go:172] (0xc0009ca420) (0xc00081c000) Create stream\nI0512 13:10:58.996649 199 log.go:172] (0xc0009ca420) (0xc00081c000) Stream added, broadcasting: 3\nI0512 13:10:58.997471 199 log.go:172] (0xc0009ca420) Reply frame received for 3\nI0512 13:10:58.997488 199 log.go:172] (0xc0009ca420) (0xc00035a280) Create stream\nI0512 13:10:58.997495 199 log.go:172] (0xc0009ca420) (0xc00035a280) Stream added, broadcasting: 5\nI0512 
13:10:58.998365 199 log.go:172] (0xc0009ca420) Reply frame received for 5\nI0512 13:10:59.083552 199 log.go:172] (0xc0009ca420) Data frame received for 5\nI0512 13:10:59.083584 199 log.go:172] (0xc0009ca420) Data frame received for 3\nI0512 13:10:59.083604 199 log.go:172] (0xc00081c000) (3) Data frame handling\nI0512 13:10:59.083612 199 log.go:172] (0xc00081c000) (3) Data frame sent\nI0512 13:10:59.083617 199 log.go:172] (0xc0009ca420) Data frame received for 3\nI0512 13:10:59.083621 199 log.go:172] (0xc00081c000) (3) Data frame handling\nI0512 13:10:59.083645 199 log.go:172] (0xc00035a280) (5) Data frame handling\nI0512 13:10:59.083662 199 log.go:172] (0xc00035a280) (5) Data frame sent\nI0512 13:10:59.083668 199 log.go:172] (0xc0009ca420) Data frame received for 5\nI0512 13:10:59.083672 199 log.go:172] (0xc00035a280) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 13:10:59.084580 199 log.go:172] (0xc0009ca420) Data frame received for 1\nI0512 13:10:59.084596 199 log.go:172] (0xc000818640) (1) Data frame handling\nI0512 13:10:59.084607 199 log.go:172] (0xc000818640) (1) Data frame sent\nI0512 13:10:59.084618 199 log.go:172] (0xc0009ca420) (0xc000818640) Stream removed, broadcasting: 1\nI0512 13:10:59.084840 199 log.go:172] (0xc0009ca420) Go away received\nI0512 13:10:59.084897 199 log.go:172] (0xc0009ca420) (0xc000818640) Stream removed, broadcasting: 1\nI0512 13:10:59.084907 199 log.go:172] (0xc0009ca420) (0xc00081c000) Stream removed, broadcasting: 3\nI0512 13:10:59.084919 199 log.go:172] (0xc0009ca420) (0xc00035a280) Stream removed, broadcasting: 5\n" May 12 13:10:59.089: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 13:10:59.089: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 13:10:59.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-1 -- 
/bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 13:10:59.337: INFO: stderr: "I0512 13:10:59.211743 221 log.go:172] (0xc0008b4420) (0xc00065a820) Create stream\nI0512 13:10:59.211798 221 log.go:172] (0xc0008b4420) (0xc00065a820) Stream added, broadcasting: 1\nI0512 13:10:59.214083 221 log.go:172] (0xc0008b4420) Reply frame received for 1\nI0512 13:10:59.214130 221 log.go:172] (0xc0008b4420) (0xc00077c000) Create stream\nI0512 13:10:59.214150 221 log.go:172] (0xc0008b4420) (0xc00077c000) Stream added, broadcasting: 3\nI0512 13:10:59.214855 221 log.go:172] (0xc0008b4420) Reply frame received for 3\nI0512 13:10:59.214875 221 log.go:172] (0xc0008b4420) (0xc00077c0a0) Create stream\nI0512 13:10:59.214882 221 log.go:172] (0xc0008b4420) (0xc00077c0a0) Stream added, broadcasting: 5\nI0512 13:10:59.215610 221 log.go:172] (0xc0008b4420) Reply frame received for 5\nI0512 13:10:59.266599 221 log.go:172] (0xc0008b4420) Data frame received for 5\nI0512 13:10:59.266618 221 log.go:172] (0xc00077c0a0) (5) Data frame handling\nI0512 13:10:59.266630 221 log.go:172] (0xc00077c0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 13:10:59.332834 221 log.go:172] (0xc0008b4420) Data frame received for 3\nI0512 13:10:59.332851 221 log.go:172] (0xc00077c000) (3) Data frame handling\nI0512 13:10:59.332857 221 log.go:172] (0xc00077c000) (3) Data frame sent\nI0512 13:10:59.332870 221 log.go:172] (0xc0008b4420) Data frame received for 5\nI0512 13:10:59.332893 221 log.go:172] (0xc00077c0a0) (5) Data frame handling\nI0512 13:10:59.332912 221 log.go:172] (0xc0008b4420) Data frame received for 3\nI0512 13:10:59.332920 221 log.go:172] (0xc00077c000) (3) Data frame handling\nI0512 13:10:59.334774 221 log.go:172] (0xc0008b4420) Data frame received for 1\nI0512 13:10:59.334783 221 log.go:172] (0xc00065a820) (1) Data frame handling\nI0512 13:10:59.334791 221 log.go:172] (0xc00065a820) (1) Data frame sent\nI0512 13:10:59.334799 221 log.go:172] 
(0xc0008b4420) (0xc00065a820) Stream removed, broadcasting: 1\nI0512 13:10:59.334881 221 log.go:172] (0xc0008b4420) Go away received\nI0512 13:10:59.335035 221 log.go:172] (0xc0008b4420) (0xc00065a820) Stream removed, broadcasting: 1\nI0512 13:10:59.335044 221 log.go:172] (0xc0008b4420) (0xc00077c000) Stream removed, broadcasting: 3\nI0512 13:10:59.335049 221 log.go:172] (0xc0008b4420) (0xc00077c0a0) Stream removed, broadcasting: 5\n" May 12 13:10:59.337: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 13:10:59.337: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 13:10:59.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 13:10:59.814: INFO: stderr: "I0512 13:10:59.692235 241 log.go:172] (0xc0007220b0) (0xc000754640) Create stream\nI0512 13:10:59.692292 241 log.go:172] (0xc0007220b0) (0xc000754640) Stream added, broadcasting: 1\nI0512 13:10:59.695579 241 log.go:172] (0xc0007220b0) Reply frame received for 1\nI0512 13:10:59.695619 241 log.go:172] (0xc0007220b0) (0xc0005b2000) Create stream\nI0512 13:10:59.695630 241 log.go:172] (0xc0007220b0) (0xc0005b2000) Stream added, broadcasting: 3\nI0512 13:10:59.696458 241 log.go:172] (0xc0007220b0) Reply frame received for 3\nI0512 13:10:59.696491 241 log.go:172] (0xc0007220b0) (0xc0005b20a0) Create stream\nI0512 13:10:59.696503 241 log.go:172] (0xc0007220b0) (0xc0005b20a0) Stream added, broadcasting: 5\nI0512 13:10:59.697791 241 log.go:172] (0xc0007220b0) Reply frame received for 5\nI0512 13:10:59.757665 241 log.go:172] (0xc0007220b0) Data frame received for 5\nI0512 13:10:59.757693 241 log.go:172] (0xc0005b20a0) (5) Data frame handling\nI0512 13:10:59.757711 241 log.go:172] (0xc0005b20a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html 
/tmp/\nI0512 13:10:59.807230 241 log.go:172] (0xc0007220b0) Data frame received for 3\nI0512 13:10:59.807297 241 log.go:172] (0xc0005b2000) (3) Data frame handling\nI0512 13:10:59.807319 241 log.go:172] (0xc0005b2000) (3) Data frame sent\nI0512 13:10:59.807335 241 log.go:172] (0xc0007220b0) Data frame received for 3\nI0512 13:10:59.807344 241 log.go:172] (0xc0005b2000) (3) Data frame handling\nI0512 13:10:59.807397 241 log.go:172] (0xc0007220b0) Data frame received for 5\nI0512 13:10:59.807521 241 log.go:172] (0xc0005b20a0) (5) Data frame handling\nI0512 13:10:59.809362 241 log.go:172] (0xc0007220b0) Data frame received for 1\nI0512 13:10:59.809396 241 log.go:172] (0xc000754640) (1) Data frame handling\nI0512 13:10:59.809438 241 log.go:172] (0xc000754640) (1) Data frame sent\nI0512 13:10:59.809462 241 log.go:172] (0xc0007220b0) (0xc000754640) Stream removed, broadcasting: 1\nI0512 13:10:59.809561 241 log.go:172] (0xc0007220b0) Go away received\nI0512 13:10:59.809922 241 log.go:172] (0xc0007220b0) (0xc000754640) Stream removed, broadcasting: 1\nI0512 13:10:59.809945 241 log.go:172] (0xc0007220b0) (0xc0005b2000) Stream removed, broadcasting: 3\nI0512 13:10:59.809971 241 log.go:172] (0xc0007220b0) (0xc0005b20a0) Stream removed, broadcasting: 5\n" May 12 13:10:59.814: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 13:10:59.814: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 13:10:59.814: INFO: Waiting for statefulset status.replicas updated to 0 May 12 13:10:59.868: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 12 13:11:09.877: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 13:11:09.877: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 12 13:11:09.877: INFO: Waiting for pod ss-2 to enter Running - Ready=false, 
currently Running - Ready=false May 12 13:11:09.890: INFO: POD NODE PHASE GRACE CONDITIONS May 12 13:11:09.890: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC }] May 12 13:11:09.890: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC }] May 12 13:11:09.890: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC }] May 12 13:11:09.890: INFO: May 12 13:11:09.890: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 13:11:11.006: INFO: POD NODE PHASE GRACE CONDITIONS May 12 13:11:11.006: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC }] May 12 13:11:11.006: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC }] May 12 13:11:11.007: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC }] May 12 13:11:11.007: INFO: May 12 13:11:11.007: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 13:11:12.010: INFO: POD NODE PHASE GRACE CONDITIONS May 12 13:11:12.010: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC }] May 12 13:11:12.010: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC }] May 12 13:11:12.010: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC }] May 12 13:11:12.010: INFO: May 12 13:11:12.010: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 13:11:13.092: INFO: POD NODE PHASE GRACE CONDITIONS May 12 13:11:13.092: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC }] May 12 13:11:13.092: INFO: ss-1 iruya-worker2 Running 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 
13:10:38 +0000 UTC }] May 12 13:11:13.092: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC }] May 12 13:11:13.092: INFO: May 12 13:11:13.092: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 13:11:14.098: INFO: POD NODE PHASE GRACE CONDITIONS May 12 13:11:14.098: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC }] May 12 13:11:14.098: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC }] May 12 13:11:14.098: INFO: May 12 13:11:14.098: INFO: StatefulSet ss has not reached scale 0, at 2 May 12 13:11:15.354: INFO: POD NODE PHASE GRACE CONDITIONS May 12 13:11:15.354: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC }] May 12 13:11:15.354: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC }] May 12 13:11:15.354: INFO: May 12 13:11:15.354: INFO: StatefulSet ss has not reached scale 0, at 2 May 12 13:11:16.358: INFO: POD NODE PHASE GRACE CONDITIONS May 12 13:11:16.358: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC }] May 12 13:11:16.358: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC }] May 12 13:11:16.358: INFO: 
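(Editor's note: the repeated POD/NODE/PHASE dumps above are produced by a poll loop — the framework re-lists the pods and keeps waiting while it logs "StatefulSet ss has not reached scale 0". A minimal sketch of that wait-for-condition pattern; the `wait_for` helper and the fake `scaled_to_zero` check are illustrative stand-ins for the framework's real polling against `kubectl get`, not code from the test suite:)

```shell
# wait_for <max_attempts> <cmd...>: re-run cmd until it succeeds or
# the attempt budget is exhausted.
wait_for() {
    attempts=$1; shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        if "$@"; then return 0; fi
        # The e2e framework sleeps between polls and logs the pod
        # conditions each round; both are omitted here.
        i=$((i + 1))
    done
    return 1
}

# Stand-in for querying the StatefulSet's replica count: pretend one
# replica disappears per poll, starting from the 3 pods in the log.
replicas=3
scaled_to_zero() {
    [ "$replicas" -eq 0 ] && return 0
    replicas=$((replicas - 1))
    return 1
}

wait_for 10 scaled_to_zero && echo "reached scale 0"
```

The same shape covers the "Waiting 10s to retry failed RunHostCmd" loop seen later in the log: a fixed sleep between attempts, with the overall wait bounded by the test's timeout rather than by an attempt count.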
May 12 13:11:16.358: INFO: StatefulSet ss has not reached scale 0, at 2 May 12 13:11:17.482: INFO: POD NODE PHASE GRACE CONDITIONS May 12 13:11:17.482: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC }] May 12 13:11:17.482: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC }] May 12 13:11:17.482: INFO: May 12 13:11:17.482: INFO: StatefulSet ss has not reached scale 0, at 2 May 12 13:11:18.518: INFO: POD NODE PHASE GRACE CONDITIONS May 12 13:11:18.518: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC }] May 12 13:11:18.518: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC }] May 12 13:11:18.518: INFO: May 12 13:11:18.518: INFO: StatefulSet ss has not reached scale 0, at 2 May 12 13:11:19.534: INFO: POD NODE PHASE GRACE CONDITIONS May 12 13:11:19.534: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:13 +0000 UTC }] May 12 13:11:19.534: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:11:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:10:38 +0000 UTC }] May 12 13:11:19.534: INFO: May 12 13:11:19.534: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2738 May 12 13:11:20.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:11:20.669: INFO: rc: 1 May 12 13:11:20.669: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl
[kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00306c8d0 exit status 1 true [0xc00035f928 0xc00035f968 0xc00035fa08] [0xc00035f928 0xc00035f968 0xc00035fa08] [0xc00035f940 0xc00035f9b8] [0xba70e0 0xba70e0] 0xc0021340c0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 May 12 13:11:30.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:11:30.765: INFO: rc: 1 May 12 13:11:30.765: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00306c990 exit status 1 true [0xc00035fa68 0xc00035fab8 0xc00035fbb0] [0xc00035fa68 0xc00035fab8 0xc00035fbb0] [0xc00035fab0 0xc00035fb30] [0xba70e0 0xba70e0] 0xc002134600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 13:11:40.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:11:40.863: INFO: rc: 1 May 12 13:11:40.863: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00306cab0 exit status 1 true [0xc00035fbd0 0xc00035fc48 0xc00035fca0] [0xc00035fbd0 0xc00035fc48 
0xc00035fca0] [0xc00035fc38 0xc00035fc68] [0xba70e0 0xba70e0] 0xc0021349c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 13:11:50.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:11:50.961: INFO: rc: 1 May 12 13:11:50.961: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003034090 exit status 1 true [0xc002234000 0xc002234018 0xc002234030] [0xc002234000 0xc002234018 0xc002234030] [0xc002234010 0xc002234028] [0xba70e0 0xba70e0] 0xc002132480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 13:12:00.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:12:01.049: INFO: rc: 1 May 12 13:12:01.049: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003034150 exit status 1 true [0xc002234038 0xc002234050 0xc002234068] [0xc002234038 0xc002234050 0xc002234068] [0xc002234048 0xc002234060] [0xba70e0 0xba70e0] 0xc002132900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 13:12:11.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' May 12 13:12:11.137: INFO: rc: 1 May 12 13:12:11.137: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00306cba0 exit status 1 true [0xc00035fce0 0xc00035fde0 0xc00035fdf8] [0xc00035fce0 0xc00035fde0 0xc00035fdf8] [0xc00035fdb8 0xc00035fdf0] [0xba70e0 0xba70e0] 0xc002134cc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 13:12:21.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:12:21.228: INFO: rc: 1 May 12 13:12:21.228: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00306cc60 exit status 1 true [0xc00035fe30 0xc00035fe88 0xc00035ff50] [0xc00035fe30 0xc00035fe88 0xc00035ff50] [0xc00035fe78 0xc00035ff48] [0xba70e0 0xba70e0] 0xc002134fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 13:12:31.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:12:31.326: INFO: rc: 1 May 12 13:12:31.326: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-0" not found [] 0xc00306cd20 exit status 1 true [0xc00035ff70 0xc002166000 0xc002166018] [0xc00035ff70 0xc002166000 0xc002166018] [0xc00035ffc8 0xc002166010] [0xba70e0 0xba70e0] 0xc002135380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 13:12:41.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:12:41.421: INFO: rc: 1 May 12 13:12:41.421: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00306cde0 exit status 1 true [0xc002166020 0xc002166038 0xc002166050] [0xc002166020 0xc002166038 0xc002166050] [0xc002166030 0xc002166048] [0xba70e0 0xba70e0] 0xc0021356e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 13:12:51.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:12:51.508: INFO: rc: 1 May 12 13:12:51.508: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00306cea0 exit status 1 true [0xc002166058 0xc002166070 0xc002166088] [0xc002166058 0xc002166070 0xc002166088] [0xc002166068 0xc002166080] [0xba70e0 0xba70e0] 0xc0021359e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 13:13:01.509: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 13:13:01.604: INFO: rc: 1
May 12 13:13:01.604: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2738 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001dd0090 exit status 1 true [0xc00035e180 0xc00035e2c8 0xc00035e530] [0xc00035e180 0xc00035e2c8 0xc00035e530] [0xc00035e288 0xc00035e3d0] [0xba70e0 0xba70e0] 0xc001cc5680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
[... the identical RunHostCmd invocation was retried every 10s from 13:13:11.604 through 13:16:23.668, each attempt ending with rc: 1 and the same 'Error from server (NotFound): pods "ss-0" not found' ...]
May 12 13:16:23.668: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
May 12 13:16:23.668: INFO: Scaling statefulset ss to 0
May 12 13:16:23.676: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 12 13:16:23.678: INFO: Deleting all statefulset in ns statefulset-2738
May 12 13:16:23.680: INFO: Scaling statefulset ss to 0
May 12 13:16:23.689: INFO: Waiting for statefulset status.replicas updated to 0
May 12 13:16:23.691: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:16:23.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2738" for this suite.
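The retry sequence above is the e2e framework's RunHostCmd helper re-running one kubectl exec every 10 seconds after pod ss-0 had already been deleted. A minimal shell sketch of that retry pattern (the `retry_cmd` function and `RETRY_DELAY` variable are our own names, not the framework's; the commented kubectl call mirrors the one in the log and requires a live cluster):

```shell
# retry_cmd MAX_ATTEMPTS CMD...: run CMD, retrying every RETRY_DELAY seconds
# (default 10s, matching the "Waiting 10s to retry failed RunHostCmd" entries)
# until it succeeds or MAX_ATTEMPTS is reached.
retry_cmd() {
  max_attempts=$1; shift
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep "${RETRY_DELAY:-10}"
  done
  echo "succeeded on attempt $attempt"
}

# Usage mirroring the log (requires a live cluster, so commented out here):
# retry_cmd 20 kubectl --kubeconfig=/root/.kube/config exec -n statefulset-2738 ss-0 -- \
#   /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
```

Note that the `rc: 1` in the log comes from kubectl itself (the server reports the pod no longer exists), so the in-pod `|| true` never gets a chance to mask the failure.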
May 12 13:16:37.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:16:37.319: INFO: namespace statefulset-2738 deletion completed in 12.444907728s

• [SLOW TEST:383.757 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:16:37.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 12 13:16:37.800: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:37.920: INFO: Number of nodes with available pods: 0
May 12 13:16:37.920: INFO: Node iruya-worker is running more than one daemon pod
[... the same three-line check repeated roughly every second from 13:16:39 through 13:16:43, still reporting 0 available pods ...]
May 12 13:16:44.924: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:44.926: INFO: Number of nodes with available pods: 2
May 12 13:16:44.926: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 12 13:16:44.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:16:45.137: INFO: Number of nodes with available pods: 2
May 12 13:16:45.137: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7462, will wait for the garbage collector to delete the pods
May 12 13:16:47.211: INFO: Deleting DaemonSet.extensions daemon-set took: 17.406554ms
May 12 13:16:48.411: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.200202031s
May 12 13:17:02.215: INFO: Number of nodes with available pods: 0
May 12 13:17:02.215: INFO: Number of running nodes: 0, number of available pods: 0
May 12 13:17:02.217: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7462/daemonsets","resourceVersion":"10484403"},"items":null}
May 12 13:17:02.219: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7462/pods","resourceVersion":"10484403"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:17:02.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7462" for this suite.
May 12 13:17:10.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:17:10.393: INFO: namespace daemonsets-7462 deletion completed in 8.160952019s

• [SLOW TEST:33.074 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:17:10.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 12 13:17:10.493: INFO: Waiting up to 5m0s for pod "pod-f504b42d-693d-4715-8da4-dad2b7071891" in namespace "emptydir-9789" to be "success or failure"
May 12 13:17:10.518: INFO: Pod "pod-f504b42d-693d-4715-8da4-dad2b7071891": Phase="Pending", Reason="", readiness=false. Elapsed: 25.527103ms
May 12 13:17:12.761: INFO: Pod "pod-f504b42d-693d-4715-8da4-dad2b7071891": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267617568s
May 12 13:17:14.765: INFO: Pod "pod-f504b42d-693d-4715-8da4-dad2b7071891": Phase="Running", Reason="", readiness=true. Elapsed: 4.272264501s
May 12 13:17:16.769: INFO: Pod "pod-f504b42d-693d-4715-8da4-dad2b7071891": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.276250758s
STEP: Saw pod success
May 12 13:17:16.769: INFO: Pod "pod-f504b42d-693d-4715-8da4-dad2b7071891" satisfied condition "success or failure"
May 12 13:17:16.772: INFO: Trying to get logs from node iruya-worker2 pod pod-f504b42d-693d-4715-8da4-dad2b7071891 container test-container:
STEP: delete the pod
May 12 13:17:16.978: INFO: Waiting for pod pod-f504b42d-693d-4715-8da4-dad2b7071891 to disappear
May 12 13:17:17.058: INFO: Pod pod-f504b42d-693d-4715-8da4-dad2b7071891 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:17:17.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9789" for this suite.
May 12 13:17:23.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:17:23.246: INFO: namespace emptydir-9789 deletion completed in 6.183766807s

• [SLOW TEST:12.851 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:17:23.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 12 13:17:23.438: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16bffd4d-d566-44df-b49c-5ae0be3c0b36" in namespace "projected-8260" to be "success or failure"
May 12 13:17:23.484: INFO: Pod "downwardapi-volume-16bffd4d-d566-44df-b49c-5ae0be3c0b36": Phase="Pending", Reason="", readiness=false. Elapsed: 46.093994ms
May 12 13:17:25.487: INFO: Pod "downwardapi-volume-16bffd4d-d566-44df-b49c-5ae0be3c0b36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049464171s
May 12 13:17:27.671: INFO: Pod "downwardapi-volume-16bffd4d-d566-44df-b49c-5ae0be3c0b36": Phase="Running", Reason="", readiness=true. Elapsed: 4.232687843s
May 12 13:17:29.675: INFO: Pod "downwardapi-volume-16bffd4d-d566-44df-b49c-5ae0be3c0b36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.237418866s
STEP: Saw pod success
May 12 13:17:29.675: INFO: Pod "downwardapi-volume-16bffd4d-d566-44df-b49c-5ae0be3c0b36" satisfied condition "success or failure"
May 12 13:17:29.679: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-16bffd4d-d566-44df-b49c-5ae0be3c0b36 container client-container:
STEP: delete the pod
May 12 13:17:29.707: INFO: Waiting for pod downwardapi-volume-16bffd4d-d566-44df-b49c-5ae0be3c0b36 to disappear
May 12 13:17:29.772: INFO: Pod downwardapi-volume-16bffd4d-d566-44df-b49c-5ae0be3c0b36 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:17:29.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8260" for this suite.
May 12 13:17:35.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:17:36.191: INFO: namespace projected-8260 deletion completed in 6.414807623s

• [SLOW TEST:12.945 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc
  should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:17:36.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 12 13:17:36.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3922'
May 12 13:17:36.894: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 12 13:17:36.894: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
May 12 13:17:37.126: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-6z422]
May 12 13:17:37.126: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-6z422" in namespace "kubectl-3922" to be "running and ready"
May 12 13:17:37.162: INFO: Pod "e2e-test-nginx-rc-6z422": Phase="Pending", Reason="", readiness=false. Elapsed: 35.321572ms
May 12 13:17:39.304: INFO: Pod "e2e-test-nginx-rc-6z422": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177837565s
May 12 13:17:41.308: INFO: Pod "e2e-test-nginx-rc-6z422": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181793818s
May 12 13:17:43.311: INFO: Pod "e2e-test-nginx-rc-6z422": Phase="Running", Reason="", readiness=true. Elapsed: 6.18466769s
May 12 13:17:43.311: INFO: Pod "e2e-test-nginx-rc-6z422" satisfied condition "running and ready"
May 12 13:17:43.311: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-6z422]
May 12 13:17:43.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-3922'
May 12 13:17:43.422: INFO: stderr: ""
May 12 13:17:43.422: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
May 12 13:17:43.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3922'
May 12 13:17:43.584: INFO: stderr: ""
May 12 13:17:43.584: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:17:43.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3922" for this suite.
May 12 13:17:50.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:17:51.679: INFO: namespace kubectl-3922 deletion completed in 8.089466218s

• [SLOW TEST:15.488 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info
  should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:17:51.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
May 12 13:17:52.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
May 12 13:17:52.488: INFO: stderr: ""
May 12 13:17:52.488: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:17:52.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9033" for this suite.
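The Kubectl run rc test above surfaced the deprecation warning for `kubectl run --generator=run/v1`. Going by the warning text itself, the non-deprecated invocations would look like the sketch below (the `e2e-test-nginx-pod` and `e2e-test-nginx` names are our own illustrations; these commands need a live cluster, and the exact flags available depend on the kubectl version):

```shell
# Deprecated form used by the test (creates a ReplicationController):
#   kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1

# Replacements suggested by the warning message:
kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine --generator=run-pod/v1
kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
```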
May 12 13:17:58.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:17:58.637: INFO: namespace kubectl-9033 deletion completed in 6.145933984s

• [SLOW TEST:6.957 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:17:58.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
May 12 13:17:58.697: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

May 12 13:17:58.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7443'
May 12 13:17:59.086: INFO: stderr: ""
May 12 13:17:59.086: INFO: stdout: "service/redis-slave created\n"
May 12 13:17:59.086: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

May 12 13:17:59.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7443'
May 12 13:17:59.497: INFO: stderr: ""
May 12 13:17:59.497: INFO: stdout: "service/redis-master created\n"
May 12 13:17:59.497: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 12 13:17:59.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7443'
May 12 13:17:59.909: INFO: stderr: ""
May 12 13:17:59.909: INFO: stdout: "service/frontend created\n"
May 12 13:17:59.909: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

May 12 13:17:59.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7443'
May 12 13:18:00.319: INFO: stderr: ""
May 12 13:18:00.319: INFO: stdout: "deployment.apps/frontend created\n"
May 12 13:18:00.319: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 12 13:18:00.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7443'
May 12 13:18:00.700: INFO: stderr: ""
May 12 13:18:00.700: INFO: stdout: "deployment.apps/redis-master created\n"
May 12 13:18:00.701: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

May 12 13:18:00.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7443'
May 12 13:18:01.102: INFO: stderr: ""
May 12 13:18:01.102: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
May 12 13:18:01.102: INFO: Waiting for all frontend pods to be Running.
May 12 13:18:16.152: INFO: Waiting for frontend to serve content.
May 12 13:18:17.227: INFO: Trying to add a new entry to the guestbook.
May 12 13:18:17.302: INFO: Verifying that added entry can be retrieved.
May 12 13:18:17.329: INFO: Failed to get response from guestbook.
err: , response: {"data": ""}
STEP: using delete to clean up resources
May 12 13:18:22.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7443'
May 12 13:18:23.746: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 13:18:23.746: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
May 12 13:18:23.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7443'
May 12 13:18:24.697: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 13:18:24.697: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
May 12 13:18:24.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7443'
May 12 13:18:24.923: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 13:18:24.923: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 12 13:18:24.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7443'
May 12 13:18:25.060: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 13:18:25.060: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 12 13:18:25.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7443'
May 12 13:18:25.865: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 13:18:25.865: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
May 12 13:18:25.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7443'
May 12 13:18:27.496: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 13:18:27.496: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:18:27.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7443" for this suite.
May 12 13:19:15.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:19:15.219: INFO: namespace kubectl-7443 deletion completed in 47.204999928s

• [SLOW TEST:76.582 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:19:15.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-26c968da-bd58-45d0-8a85-990ce1dd3dbd
STEP: Creating a pod to test consume secrets
May 12 13:19:15.802: INFO: Waiting up to 5m0s for pod "pod-secrets-fbc419eb-fd28-40cb-b962-076f4d4c9699" in namespace "secrets-6135" to be "success or failure"
May 12 13:19:15.836: INFO: Pod "pod-secrets-fbc419eb-fd28-40cb-b962-076f4d4c9699": Phase="Pending", Reason="", readiness=false. Elapsed: 34.354949ms
May 12 13:19:18.181: INFO: Pod "pod-secrets-fbc419eb-fd28-40cb-b962-076f4d4c9699": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379130895s
May 12 13:19:20.185: INFO: Pod "pod-secrets-fbc419eb-fd28-40cb-b962-076f4d4c9699": Phase="Pending", Reason="", readiness=false. Elapsed: 4.383136796s
May 12 13:19:22.188: INFO: Pod "pod-secrets-fbc419eb-fd28-40cb-b962-076f4d4c9699": Phase="Running", Reason="", readiness=true. Elapsed: 6.386277849s
May 12 13:19:24.192: INFO: Pod "pod-secrets-fbc419eb-fd28-40cb-b962-076f4d4c9699": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.389969229s
STEP: Saw pod success
May 12 13:19:24.192: INFO: Pod "pod-secrets-fbc419eb-fd28-40cb-b962-076f4d4c9699" satisfied condition "success or failure"
May 12 13:19:24.195: INFO: Trying to get logs from node iruya-worker pod pod-secrets-fbc419eb-fd28-40cb-b962-076f4d4c9699 container secret-volume-test:
STEP: delete the pod
May 12 13:19:24.578: INFO: Waiting for pod pod-secrets-fbc419eb-fd28-40cb-b962-076f4d4c9699 to disappear
May 12 13:19:24.657: INFO: Pod pod-secrets-fbc419eb-fd28-40cb-b962-076f4d4c9699 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:19:24.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6135" for this suite.
May 12 13:19:30.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:19:30.913: INFO: namespace secrets-6135 deletion completed in 6.250415019s

• [SLOW TEST:15.694 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:19:30.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 12 13:19:31.028: INFO: Waiting up to 5m0s for pod "downward-api-8d3342a7-1fa7-4607-b74e-aeea815b3639" in namespace "downward-api-3051" to be "success or failure"
May 12 13:19:31.048: INFO: Pod "downward-api-8d3342a7-1fa7-4607-b74e-aeea815b3639": Phase="Pending", Reason="", readiness=false. Elapsed: 19.533181ms
May 12 13:19:33.619: INFO: Pod "downward-api-8d3342a7-1fa7-4607-b74e-aeea815b3639": Phase="Pending", Reason="", readiness=false. Elapsed: 2.59064581s
May 12 13:19:35.630: INFO: Pod "downward-api-8d3342a7-1fa7-4607-b74e-aeea815b3639": Phase="Running", Reason="", readiness=true. Elapsed: 4.601954936s
May 12 13:19:37.634: INFO: Pod "downward-api-8d3342a7-1fa7-4607-b74e-aeea815b3639": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.606029683s
STEP: Saw pod success
May 12 13:19:37.634: INFO: Pod "downward-api-8d3342a7-1fa7-4607-b74e-aeea815b3639" satisfied condition "success or failure"
May 12 13:19:37.638: INFO: Trying to get logs from node iruya-worker pod downward-api-8d3342a7-1fa7-4607-b74e-aeea815b3639 container dapi-container:
STEP: delete the pod
May 12 13:19:37.672: INFO: Waiting for pod downward-api-8d3342a7-1fa7-4607-b74e-aeea815b3639 to disappear
May 12 13:19:37.715: INFO: Pod downward-api-8d3342a7-1fa7-4607-b74e-aeea815b3639 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:19:37.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3051" for this suite.
May 12 13:19:43.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:19:43.823: INFO: namespace downward-api-3051 deletion completed in 6.104153555s

• [SLOW TEST:12.910 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:19:43.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 12 13:19:43.908: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e4c51d8-79b7-4cbe-9149-1cafcb90fd71" in namespace "projected-2693" to be "success or failure"
May 12 13:19:43.919: INFO: Pod "downwardapi-volume-1e4c51d8-79b7-4cbe-9149-1cafcb90fd71": Phase="Pending", Reason="", readiness=false. Elapsed: 11.500124ms
May 12 13:19:46.230: INFO: Pod "downwardapi-volume-1e4c51d8-79b7-4cbe-9149-1cafcb90fd71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322322962s
May 12 13:19:48.234: INFO: Pod "downwardapi-volume-1e4c51d8-79b7-4cbe-9149-1cafcb90fd71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.326668244s
STEP: Saw pod success
May 12 13:19:48.234: INFO: Pod "downwardapi-volume-1e4c51d8-79b7-4cbe-9149-1cafcb90fd71" satisfied condition "success or failure"
May 12 13:19:48.237: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1e4c51d8-79b7-4cbe-9149-1cafcb90fd71 container client-container:
STEP: delete the pod
May 12 13:19:48.393: INFO: Waiting for pod downwardapi-volume-1e4c51d8-79b7-4cbe-9149-1cafcb90fd71 to disappear
May 12 13:19:48.466: INFO: Pod downwardapi-volume-1e4c51d8-79b7-4cbe-9149-1cafcb90fd71 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:19:48.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2693" for this suite.
May 12 13:19:54.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:19:54.678: INFO: namespace projected-2693 deletion completed in 6.2071482s

• [SLOW TEST:10.854 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:19:54.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
May 12 13:19:54.789: INFO: namespace kubectl-9698
May 12 13:19:54.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9698'
May 12 13:19:55.078: INFO: stderr: ""
May 12 13:19:55.078: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 12 13:19:56.082: INFO: Selector matched 1 pods for map[app:redis]
May 12 13:19:56.082: INFO: Found 0 / 1
May 12 13:19:57.224: INFO: Selector matched 1 pods for map[app:redis]
May 12 13:19:57.224: INFO: Found 0 / 1
May 12 13:19:58.083: INFO: Selector matched 1 pods for map[app:redis]
May 12 13:19:58.083: INFO: Found 0 / 1
May 12 13:19:59.082: INFO: Selector matched 1 pods for map[app:redis]
May 12 13:19:59.082: INFO: Found 0 / 1
May 12 13:20:00.127: INFO: Selector matched 1 pods for map[app:redis]
May 12 13:20:00.128: INFO: Found 1 / 1
May 12 13:20:00.128: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 12 13:20:00.130: INFO: Selector matched 1 pods for map[app:redis]
May 12 13:20:00.130: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 12 13:20:00.130: INFO: wait on redis-master startup in kubectl-9698
May 12 13:20:00.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dbjvk redis-master --namespace=kubectl-9698'
May 12 13:20:00.226: INFO: stderr: ""
May 12 13:20:00.226: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 May 13:19:58.781 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 May 13:19:58.781 # Server started, Redis version 3.2.12\n1:M 12 May 13:19:58.781 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 May 13:19:58.781 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
May 12 13:20:00.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9698'
May 12 13:20:01.201: INFO: stderr: ""
May 12 13:20:01.202: INFO: stdout: "service/rm2 exposed\n"
May 12 13:20:01.226: INFO: Service rm2 in namespace kubectl-9698 found.
STEP: exposing service
May 12 13:20:03.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9698'
May 12 13:20:03.384: INFO: stderr: ""
May 12 13:20:03.384: INFO: stdout: "service/rm3 exposed\n"
May 12 13:20:03.395: INFO: Service rm3 in namespace kubectl-9698 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:20:05.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9698" for this suite.
May 12 13:20:29.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:20:29.615: INFO: namespace kubectl-9698 deletion completed in 24.20964362s

• [SLOW TEST:34.937 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:20:29.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-6a3199ce-7cc0-4eb3-b060-d9008c69c6e1
STEP: Creating a pod to test consume configMaps
May 12 13:20:29.755: INFO: Waiting up to 5m0s for pod "pod-configmaps-50971207-423a-40c1-8795-6242836b8e96" in namespace "configmap-3256" to be "success or failure"
May 12 13:20:29.925: INFO: Pod "pod-configmaps-50971207-423a-40c1-8795-6242836b8e96": Phase="Pending", Reason="", readiness=false. Elapsed: 169.46102ms
May 12 13:20:31.928: INFO: Pod "pod-configmaps-50971207-423a-40c1-8795-6242836b8e96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172745117s
May 12 13:20:34.051: INFO: Pod "pod-configmaps-50971207-423a-40c1-8795-6242836b8e96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29553184s
May 12 13:20:36.055: INFO: Pod "pod-configmaps-50971207-423a-40c1-8795-6242836b8e96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.299563183s
STEP: Saw pod success
May 12 13:20:36.055: INFO: Pod "pod-configmaps-50971207-423a-40c1-8795-6242836b8e96" satisfied condition "success or failure"
May 12 13:20:36.057: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-50971207-423a-40c1-8795-6242836b8e96 container configmap-volume-test:
STEP: delete the pod
May 12 13:20:36.090: INFO: Waiting for pod pod-configmaps-50971207-423a-40c1-8795-6242836b8e96 to disappear
May 12 13:20:36.140: INFO: Pod pod-configmaps-50971207-423a-40c1-8795-6242836b8e96 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:20:36.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3256" for this suite.
May 12 13:20:42.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:20:42.302: INFO: namespace configmap-3256 deletion completed in 6.158240804s

• [SLOW TEST:12.687 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:20:42.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
May 12 13:20:48.929: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5266 pod-service-account-473a1db5-a6ea-43f1-aa1b-37a34e436c1e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May 12 13:20:51.969: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5266 pod-service-account-473a1db5-a6ea-43f1-aa1b-37a34e436c1e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May 12 13:20:52.168: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5266 pod-service-account-473a1db5-a6ea-43f1-aa1b-37a34e436c1e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:20:52.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5266" for this suite.
May 12 13:20:58.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:20:58.582: INFO: namespace svcaccounts-5266 deletion completed in 6.201340834s

• [SLOW TEST:16.279 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
  should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:20:58.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 13:20:58.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-7628' May 12 13:20:58.851: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 13:20:58.851: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 May 12 13:21:03.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7628' May 12 13:21:03.138: INFO: stderr: "" May 12 13:21:03.138: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:21:03.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7628" for this suite. 
May 12 13:23:05.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:23:05.271: INFO: namespace kubectl-7628 deletion completed in 2m2.130192758s
• [SLOW TEST:126.689 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:23:05.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-7ec22caf-d2a4-4cb0-81bf-d7a51cc967f5
STEP: Creating a pod to test consume secrets
May 12 13:23:05.421: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8e17d7f6-1c3f-41e1-ba1d-e143368dda77" in namespace "projected-2737" to be "success or failure"
May 12 13:23:05.475: INFO: Pod "pod-projected-secrets-8e17d7f6-1c3f-41e1-ba1d-e143368dda77": Phase="Pending", Reason="", readiness=false.
Elapsed: 53.184121ms
May 12 13:23:07.685: INFO: Pod "pod-projected-secrets-8e17d7f6-1c3f-41e1-ba1d-e143368dda77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263091702s
May 12 13:23:09.689: INFO: Pod "pod-projected-secrets-8e17d7f6-1c3f-41e1-ba1d-e143368dda77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267869851s
May 12 13:23:11.694: INFO: Pod "pod-projected-secrets-8e17d7f6-1c3f-41e1-ba1d-e143368dda77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.272361962s
STEP: Saw pod success
May 12 13:23:11.694: INFO: Pod "pod-projected-secrets-8e17d7f6-1c3f-41e1-ba1d-e143368dda77" satisfied condition "success or failure"
May 12 13:23:11.697: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-8e17d7f6-1c3f-41e1-ba1d-e143368dda77 container projected-secret-volume-test:
STEP: delete the pod
May 12 13:23:11.723: INFO: Waiting for pod pod-projected-secrets-8e17d7f6-1c3f-41e1-ba1d-e143368dda77 to disappear
May 12 13:23:11.728: INFO: Pod pod-projected-secrets-8e17d7f6-1c3f-41e1-ba1d-e143368dda77 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:23:11.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2737" for this suite.
May 12 13:23:17.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:23:17.923: INFO: namespace projected-2737 deletion completed in 6.192601432s
• [SLOW TEST:12.651 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:23:17.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 12 13:23:18.228: INFO: PodSpec: initContainers in spec.initContainers
May 12 13:24:10.923: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4f7308c2-e61b-4843-9a8c-111445e16a89", GenerateName:"", Namespace:"init-container-2869",
SelfLink:"/api/v1/namespaces/init-container-2869/pods/pod-init-4f7308c2-e61b-4843-9a8c-111445e16a89", UID:"6088d3f2-26b9-48ce-b0c5-1c47bbfcfd14", ResourceVersion:"10485779", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724886598, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"228507452"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ljvm7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00234fc80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ljvm7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ljvm7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ljvm7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00268e7e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002bcca80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc00268e870)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00268e890)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00268e898), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00268e89c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886598, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886598, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886598, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724886598, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.36", StartTime:(*v1.Time)(0xc0013f4760), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0013f47a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00151e540)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://c78efd1f2b1ad35b60b208b4726260e3b5ee2dead79b8392fa4977b7d107b597"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0013f47c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0013f4780), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:24:10.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2869" for this suite.
May 12 13:24:35.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:24:35.577: INFO: namespace init-container-2869 deletion completed in 24.564229571s
• [SLOW TEST:77.654 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:24:35.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-aff716c4-5be3-4179-850f-2846d0a374a1
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:24:35.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7430" for this suite.
May 12 13:24:42.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:24:42.356: INFO: namespace configmap-7430 deletion completed in 6.454172824s
• [SLOW TEST:6.778 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:24:42.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 12 13:24:42.579: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1534,SelfLink:/api/v1/namespaces/watch-1534/configmaps/e2e-watch-test-configmap-a,UID:c6ebcb4b-a1f4-4e41-90a1-f4d0481838e4,ResourceVersion:10485870,Generation:0,CreationTimestamp:2020-05-12 13:24:42 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 12 13:24:42.579: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1534,SelfLink:/api/v1/namespaces/watch-1534/configmaps/e2e-watch-test-configmap-a,UID:c6ebcb4b-a1f4-4e41-90a1-f4d0481838e4,ResourceVersion:10485870,Generation:0,CreationTimestamp:2020-05-12 13:24:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May 12 13:24:52.586: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1534,SelfLink:/api/v1/namespaces/watch-1534/configmaps/e2e-watch-test-configmap-a,UID:c6ebcb4b-a1f4-4e41-90a1-f4d0481838e4,ResourceVersion:10485890,Generation:0,CreationTimestamp:2020-05-12 13:24:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
May 12 13:24:52.587: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1534,SelfLink:/api/v1/namespaces/watch-1534/configmaps/e2e-watch-test-configmap-a,UID:c6ebcb4b-a1f4-4e41-90a1-f4d0481838e4,ResourceVersion:10485890,Generation:0,CreationTimestamp:2020-05-12 13:24:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May 12 13:25:02.669: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1534,SelfLink:/api/v1/namespaces/watch-1534/configmaps/e2e-watch-test-configmap-a,UID:c6ebcb4b-a1f4-4e41-90a1-f4d0481838e4,ResourceVersion:10485910,Generation:0,CreationTimestamp:2020-05-12 13:24:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 12 13:25:02.669: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1534,SelfLink:/api/v1/namespaces/watch-1534/configmaps/e2e-watch-test-configmap-a,UID:c6ebcb4b-a1f4-4e41-90a1-f4d0481838e4,ResourceVersion:10485910,Generation:0,CreationTimestamp:2020-05-12 13:24:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap:
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May 12 13:25:12.674: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1534,SelfLink:/api/v1/namespaces/watch-1534/configmaps/e2e-watch-test-configmap-a,UID:c6ebcb4b-a1f4-4e41-90a1-f4d0481838e4,ResourceVersion:10485931,Generation:0,CreationTimestamp:2020-05-12 13:24:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 12 13:25:12.674: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1534,SelfLink:/api/v1/namespaces/watch-1534/configmaps/e2e-watch-test-configmap-a,UID:c6ebcb4b-a1f4-4e41-90a1-f4d0481838e4,ResourceVersion:10485931,Generation:0,CreationTimestamp:2020-05-12 13:24:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May 12 13:25:22.681: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1534,SelfLink:/api/v1/namespaces/watch-1534/configmaps/e2e-watch-test-configmap-b,UID:dcc97734-cb86-45e5-9ab2-a447efeca308,ResourceVersion:10485953,Generation:0,CreationTimestamp:2020-05-12 13:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 12 13:25:22.681: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1534,SelfLink:/api/v1/namespaces/watch-1534/configmaps/e2e-watch-test-configmap-b,UID:dcc97734-cb86-45e5-9ab2-a447efeca308,ResourceVersion:10485953,Generation:0,CreationTimestamp:2020-05-12 13:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
May 12 13:25:32.688: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1534,SelfLink:/api/v1/namespaces/watch-1534/configmaps/e2e-watch-test-configmap-b,UID:dcc97734-cb86-45e5-9ab2-a447efeca308,ResourceVersion:10485973,Generation:0,CreationTimestamp:2020-05-12 13:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap:
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 12 13:25:32.688: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1534,SelfLink:/api/v1/namespaces/watch-1534/configmaps/e2e-watch-test-configmap-b,UID:dcc97734-cb86-45e5-9ab2-a447efeca308,ResourceVersion:10485973,Generation:0,CreationTimestamp:2020-05-12 13:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:25:42.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1534" for this suite.
May 12 13:25:49.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:25:49.244: INFO: namespace watch-1534 deletion completed in 6.203970643s
• [SLOW TEST:66.888 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:25:49.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-dfa703f6-8b57-494d-9f9a-42db37e76d15
STEP: Creating a pod to test consume secrets
May 12 13:25:49.351: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f93227d8-5b06-4174-a1b4-c37f8dcece64" in namespace "projected-321" to be "success or failure"
May 12 13:25:49.375: INFO: Pod "pod-projected-secrets-f93227d8-5b06-4174-a1b4-c37f8dcece64": Phase="Pending", Reason="", readiness=false.
Elapsed: 24.510159ms
May 12 13:25:51.862: INFO: Pod "pod-projected-secrets-f93227d8-5b06-4174-a1b4-c37f8dcece64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.511497751s
May 12 13:25:53.866: INFO: Pod "pod-projected-secrets-f93227d8-5b06-4174-a1b4-c37f8dcece64": Phase="Running", Reason="", readiness=true. Elapsed: 4.515314357s
May 12 13:25:55.871: INFO: Pod "pod-projected-secrets-f93227d8-5b06-4174-a1b4-c37f8dcece64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.519675072s
STEP: Saw pod success
May 12 13:25:55.871: INFO: Pod "pod-projected-secrets-f93227d8-5b06-4174-a1b4-c37f8dcece64" satisfied condition "success or failure"
May 12 13:25:55.874: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-f93227d8-5b06-4174-a1b4-c37f8dcece64 container projected-secret-volume-test:
STEP: delete the pod
May 12 13:25:55.990: INFO: Waiting for pod pod-projected-secrets-f93227d8-5b06-4174-a1b4-c37f8dcece64 to disappear
May 12 13:25:55.992: INFO: Pod pod-projected-secrets-f93227d8-5b06-4174-a1b4-c37f8dcece64 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:25:55.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-321" for this suite.
May 12 13:26:04.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:26:04.183: INFO: namespace projected-321 deletion completed in 8.187249074s
• [SLOW TEST:14.939 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:26:04.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-38459c73-f293-4d7e-8e16-3e61dc41e642
May 12 13:26:04.291: INFO: Pod name my-hostname-basic-38459c73-f293-4d7e-8e16-3e61dc41e642: Found 0 pods out of 1
May 12 13:26:09.626: INFO: Pod name my-hostname-basic-38459c73-f293-4d7e-8e16-3e61dc41e642: Found 1 pods out of 1
May 12 13:26:09.626: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-38459c73-f293-4d7e-8e16-3e61dc41e642" are running
May 12 13:26:10.191: INFO: Pod
"my-hostname-basic-38459c73-f293-4d7e-8e16-3e61dc41e642-kkwq9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 13:26:04 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 13:26:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 13:26:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 13:26:04 +0000 UTC Reason: Message:}]) May 12 13:26:10.191: INFO: Trying to dial the pod May 12 13:26:15.201: INFO: Controller my-hostname-basic-38459c73-f293-4d7e-8e16-3e61dc41e642: Got expected result from replica 1 [my-hostname-basic-38459c73-f293-4d7e-8e16-3e61dc41e642-kkwq9]: "my-hostname-basic-38459c73-f293-4d7e-8e16-3e61dc41e642-kkwq9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:26:15.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6996" for this suite. 
May 12 13:26:23.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:26:23.427: INFO: namespace replication-controller-6996 deletion completed in 8.223035526s • [SLOW TEST:19.244 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:26:23.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 12 13:26:23.812: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 13:26:24.082: INFO: Waiting for terminating namespaces to be deleted... 
May 12 13:26:24.084: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 12 13:26:24.090: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 12 13:26:24.090: INFO: Container kube-proxy ready: true, restart count 0 May 12 13:26:24.090: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 12 13:26:24.090: INFO: Container kindnet-cni ready: true, restart count 0 May 12 13:26:24.090: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 12 13:26:24.097: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 12 13:26:24.097: INFO: Container coredns ready: true, restart count 0 May 12 13:26:24.097: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 12 13:26:24.097: INFO: Container coredns ready: true, restart count 0 May 12 13:26:24.097: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 12 13:26:24.097: INFO: Container kube-proxy ready: true, restart count 0 May 12 13:26:24.097: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 12 13:26:24.097: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e4ad1f9846d9a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:26:25.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-325" for this suite. May 12 13:26:31.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:26:31.291: INFO: namespace sched-pred-325 deletion completed in 6.078388368s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.863 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:26:31.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-8e98cd76-9a80-4365-9283-08150f9a3115 
STEP: Creating a pod to test consume configMaps May 12 13:26:31.486: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-84d56f0d-7274-432a-915b-8c270907cfee" in namespace "projected-1376" to be "success or failure" May 12 13:26:31.494: INFO: Pod "pod-projected-configmaps-84d56f0d-7274-432a-915b-8c270907cfee": Phase="Pending", Reason="", readiness=false. Elapsed: 7.710312ms May 12 13:26:34.152: INFO: Pod "pod-projected-configmaps-84d56f0d-7274-432a-915b-8c270907cfee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.666191133s May 12 13:26:36.156: INFO: Pod "pod-projected-configmaps-84d56f0d-7274-432a-915b-8c270907cfee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.67020527s May 12 13:26:38.160: INFO: Pod "pod-projected-configmaps-84d56f0d-7274-432a-915b-8c270907cfee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.673821345s STEP: Saw pod success May 12 13:26:38.160: INFO: Pod "pod-projected-configmaps-84d56f0d-7274-432a-915b-8c270907cfee" satisfied condition "success or failure" May 12 13:26:38.163: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-84d56f0d-7274-432a-915b-8c270907cfee container projected-configmap-volume-test: STEP: delete the pod May 12 13:26:38.255: INFO: Waiting for pod pod-projected-configmaps-84d56f0d-7274-432a-915b-8c270907cfee to disappear May 12 13:26:38.304: INFO: Pod pod-projected-configmaps-84d56f0d-7274-432a-915b-8c270907cfee no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:26:38.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1376" for this suite. 
May 12 13:26:44.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:26:44.636: INFO: namespace projected-1376 deletion completed in 6.32893906s • [SLOW TEST:13.345 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:26:44.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-0ae2655a-12a7-4613-aac8-70e8acca4edc STEP: Creating configMap with name cm-test-opt-upd-c87f4fc8-6003-4579-a940-da800749e6c8 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-0ae2655a-12a7-4613-aac8-70e8acca4edc STEP: Updating configmap cm-test-opt-upd-c87f4fc8-6003-4579-a940-da800749e6c8 STEP: Creating configMap with name cm-test-opt-create-ac201aae-4eb6-4d67-aa6d-10e3dd2700f9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:28:09.048: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7327" for this suite. May 12 13:28:33.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:28:33.148: INFO: namespace configmap-7327 deletion completed in 24.097601789s • [SLOW TEST:108.512 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:28:33.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-95bc5494-d0b6-48f3-bb15-c1ae85623bd7 STEP: Creating a pod to test consume secrets May 12 13:28:33.259: INFO: Waiting up to 5m0s for pod "pod-secrets-c3bc5062-c0d7-49c6-aee0-28b510f06b6b" in namespace "secrets-9030" to be "success or failure" May 12 13:28:33.263: INFO: Pod "pod-secrets-c3bc5062-c0d7-49c6-aee0-28b510f06b6b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.897553ms May 12 13:28:35.267: INFO: Pod "pod-secrets-c3bc5062-c0d7-49c6-aee0-28b510f06b6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007501652s May 12 13:28:37.271: INFO: Pod "pod-secrets-c3bc5062-c0d7-49c6-aee0-28b510f06b6b": Phase="Running", Reason="", readiness=true. Elapsed: 4.01161361s May 12 13:28:39.275: INFO: Pod "pod-secrets-c3bc5062-c0d7-49c6-aee0-28b510f06b6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015944886s STEP: Saw pod success May 12 13:28:39.275: INFO: Pod "pod-secrets-c3bc5062-c0d7-49c6-aee0-28b510f06b6b" satisfied condition "success or failure" May 12 13:28:39.278: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c3bc5062-c0d7-49c6-aee0-28b510f06b6b container secret-volume-test: STEP: delete the pod May 12 13:28:39.366: INFO: Waiting for pod pod-secrets-c3bc5062-c0d7-49c6-aee0-28b510f06b6b to disappear May 12 13:28:39.444: INFO: Pod pod-secrets-c3bc5062-c0d7-49c6-aee0-28b510f06b6b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:28:39.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9030" for this suite. 
May 12 13:28:45.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:28:45.626: INFO: namespace secrets-9030 deletion completed in 6.178406729s • [SLOW TEST:12.477 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:28:45.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-7e45a069-ceaa-444e-8af5-995414bd6023 STEP: Creating a pod to test consume secrets May 12 13:28:45.711: INFO: Waiting up to 5m0s for pod "pod-secrets-4c6dcd94-3b8b-4f73-a0e1-0b9222baa4cb" in namespace "secrets-8253" to be "success or failure" May 12 13:28:45.819: INFO: Pod "pod-secrets-4c6dcd94-3b8b-4f73-a0e1-0b9222baa4cb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 108.454721ms May 12 13:28:47.823: INFO: Pod "pod-secrets-4c6dcd94-3b8b-4f73-a0e1-0b9222baa4cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112509307s May 12 13:28:49.827: INFO: Pod "pod-secrets-4c6dcd94-3b8b-4f73-a0e1-0b9222baa4cb": Phase="Running", Reason="", readiness=true. Elapsed: 4.116406266s May 12 13:28:51.831: INFO: Pod "pod-secrets-4c6dcd94-3b8b-4f73-a0e1-0b9222baa4cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.120321026s STEP: Saw pod success May 12 13:28:51.831: INFO: Pod "pod-secrets-4c6dcd94-3b8b-4f73-a0e1-0b9222baa4cb" satisfied condition "success or failure" May 12 13:28:51.834: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-4c6dcd94-3b8b-4f73-a0e1-0b9222baa4cb container secret-volume-test: STEP: delete the pod May 12 13:28:52.011: INFO: Waiting for pod pod-secrets-4c6dcd94-3b8b-4f73-a0e1-0b9222baa4cb to disappear May 12 13:28:52.043: INFO: Pod pod-secrets-4c6dcd94-3b8b-4f73-a0e1-0b9222baa4cb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:28:52.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8253" for this suite. 
May 12 13:28:58.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:28:58.226: INFO: namespace secrets-8253 deletion completed in 6.179058311s • [SLOW TEST:12.599 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:28:58.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4230 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 13:28:58.306: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 13:29:24.476: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.41:8080/dial?request=hostName&protocol=udp&host=10.244.2.40&port=8081&tries=1'] Namespace:pod-network-test-4230 PodName:host-test-container-pod ContainerName:hostexec 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 13:29:24.476: INFO: >>> kubeConfig: /root/.kube/config I0512 13:29:24.509667 6 log.go:172] (0xc000c0f4a0) (0xc00171efa0) Create stream I0512 13:29:24.509699 6 log.go:172] (0xc000c0f4a0) (0xc00171efa0) Stream added, broadcasting: 1 I0512 13:29:24.511460 6 log.go:172] (0xc000c0f4a0) Reply frame received for 1 I0512 13:29:24.511499 6 log.go:172] (0xc000c0f4a0) (0xc00063c140) Create stream I0512 13:29:24.511511 6 log.go:172] (0xc000c0f4a0) (0xc00063c140) Stream added, broadcasting: 3 I0512 13:29:24.512385 6 log.go:172] (0xc000c0f4a0) Reply frame received for 3 I0512 13:29:24.512445 6 log.go:172] (0xc000c0f4a0) (0xc0002f28c0) Create stream I0512 13:29:24.512459 6 log.go:172] (0xc000c0f4a0) (0xc0002f28c0) Stream added, broadcasting: 5 I0512 13:29:24.513405 6 log.go:172] (0xc000c0f4a0) Reply frame received for 5 I0512 13:29:24.607282 6 log.go:172] (0xc000c0f4a0) Data frame received for 3 I0512 13:29:24.607318 6 log.go:172] (0xc00063c140) (3) Data frame handling I0512 13:29:24.607340 6 log.go:172] (0xc00063c140) (3) Data frame sent I0512 13:29:24.608093 6 log.go:172] (0xc000c0f4a0) Data frame received for 3 I0512 13:29:24.608119 6 log.go:172] (0xc00063c140) (3) Data frame handling I0512 13:29:24.608157 6 log.go:172] (0xc000c0f4a0) Data frame received for 5 I0512 13:29:24.608174 6 log.go:172] (0xc0002f28c0) (5) Data frame handling I0512 13:29:24.609751 6 log.go:172] (0xc000c0f4a0) Data frame received for 1 I0512 13:29:24.609806 6 log.go:172] (0xc00171efa0) (1) Data frame handling I0512 13:29:24.609839 6 log.go:172] (0xc00171efa0) (1) Data frame sent I0512 13:29:24.609862 6 log.go:172] (0xc000c0f4a0) (0xc00171efa0) Stream removed, broadcasting: 1 I0512 13:29:24.609885 6 log.go:172] (0xc000c0f4a0) Go away received I0512 13:29:24.609994 6 log.go:172] (0xc000c0f4a0) (0xc00171efa0) Stream removed, broadcasting: 1 I0512 13:29:24.610018 6 log.go:172] (0xc000c0f4a0) (0xc00063c140) Stream removed, 
broadcasting: 3 I0512 13:29:24.610034 6 log.go:172] (0xc000c0f4a0) (0xc0002f28c0) Stream removed, broadcasting: 5 May 12 13:29:24.610: INFO: Waiting for endpoints: map[] May 12 13:29:24.613: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.41:8080/dial?request=hostName&protocol=udp&host=10.244.1.29&port=8081&tries=1'] Namespace:pod-network-test-4230 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 13:29:24.613: INFO: >>> kubeConfig: /root/.kube/config I0512 13:29:24.643699 6 log.go:172] (0xc000c0fef0) (0xc00171f720) Create stream I0512 13:29:24.643729 6 log.go:172] (0xc000c0fef0) (0xc00171f720) Stream added, broadcasting: 1 I0512 13:29:24.645311 6 log.go:172] (0xc000c0fef0) Reply frame received for 1 I0512 13:29:24.645362 6 log.go:172] (0xc000c0fef0) (0xc00171f860) Create stream I0512 13:29:24.645376 6 log.go:172] (0xc000c0fef0) (0xc00171f860) Stream added, broadcasting: 3 I0512 13:29:24.646073 6 log.go:172] (0xc000c0fef0) Reply frame received for 3 I0512 13:29:24.646111 6 log.go:172] (0xc000c0fef0) (0xc00063c320) Create stream I0512 13:29:24.646121 6 log.go:172] (0xc000c0fef0) (0xc00063c320) Stream added, broadcasting: 5 I0512 13:29:24.646958 6 log.go:172] (0xc000c0fef0) Reply frame received for 5 I0512 13:29:24.713644 6 log.go:172] (0xc000c0fef0) Data frame received for 3 I0512 13:29:24.713693 6 log.go:172] (0xc00171f860) (3) Data frame handling I0512 13:29:24.713720 6 log.go:172] (0xc00171f860) (3) Data frame sent I0512 13:29:24.713931 6 log.go:172] (0xc000c0fef0) Data frame received for 3 I0512 13:29:24.713958 6 log.go:172] (0xc00171f860) (3) Data frame handling I0512 13:29:24.713983 6 log.go:172] (0xc000c0fef0) Data frame received for 5 I0512 13:29:24.713994 6 log.go:172] (0xc00063c320) (5) Data frame handling I0512 13:29:24.715297 6 log.go:172] (0xc000c0fef0) Data frame received for 1 I0512 13:29:24.715336 6 log.go:172] (0xc00171f720) (1) Data frame 
handling I0512 13:29:24.715345 6 log.go:172] (0xc00171f720) (1) Data frame sent I0512 13:29:24.715355 6 log.go:172] (0xc000c0fef0) (0xc00171f720) Stream removed, broadcasting: 1 I0512 13:29:24.715369 6 log.go:172] (0xc000c0fef0) Go away received I0512 13:29:24.715451 6 log.go:172] (0xc000c0fef0) (0xc00171f720) Stream removed, broadcasting: 1 I0512 13:29:24.715466 6 log.go:172] (0xc000c0fef0) (0xc00171f860) Stream removed, broadcasting: 3 I0512 13:29:24.715472 6 log.go:172] (0xc000c0fef0) (0xc00063c320) Stream removed, broadcasting: 5 May 12 13:29:24.715: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:29:24.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4230" for this suite. May 12 13:29:48.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:29:48.823: INFO: namespace pod-network-test-4230 deletion completed in 24.09884836s • [SLOW TEST:50.596 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client May 12 13:29:48.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 13:29:48.883: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 12 13:29:48.889: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:29:48.902: INFO: Number of nodes with available pods: 0 May 12 13:29:48.902: INFO: Node iruya-worker is running more than one daemon pod May 12 13:29:49.908: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:29:49.912: INFO: Number of nodes with available pods: 0 May 12 13:29:49.912: INFO: Node iruya-worker is running more than one daemon pod May 12 13:29:51.055: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:29:51.058: INFO: Number of nodes with available pods: 0 May 12 13:29:51.058: INFO: Node iruya-worker is running more than one daemon pod May 12 13:29:52.102: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:29:52.105: INFO: Number of nodes with available pods: 0 May 12 13:29:52.105: INFO: Node 
iruya-worker is running more than one daemon pod May 12 13:29:52.922: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:29:52.955: INFO: Number of nodes with available pods: 0 May 12 13:29:52.955: INFO: Node iruya-worker is running more than one daemon pod May 12 13:29:53.934: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:29:53.936: INFO: Number of nodes with available pods: 2 May 12 13:29:53.936: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 12 13:29:53.973: INFO: Wrong image for pod: daemon-set-7wddj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:29:53.973: INFO: Wrong image for pod: daemon-set-rf8bq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:29:53.990: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:29:54.995: INFO: Wrong image for pod: daemon-set-7wddj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:29:54.995: INFO: Wrong image for pod: daemon-set-rf8bq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:29:54.999: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:29:55.995: INFO: Wrong image for pod: daemon-set-7wddj. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:29:55.995: INFO: Wrong image for pod: daemon-set-rf8bq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:29:55.999: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:29:56.995: INFO: Wrong image for pod: daemon-set-7wddj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:29:56.995: INFO: Wrong image for pod: daemon-set-rf8bq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:29:56.999: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:29:57.994: INFO: Wrong image for pod: daemon-set-7wddj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:29:57.994: INFO: Wrong image for pod: daemon-set-rf8bq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:29:57.994: INFO: Pod daemon-set-rf8bq is not available May 12 13:29:57.997: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:29:59.091: INFO: Wrong image for pod: daemon-set-7wddj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 12 13:29:59.091: INFO: Pod daemon-set-gvh58 is not available May 12 13:29:59.111: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:29:59.995: INFO: Wrong image for pod: daemon-set-7wddj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:29:59.995: INFO: Pod daemon-set-gvh58 is not available May 12 13:29:59.998: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:30:01.152: INFO: Wrong image for pod: daemon-set-7wddj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:30:01.152: INFO: Pod daemon-set-gvh58 is not available May 12 13:30:01.156: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:30:01.994: INFO: Wrong image for pod: daemon-set-7wddj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:30:01.994: INFO: Pod daemon-set-gvh58 is not available May 12 13:30:01.998: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:30:02.994: INFO: Wrong image for pod: daemon-set-7wddj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:30:03.247: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:30:03.995: INFO: Wrong image for pod: daemon-set-7wddj. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:30:03.999: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:30:04.995: INFO: Wrong image for pod: daemon-set-7wddj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 13:30:04.995: INFO: Pod daemon-set-7wddj is not available May 12 13:30:04.998: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:30:06.013: INFO: Pod daemon-set-4vmx7 is not available May 12 13:30:06.046: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
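For readers sifting through rollout transcripts like the one above, the repeated "Wrong image for pod" entries can be tallied mechanically to see which DaemonSet pods still carry the pre-update image. A minimal sketch in Python — the sample lines are abridged copies of entries from this log, and the script is purely illustrative, not part of the e2e framework:

```python
import re

# Abridged entries in the format emitted by the e2e DaemonSet rollout check above.
log = """\
May 12 13:30:01.994: INFO: Wrong image for pod: daemon-set-7wddj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 12 13:30:01.994: INFO: Pod daemon-set-gvh58 is not available
May 12 13:30:04.995: INFO: Wrong image for pod: daemon-set-7wddj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
"""

# Collect every pod that still reported the old image at any poll interval.
wrong_image = re.compile(r"Wrong image for pod: (\S+)\.")
stale_pods = {m.group(1) for m in wrong_image.finditer(log)}
print(sorted(stale_pods))  # → ['daemon-set-7wddj']
```

On a full transcript the same pattern shows the stale set shrinking poll by poll as the RollingUpdate replaces pods.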
May 12 13:30:06.049: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:30:06.072: INFO: Number of nodes with available pods: 1 May 12 13:30:06.072: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:30:07.075: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:30:07.078: INFO: Number of nodes with available pods: 1 May 12 13:30:07.078: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:30:08.076: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:30:08.080: INFO: Number of nodes with available pods: 1 May 12 13:30:08.080: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:30:09.076: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:30:09.080: INFO: Number of nodes with available pods: 2 May 12 13:30:09.080: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8708, will wait for the garbage collector to delete the pods May 12 13:30:09.167: INFO: Deleting DaemonSet.extensions daemon-set took: 4.352452ms May 12 13:30:09.467: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.185552ms May 12 13:30:13.970: INFO: Number of nodes with available pods: 0 May 12 13:30:13.970: INFO: Number of running nodes: 0, number of available 
pods: 0 May 12 13:30:13.972: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8708/daemonsets","resourceVersion":"10486855"},"items":null} May 12 13:30:13.974: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8708/pods","resourceVersion":"10486855"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:30:13.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8708" for this suite. May 12 13:30:22.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:30:22.068: INFO: namespace daemonsets-8708 deletion completed in 8.084384636s • [SLOW TEST:33.244 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:30:22.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:30:30.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7997" for this suite. May 12 13:30:36.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:30:36.343: INFO: namespace kubelet-test-7997 deletion completed in 6.106779251s • [SLOW TEST:14.275 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:30:36.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to 
be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 12 13:30:36.445: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 13:30:36.474: INFO: Waiting for terminating namespaces to be deleted... May 12 13:30:36.476: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 12 13:30:36.482: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 12 13:30:36.482: INFO: Container kube-proxy ready: true, restart count 0 May 12 13:30:36.482: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 12 13:30:36.482: INFO: Container kindnet-cni ready: true, restart count 0 May 12 13:30:36.482: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 12 13:30:36.488: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 12 13:30:36.488: INFO: Container kube-proxy ready: true, restart count 0 May 12 13:30:36.488: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 12 13:30:36.488: INFO: Container kindnet-cni ready: true, restart count 0 May 12 13:30:36.488: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 12 13:30:36.488: INFO: Container coredns ready: true, restart count 0 May 12 13:30:36.488: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 12 13:30:36.488: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to 
launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d88166bd-d0b4-402e-9038-2bc0e1c28645 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-d88166bd-d0b4-402e-9038-2bc0e1c28645 off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-d88166bd-d0b4-402e-9038-2bc0e1c28645 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:30:48.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1373" for this suite. May 12 13:30:59.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:30:59.129: INFO: namespace sched-pred-1373 deletion completed in 10.194351011s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:22.786 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:30:59.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9852 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 12 13:30:59.268: INFO: Found 0 stateful pods, waiting for 3 May 12 13:31:09.272: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 13:31:09.272: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 13:31:09.272: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 12 13:31:19.273: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 13:31:19.273: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 13:31:19.273: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 12 13:31:19.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9852 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 13:31:24.345: INFO: stderr: "I0512 13:31:24.159413 1352 log.go:172] (0xc000a7c4d0) (0xc0005c2a00) Create stream\nI0512 13:31:24.159460 1352 log.go:172] 
(0xc000a7c4d0) (0xc0005c2a00) Stream added, broadcasting: 1\nI0512 13:31:24.162124 1352 log.go:172] (0xc000a7c4d0) Reply frame received for 1\nI0512 13:31:24.162175 1352 log.go:172] (0xc000a7c4d0) (0xc0005c2aa0) Create stream\nI0512 13:31:24.162193 1352 log.go:172] (0xc000a7c4d0) (0xc0005c2aa0) Stream added, broadcasting: 3\nI0512 13:31:24.163324 1352 log.go:172] (0xc000a7c4d0) Reply frame received for 3\nI0512 13:31:24.163389 1352 log.go:172] (0xc000a7c4d0) (0xc000a74000) Create stream\nI0512 13:31:24.163427 1352 log.go:172] (0xc000a7c4d0) (0xc000a74000) Stream added, broadcasting: 5\nI0512 13:31:24.164457 1352 log.go:172] (0xc000a7c4d0) Reply frame received for 5\nI0512 13:31:24.292022 1352 log.go:172] (0xc000a7c4d0) Data frame received for 5\nI0512 13:31:24.292040 1352 log.go:172] (0xc000a74000) (5) Data frame handling\nI0512 13:31:24.292050 1352 log.go:172] (0xc000a74000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 13:31:24.334900 1352 log.go:172] (0xc000a7c4d0) Data frame received for 3\nI0512 13:31:24.335053 1352 log.go:172] (0xc0005c2aa0) (3) Data frame handling\nI0512 13:31:24.335079 1352 log.go:172] (0xc0005c2aa0) (3) Data frame sent\nI0512 13:31:24.335160 1352 log.go:172] (0xc000a7c4d0) Data frame received for 5\nI0512 13:31:24.335291 1352 log.go:172] (0xc000a74000) (5) Data frame handling\nI0512 13:31:24.335379 1352 log.go:172] (0xc000a7c4d0) Data frame received for 3\nI0512 13:31:24.335408 1352 log.go:172] (0xc0005c2aa0) (3) Data frame handling\nI0512 13:31:24.338115 1352 log.go:172] (0xc000a7c4d0) Data frame received for 1\nI0512 13:31:24.338149 1352 log.go:172] (0xc0005c2a00) (1) Data frame handling\nI0512 13:31:24.338172 1352 log.go:172] (0xc0005c2a00) (1) Data frame sent\nI0512 13:31:24.338199 1352 log.go:172] (0xc000a7c4d0) (0xc0005c2a00) Stream removed, broadcasting: 1\nI0512 13:31:24.338224 1352 log.go:172] (0xc000a7c4d0) Go away received\nI0512 13:31:24.338765 1352 log.go:172] (0xc000a7c4d0) (0xc0005c2a00) Stream 
removed, broadcasting: 1\nI0512 13:31:24.338794 1352 log.go:172] (0xc000a7c4d0) (0xc0005c2aa0) Stream removed, broadcasting: 3\nI0512 13:31:24.338812 1352 log.go:172] (0xc000a7c4d0) (0xc000a74000) Stream removed, broadcasting: 5\n" May 12 13:31:24.345: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 13:31:24.345: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 12 13:31:34.371: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 12 13:31:44.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9852 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:31:44.615: INFO: stderr: "I0512 13:31:44.520487 1384 log.go:172] (0xc000610420) (0xc0003706e0) Create stream\nI0512 13:31:44.520538 1384 log.go:172] (0xc000610420) (0xc0003706e0) Stream added, broadcasting: 1\nI0512 13:31:44.523033 1384 log.go:172] (0xc000610420) Reply frame received for 1\nI0512 13:31:44.523098 1384 log.go:172] (0xc000610420) (0xc000968000) Create stream\nI0512 13:31:44.523125 1384 log.go:172] (0xc000610420) (0xc000968000) Stream added, broadcasting: 3\nI0512 13:31:44.524141 1384 log.go:172] (0xc000610420) Reply frame received for 3\nI0512 13:31:44.524181 1384 log.go:172] (0xc000610420) (0xc000370780) Create stream\nI0512 13:31:44.524195 1384 log.go:172] (0xc000610420) (0xc000370780) Stream added, broadcasting: 5\nI0512 13:31:44.525537 1384 log.go:172] (0xc000610420) Reply frame received for 5\nI0512 13:31:44.608455 1384 log.go:172] (0xc000610420) Data frame received for 5\nI0512 13:31:44.608481 1384 log.go:172] (0xc000370780) (5) Data frame handling\nI0512 13:31:44.608493 1384 log.go:172] (0xc000370780) (5) Data 
frame sent\nI0512 13:31:44.608507 1384 log.go:172] (0xc000610420) Data frame received for 5\nI0512 13:31:44.608515 1384 log.go:172] (0xc000370780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0512 13:31:44.608547 1384 log.go:172] (0xc000610420) Data frame received for 3\nI0512 13:31:44.608581 1384 log.go:172] (0xc000968000) (3) Data frame handling\nI0512 13:31:44.608602 1384 log.go:172] (0xc000968000) (3) Data frame sent\nI0512 13:31:44.608612 1384 log.go:172] (0xc000610420) Data frame received for 3\nI0512 13:31:44.608619 1384 log.go:172] (0xc000968000) (3) Data frame handling\nI0512 13:31:44.610131 1384 log.go:172] (0xc000610420) Data frame received for 1\nI0512 13:31:44.610143 1384 log.go:172] (0xc0003706e0) (1) Data frame handling\nI0512 13:31:44.610150 1384 log.go:172] (0xc0003706e0) (1) Data frame sent\nI0512 13:31:44.610164 1384 log.go:172] (0xc000610420) (0xc0003706e0) Stream removed, broadcasting: 1\nI0512 13:31:44.610222 1384 log.go:172] (0xc000610420) Go away received\nI0512 13:31:44.610385 1384 log.go:172] (0xc000610420) (0xc0003706e0) Stream removed, broadcasting: 1\nI0512 13:31:44.610397 1384 log.go:172] (0xc000610420) (0xc000968000) Stream removed, broadcasting: 3\nI0512 13:31:44.610403 1384 log.go:172] (0xc000610420) (0xc000370780) Stream removed, broadcasting: 5\n" May 12 13:31:44.615: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 13:31:44.615: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 13:31:54.639: INFO: Waiting for StatefulSet statefulset-9852/ss2 to complete update May 12 13:31:54.639: INFO: Waiting for Pod statefulset-9852/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 13:31:54.639: INFO: Waiting for Pod statefulset-9852/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 13:32:04.644: INFO: Waiting for StatefulSet 
statefulset-9852/ss2 to complete update May 12 13:32:04.644: INFO: Waiting for Pod statefulset-9852/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 13:32:14.922: INFO: Waiting for StatefulSet statefulset-9852/ss2 to complete update STEP: Rolling back to a previous revision May 12 13:32:24.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9852 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 13:32:24.930: INFO: stderr: "I0512 13:32:24.774258 1405 log.go:172] (0xc0006c8a50) (0xc000292820) Create stream\nI0512 13:32:24.774329 1405 log.go:172] (0xc0006c8a50) (0xc000292820) Stream added, broadcasting: 1\nI0512 13:32:24.777799 1405 log.go:172] (0xc0006c8a50) Reply frame received for 1\nI0512 13:32:24.777832 1405 log.go:172] (0xc0006c8a50) (0xc000892000) Create stream\nI0512 13:32:24.777841 1405 log.go:172] (0xc0006c8a50) (0xc000892000) Stream added, broadcasting: 3\nI0512 13:32:24.778677 1405 log.go:172] (0xc0006c8a50) Reply frame received for 3\nI0512 13:32:24.778723 1405 log.go:172] (0xc0006c8a50) (0xc0002928c0) Create stream\nI0512 13:32:24.778744 1405 log.go:172] (0xc0006c8a50) (0xc0002928c0) Stream added, broadcasting: 5\nI0512 13:32:24.779528 1405 log.go:172] (0xc0006c8a50) Reply frame received for 5\nI0512 13:32:24.880090 1405 log.go:172] (0xc0006c8a50) Data frame received for 5\nI0512 13:32:24.880118 1405 log.go:172] (0xc0002928c0) (5) Data frame handling\nI0512 13:32:24.880136 1405 log.go:172] (0xc0002928c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 13:32:24.922716 1405 log.go:172] (0xc0006c8a50) Data frame received for 5\nI0512 13:32:24.922750 1405 log.go:172] (0xc0002928c0) (5) Data frame handling\nI0512 13:32:24.922786 1405 log.go:172] (0xc0006c8a50) Data frame received for 3\nI0512 13:32:24.922811 1405 log.go:172] (0xc000892000) (3) Data frame handling\nI0512 13:32:24.922827 1405 log.go:172] (0xc000892000) (3) 
Data frame sent\nI0512 13:32:24.922845 1405 log.go:172] (0xc0006c8a50) Data frame received for 3\nI0512 13:32:24.922850 1405 log.go:172] (0xc000892000) (3) Data frame handling\nI0512 13:32:24.924386 1405 log.go:172] (0xc0006c8a50) Data frame received for 1\nI0512 13:32:24.924415 1405 log.go:172] (0xc000292820) (1) Data frame handling\nI0512 13:32:24.924447 1405 log.go:172] (0xc000292820) (1) Data frame sent\nI0512 13:32:24.924477 1405 log.go:172] (0xc0006c8a50) (0xc000292820) Stream removed, broadcasting: 1\nI0512 13:32:24.924553 1405 log.go:172] (0xc0006c8a50) Go away received\nI0512 13:32:24.924887 1405 log.go:172] (0xc0006c8a50) (0xc000292820) Stream removed, broadcasting: 1\nI0512 13:32:24.924907 1405 log.go:172] (0xc0006c8a50) (0xc000892000) Stream removed, broadcasting: 3\nI0512 13:32:24.924918 1405 log.go:172] (0xc0006c8a50) (0xc0002928c0) Stream removed, broadcasting: 5\n" May 12 13:32:24.930: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 13:32:24.930: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 13:32:34.960: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 12 13:32:45.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9852 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 13:32:45.284: INFO: stderr: "I0512 13:32:45.205676 1424 log.go:172] (0xc000a3c370) (0xc0008de640) Create stream\nI0512 13:32:45.205748 1424 log.go:172] (0xc000a3c370) (0xc0008de640) Stream added, broadcasting: 1\nI0512 13:32:45.208131 1424 log.go:172] (0xc000a3c370) Reply frame received for 1\nI0512 13:32:45.208171 1424 log.go:172] (0xc000a3c370) (0xc0008de6e0) Create stream\nI0512 13:32:45.208188 1424 log.go:172] (0xc000a3c370) (0xc0008de6e0) Stream added, broadcasting: 3\nI0512 13:32:45.209441 1424 log.go:172] (0xc000a3c370) Reply 
frame received for 3\nI0512 13:32:45.209484 1424 log.go:172] (0xc000a3c370) (0xc0009b8000) Create stream\nI0512 13:32:45.209502 1424 log.go:172] (0xc000a3c370) (0xc0009b8000) Stream added, broadcasting: 5\nI0512 13:32:45.210627 1424 log.go:172] (0xc000a3c370) Reply frame received for 5\nI0512 13:32:45.277781 1424 log.go:172] (0xc000a3c370) Data frame received for 3\nI0512 13:32:45.277803 1424 log.go:172] (0xc0008de6e0) (3) Data frame handling\nI0512 13:32:45.277810 1424 log.go:172] (0xc0008de6e0) (3) Data frame sent\nI0512 13:32:45.277856 1424 log.go:172] (0xc000a3c370) Data frame received for 3\nI0512 13:32:45.277895 1424 log.go:172] (0xc0008de6e0) (3) Data frame handling\nI0512 13:32:45.277928 1424 log.go:172] (0xc000a3c370) Data frame received for 5\nI0512 13:32:45.277947 1424 log.go:172] (0xc0009b8000) (5) Data frame handling\nI0512 13:32:45.277966 1424 log.go:172] (0xc0009b8000) (5) Data frame sent\nI0512 13:32:45.277987 1424 log.go:172] (0xc000a3c370) Data frame received for 5\nI0512 13:32:45.278013 1424 log.go:172] (0xc0009b8000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0512 13:32:45.279516 1424 log.go:172] (0xc000a3c370) Data frame received for 1\nI0512 13:32:45.279530 1424 log.go:172] (0xc0008de640) (1) Data frame handling\nI0512 13:32:45.279549 1424 log.go:172] (0xc0008de640) (1) Data frame sent\nI0512 13:32:45.279571 1424 log.go:172] (0xc000a3c370) (0xc0008de640) Stream removed, broadcasting: 1\nI0512 13:32:45.279760 1424 log.go:172] (0xc000a3c370) Go away received\nI0512 13:32:45.279869 1424 log.go:172] (0xc000a3c370) (0xc0008de640) Stream removed, broadcasting: 1\nI0512 13:32:45.279879 1424 log.go:172] (0xc000a3c370) (0xc0008de6e0) Stream removed, broadcasting: 3\nI0512 13:32:45.279887 1424 log.go:172] (0xc000a3c370) (0xc0009b8000) Stream removed, broadcasting: 5\n" May 12 13:32:45.284: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 13:32:45.284: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 13:32:55.308: INFO: Waiting for StatefulSet statefulset-9852/ss2 to complete update May 12 13:32:55.308: INFO: Waiting for Pod statefulset-9852/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 13:32:55.308: INFO: Waiting for Pod statefulset-9852/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 13:33:05.315: INFO: Waiting for StatefulSet statefulset-9852/ss2 to complete update May 12 13:33:05.315: INFO: Waiting for Pod statefulset-9852/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 13:33:05.315: INFO: Waiting for Pod statefulset-9852/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 13:33:15.313: INFO: Waiting for StatefulSet statefulset-9852/ss2 to complete update May 12 13:33:15.313: INFO: Waiting for Pod statefulset-9852/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 13:33:25.338: INFO: Waiting for StatefulSet statefulset-9852/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 12 13:33:35.316: INFO: Deleting all statefulset in ns statefulset-9852 May 12 13:33:35.319: INFO: Scaling statefulset ss2 to 0 May 12 13:34:05.341: INFO: Waiting for statefulset status.replicas updated to 0 May 12 13:34:05.344: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:34:05.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9852" for this suite. 
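The rollback wait-loop above repeats "Waiting for Pod ... to have revision ... update revision ..." until every pod reports the target controller revision. Which pods are still pending at a given poll can be extracted with a short script; a hedged sketch (the sample lines are abridged from this log, and the helper is illustrative only):

```python
import re

# Abridged wait-loop entries from the StatefulSet rollback above.
log = """\
May 12 13:32:55.308: INFO: Waiting for Pod statefulset-9852/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
May 12 13:32:55.308: INFO: Waiting for Pod statefulset-9852/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
"""

# Each match captures the pod name plus the two controller revisions in the message.
pat = re.compile(r"Waiting for Pod (\S+) to have revision (\S+) update revision (\S+)")
pending = sorted({m[0] for m in pat.findall(log)})
print(pending)  # → ['statefulset-9852/ss2-0', 'statefulset-9852/ss2-1']
```

Once the loop stops logging a pod, that pod has converged on the target revision.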
May 12 13:34:17.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:34:17.471: INFO: namespace statefulset-9852 deletion completed in 12.078776245s • [SLOW TEST:198.342 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:34:17.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-fd72a127-bf72-48a5-a60b-aff487ee8411 STEP: Creating a pod to test consume secrets May 12 13:34:17.718: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-498805c2-b888-43d1-bb78-05e2bab92412" in namespace "projected-2699" to be "success or failure" May 12 13:34:17.747: INFO: Pod "pod-projected-secrets-498805c2-b888-43d1-bb78-05e2bab92412": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.658365ms May 12 13:34:19.750: INFO: Pod "pod-projected-secrets-498805c2-b888-43d1-bb78-05e2bab92412": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032147636s May 12 13:34:21.754: INFO: Pod "pod-projected-secrets-498805c2-b888-43d1-bb78-05e2bab92412": Phase="Running", Reason="", readiness=true. Elapsed: 4.036192953s May 12 13:34:23.758: INFO: Pod "pod-projected-secrets-498805c2-b888-43d1-bb78-05e2bab92412": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040019688s STEP: Saw pod success May 12 13:34:23.758: INFO: Pod "pod-projected-secrets-498805c2-b888-43d1-bb78-05e2bab92412" satisfied condition "success or failure" May 12 13:34:23.761: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-498805c2-b888-43d1-bb78-05e2bab92412 container projected-secret-volume-test: STEP: delete the pod May 12 13:34:24.029: INFO: Waiting for pod pod-projected-secrets-498805c2-b888-43d1-bb78-05e2bab92412 to disappear May 12 13:34:24.406: INFO: Pod pod-projected-secrets-498805c2-b888-43d1-bb78-05e2bab92412 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:34:24.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2699" for this suite. 
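The pod-phase poll entries above (`Phase="Pending" ... Elapsed: ...`) form a compact timeline of a test pod's lifecycle from Pending through Running to Succeeded. They can be parsed into (phase, elapsed) pairs with a few lines; a minimal sketch, with the sample lines abridged from this log and the pod name shortened for brevity:

```python
import re

# Abridged phase-poll entries; "p" stands in for the full generated pod name.
log = """\
May 12 13:34:17.747: INFO: Pod "p": Phase="Pending", Reason="", readiness=false. Elapsed: 28.658365ms
May 12 13:34:21.754: INFO: Pod "p": Phase="Running", Reason="", readiness=true. Elapsed: 4.036192953s
May 12 13:34:23.758: INFO: Pod "p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040019688s
"""

# Pull out each (phase, elapsed-time) pair in poll order.
pat = re.compile(r'Phase="(\w+)".*Elapsed: (\S+)')
timeline = pat.findall(log)
print(timeline[-1])  # → ('Succeeded', '6.040019688s')
```

The final pair is what the framework compares against its "success or failure" condition before tearing the pod down.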
May 12 13:34:30.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:34:30.731: INFO: namespace projected-2699 deletion completed in 6.321367926s • [SLOW TEST:13.259 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:34:30.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 12 13:34:37.363: INFO: Successfully updated pod "labelsupdate567ddd85-635d-4d88-a780-cc73479142a8" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:34:39.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-864" for this suite. 
May 12 13:35:03.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:35:03.541: INFO: namespace downward-api-864 deletion completed in 24.093416117s
• [SLOW TEST:32.810 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:35:03.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 12 13:35:04.756: INFO: Waiting up to 5m0s for pod "pod-a027f40e-7d99-48f1-8829-fba0d4df2de8" in namespace "emptydir-7469" to be "success or failure"
May 12 13:35:04.759: INFO: Pod "pod-a027f40e-7d99-48f1-8829-fba0d4df2de8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.115436ms
May 12 13:35:06.766: INFO: Pod "pod-a027f40e-7d99-48f1-8829-fba0d4df2de8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010171797s
May 12 13:35:08.770: INFO: Pod "pod-a027f40e-7d99-48f1-8829-fba0d4df2de8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014297914s
May 12 13:35:10.774: INFO: Pod "pod-a027f40e-7d99-48f1-8829-fba0d4df2de8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018517189s
STEP: Saw pod success
May 12 13:35:10.774: INFO: Pod "pod-a027f40e-7d99-48f1-8829-fba0d4df2de8" satisfied condition "success or failure"
May 12 13:35:10.777: INFO: Trying to get logs from node iruya-worker pod pod-a027f40e-7d99-48f1-8829-fba0d4df2de8 container test-container:
STEP: delete the pod
May 12 13:35:10.887: INFO: Waiting for pod pod-a027f40e-7d99-48f1-8829-fba0d4df2de8 to disappear
May 12 13:35:10.914: INFO: Pod pod-a027f40e-7d99-48f1-8829-fba0d4df2de8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:35:10.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7469" for this suite.
May 12 13:35:16.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:35:17.059: INFO: namespace emptydir-7469 deletion completed in 6.140728762s
• [SLOW TEST:13.518 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:35:17.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-7d34af17-9392-47ad-a0a8-ab02b8fa6f2a
STEP: Creating a pod to test consume secrets
May 12 13:35:18.207: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1b7fa28a-3d04-443f-972c-f3c84004aa14" in namespace "projected-5808" to be "success or failure"
May 12 13:35:18.502: INFO: Pod "pod-projected-secrets-1b7fa28a-3d04-443f-972c-f3c84004aa14": Phase="Pending", Reason="", readiness=false. Elapsed: 295.221112ms
May 12 13:35:20.505: INFO: Pod "pod-projected-secrets-1b7fa28a-3d04-443f-972c-f3c84004aa14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298176867s
May 12 13:35:22.510: INFO: Pod "pod-projected-secrets-1b7fa28a-3d04-443f-972c-f3c84004aa14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302706845s
May 12 13:35:24.513: INFO: Pod "pod-projected-secrets-1b7fa28a-3d04-443f-972c-f3c84004aa14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.306259375s
STEP: Saw pod success
May 12 13:35:24.513: INFO: Pod "pod-projected-secrets-1b7fa28a-3d04-443f-972c-f3c84004aa14" satisfied condition "success or failure"
May 12 13:35:24.515: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-1b7fa28a-3d04-443f-972c-f3c84004aa14 container projected-secret-volume-test:
STEP: delete the pod
May 12 13:35:24.661: INFO: Waiting for pod pod-projected-secrets-1b7fa28a-3d04-443f-972c-f3c84004aa14 to disappear
May 12 13:35:24.896: INFO: Pod pod-projected-secrets-1b7fa28a-3d04-443f-972c-f3c84004aa14 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:35:24.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5808" for this suite.
May 12 13:35:32.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:35:33.064: INFO: namespace projected-5808 deletion completed in 8.164638744s
• [SLOW TEST:16.005 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:35:33.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 12 13:35:43.876: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 12 13:35:43.915: INFO: Pod pod-with-prestop-http-hook still exists
May 12 13:35:45.915: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 12 13:35:46.311: INFO: Pod pod-with-prestop-http-hook still exists
May 12 13:35:47.915: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 12 13:35:48.425: INFO: Pod pod-with-prestop-http-hook still exists
May 12 13:35:49.915: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 12 13:35:50.251: INFO: Pod pod-with-prestop-http-hook still exists
May 12 13:35:51.915: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 12 13:35:51.919: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:35:51.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6859" for this suite.
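The "Waiting for pod ... to disappear" lines above come from a fixed-interval poll loop: check a condition, sleep, and retry until the pod is gone or a timeout expires. A minimal Python sketch of that pattern (a generic helper of my own, not the e2e framework's actual code) looks like:

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Hypothetical usage, assuming some pods.get(name) that returns None
# once the pod has been deleted:
#   wait_for(lambda: pods.get("pod-with-prestop-http-hook") is None)
```

The 2-second cadence visible in the timestamps above corresponds to `interval=2.0`.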
May 12 13:36:16.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:36:16.179: INFO: namespace container-lifecycle-hook-6859 deletion completed in 24.248064717s
• [SLOW TEST:43.114 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:36:16.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 12 13:36:29.168: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 12 13:36:29.347: INFO: Pod pod-with-poststart-http-hook still exists
May 12 13:36:31.347: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 12 13:36:31.443: INFO: Pod pod-with-poststart-http-hook still exists
May 12 13:36:33.347: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 12 13:36:33.350: INFO: Pod pod-with-poststart-http-hook still exists
May 12 13:36:35.347: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 12 13:36:35.389: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:36:35.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8775" for this suite.
May 12 13:36:59.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:36:59.487: INFO: namespace container-lifecycle-hook-8775 deletion completed in 24.094490277s
• [SLOW TEST:43.308 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:36:59.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
May 12 13:37:00.564: INFO: created pod pod-service-account-defaultsa
May 12 13:37:00.565: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 12 13:37:00.648: INFO: created pod pod-service-account-mountsa
May 12 13:37:00.648: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 12 13:37:00.730: INFO: created pod pod-service-account-nomountsa
May 12 13:37:00.730: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 12 13:37:00.816: INFO: created pod pod-service-account-defaultsa-mountspec
May 12 13:37:00.816: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 12 13:37:00.960: INFO: created pod pod-service-account-mountsa-mountspec
May 12 13:37:00.960: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 12 13:37:01.358: INFO: created pod pod-service-account-nomountsa-mountspec
May 12 13:37:01.358: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 12 13:37:01.581: INFO: created pod pod-service-account-defaultsa-nomountspec
May 12 13:37:01.581: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 12 13:37:01.947: INFO: created pod pod-service-account-mountsa-nomountspec
May 12 13:37:01.947: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 12 13:37:02.151: INFO: created pod pod-service-account-nomountsa-nomountspec
May 12 13:37:02.151: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:37:02.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1455" for this suite.
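The nine pods in the ServiceAccounts test above exercise Kubernetes' automount precedence rule: the pod spec's `automountServiceAccountToken` overrides the ServiceAccount's setting, and mounting defaults to true when neither is set. A small Python sketch of that decision (function and parameter names are mine, chosen for illustration):

```python
def token_automounted(sa_automount, pod_automount):
    """Effective decision for mounting a service account token volume.
    `pod_automount` models pod.spec.automountServiceAccountToken and
    `sa_automount` models the ServiceAccount's automountServiceAccountToken;
    each is True, False, or None (unset). The pod-level setting wins,
    then the ServiceAccount's, and the overall default is True."""
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True
```

Evaluating this against the matrix in the log reproduces every "token volume mount: true/false" line, e.g. `nomountsa-mountspec` mounts the token (pod True beats SA False) while `mountsa-nomountspec` does not.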
May 12 13:37:33.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:37:33.226: INFO: namespace svcaccounts-1455 deletion completed in 30.520349331s
• [SLOW TEST:33.739 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:37:33.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 13:37:34.230: INFO: Create a RollingUpdate DaemonSet
May 12 13:37:34.233: INFO: Check that daemon pods launch on every node of the cluster
May 12 13:37:34.390: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:34.432: INFO: Number of nodes with available pods: 0
May 12 13:37:34.432: INFO: Node iruya-worker is running more than one daemon pod
May 12 13:37:35.437: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:35.440: INFO: Number of nodes with available pods: 0
May 12 13:37:35.440: INFO: Node iruya-worker is running more than one daemon pod
May 12 13:37:36.436: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:36.440: INFO: Number of nodes with available pods: 0
May 12 13:37:36.440: INFO: Node iruya-worker is running more than one daemon pod
May 12 13:37:37.803: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:37.895: INFO: Number of nodes with available pods: 0
May 12 13:37:37.895: INFO: Node iruya-worker is running more than one daemon pod
May 12 13:37:38.436: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:38.440: INFO: Number of nodes with available pods: 0
May 12 13:37:38.440: INFO: Node iruya-worker is running more than one daemon pod
May 12 13:37:39.584: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:39.851: INFO: Number of nodes with available pods: 0
May 12 13:37:39.851: INFO: Node iruya-worker is running more than one daemon pod
May 12 13:37:40.456: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:40.459: INFO: Number of nodes with available pods: 1
May 12 13:37:40.459: INFO: Node iruya-worker is running more than one daemon pod
May 12 13:37:41.438: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:41.442: INFO: Number of nodes with available pods: 1
May 12 13:37:41.442: INFO: Node iruya-worker is running more than one daemon pod
May 12 13:37:42.436: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:42.440: INFO: Number of nodes with available pods: 2
May 12 13:37:42.440: INFO: Number of running nodes: 2, number of available pods: 2
May 12 13:37:42.440: INFO: Update the DaemonSet to trigger a rollout
May 12 13:37:42.447: INFO: Updating DaemonSet daemon-set
May 12 13:37:46.811: INFO: Roll back the DaemonSet before rollout is complete
May 12 13:37:46.817: INFO: Updating DaemonSet daemon-set
May 12 13:37:46.817: INFO: Make sure DaemonSet rollback is complete
May 12 13:37:46.960: INFO: Wrong image for pod: daemon-set-grg6s. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
May 12 13:37:46.960: INFO: Pod daemon-set-grg6s is not available
May 12 13:37:46.964: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:47.968: INFO: Wrong image for pod: daemon-set-grg6s. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
May 12 13:37:47.968: INFO: Pod daemon-set-grg6s is not available
May 12 13:37:47.973: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:48.966: INFO: Wrong image for pod: daemon-set-grg6s. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
May 12 13:37:48.966: INFO: Pod daemon-set-grg6s is not available
May 12 13:37:48.968: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:49.976: INFO: Wrong image for pod: daemon-set-grg6s. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
May 12 13:37:49.976: INFO: Pod daemon-set-grg6s is not available
May 12 13:37:49.979: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:51.199: INFO: Wrong image for pod: daemon-set-grg6s. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
May 12 13:37:51.199: INFO: Pod daemon-set-grg6s is not available
May 12 13:37:51.203: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 13:37:52.210: INFO: Pod daemon-set-tqd96 is not available
May 12 13:37:52.475: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7170, will wait for the garbage collector to delete the pods
May 12 13:37:52.729: INFO: Deleting DaemonSet.extensions daemon-set took: 138.277926ms
May 12 13:37:54.829: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.10026908s
May 12 13:38:02.232: INFO: Number of nodes with available pods: 0
May 12 13:38:02.232: INFO: Number of running nodes: 0, number of available pods: 0
May 12 13:38:02.234: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7170/daemonsets","resourceVersion":"10488554"},"items":null}
May 12 13:38:02.235: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7170/pods","resourceVersion":"10488554"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:38:02.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7170" for this suite.
May 12 13:38:10.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:38:11.264: INFO: namespace daemonsets-7170 deletion completed in 9.019782886s
• [SLOW TEST:38.038 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:38:11.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 12 13:38:11.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2909'
May 12 13:38:11.709: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 12 13:38:11.709: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
May 12 13:38:11.817: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
May 12 13:38:11.854: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
May 12 13:38:11.901: INFO: scanned /root for discovery docs:
May 12 13:38:11.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2909'
May 12 13:38:30.654: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 12 13:38:30.655: INFO: stdout: "Created e2e-test-nginx-rc-fcc5c69dcf67d0284b0c8c37422b9219\nScaling up e2e-test-nginx-rc-fcc5c69dcf67d0284b0c8c37422b9219 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-fcc5c69dcf67d0284b0c8c37422b9219 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-fcc5c69dcf67d0284b0c8c37422b9219 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
May 12 13:38:30.655: INFO: stdout: "Created e2e-test-nginx-rc-fcc5c69dcf67d0284b0c8c37422b9219\nScaling up e2e-test-nginx-rc-fcc5c69dcf67d0284b0c8c37422b9219 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-fcc5c69dcf67d0284b0c8c37422b9219 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-fcc5c69dcf67d0284b0c8c37422b9219 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
May 12 13:38:30.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2909'
May 12 13:38:30.739: INFO: stderr: ""
May 12 13:38:30.739: INFO: stdout: "e2e-test-nginx-rc-fcc5c69dcf67d0284b0c8c37422b9219-pblzj "
May 12 13:38:30.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-fcc5c69dcf67d0284b0c8c37422b9219-pblzj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2909'
May 12 13:38:30.884: INFO: stderr: ""
May 12 13:38:30.884: INFO: stdout: "true"
May 12 13:38:30.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-fcc5c69dcf67d0284b0c8c37422b9219-pblzj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2909'
May 12 13:38:30.973: INFO: stderr: ""
May 12 13:38:30.973: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
May 12 13:38:30.973: INFO: e2e-test-nginx-rc-fcc5c69dcf67d0284b0c8c37422b9219-pblzj is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
May 12 13:38:30.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2909'
May 12 13:38:31.071: INFO: stderr: ""
May 12 13:38:31.071: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:38:31.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2909" for this suite.
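The go-template invocation in the rolling-update test above answers one question: does the pod report a container status named `e2e-test-nginx-rc` in the `running` state? The same predicate can be expressed over a pod object decoded from `kubectl get pod -o json`; this is a sketch of my own, not the e2e framework's code:

```python
def container_running(pod, name):
    """True iff `pod` (a dict in the shape returned by `kubectl get pod
    -o json`) reports a containerStatus with the given name whose state
    map contains a `running` entry."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False
```

This mirrors the template's guards: a missing `status.containerStatuses` yields False rather than an error, matching the `exists` checks in the go-template.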
May 12 13:38:53.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:38:53.267: INFO: namespace kubectl-2909 deletion completed in 22.090503685s
• [SLOW TEST:42.002 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:38:53.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May 12 13:39:05.457: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-65 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:39:05.457: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:39:05.487556 6 log.go:172] (0xc0023a06e0) (0xc0022866e0) Create stream
I0512 13:39:05.487579 6 log.go:172] (0xc0023a06e0) (0xc0022866e0) Stream added, broadcasting: 1
I0512 13:39:05.489326 6 log.go:172] (0xc0023a06e0) Reply frame received for 1
I0512 13:39:05.489363 6 log.go:172] (0xc0023a06e0) (0xc002286780) Create stream
I0512 13:39:05.489373 6 log.go:172] (0xc0023a06e0) (0xc002286780) Stream added, broadcasting: 3
I0512 13:39:05.490153 6 log.go:172] (0xc0023a06e0) Reply frame received for 3
I0512 13:39:05.490195 6 log.go:172] (0xc0023a06e0) (0xc0020375e0) Create stream
I0512 13:39:05.490215 6 log.go:172] (0xc0023a06e0) (0xc0020375e0) Stream added, broadcasting: 5
I0512 13:39:05.490915 6 log.go:172] (0xc0023a06e0) Reply frame received for 5
I0512 13:39:05.551090 6 log.go:172] (0xc0023a06e0) Data frame received for 3
I0512 13:39:05.551118 6 log.go:172] (0xc002286780) (3) Data frame handling
I0512 13:39:05.551130 6 log.go:172] (0xc002286780) (3) Data frame sent
I0512 13:39:05.551137 6 log.go:172] (0xc0023a06e0) Data frame received for 3
I0512 13:39:05.551144 6 log.go:172] (0xc002286780) (3) Data frame handling
I0512 13:39:05.551161 6 log.go:172] (0xc0023a06e0) Data frame received for 5
I0512 13:39:05.551168 6 log.go:172] (0xc0020375e0) (5) Data frame handling
I0512 13:39:05.552209 6 log.go:172] (0xc0023a06e0) Data frame received for 1
I0512 13:39:05.552245 6 log.go:172] (0xc0022866e0) (1) Data frame handling
I0512 13:39:05.552274 6 log.go:172] (0xc0022866e0) (1) Data frame sent
I0512 13:39:05.552310 6 log.go:172] (0xc0023a06e0) (0xc0022866e0) Stream removed, broadcasting: 1
I0512 13:39:05.552337 6 log.go:172] (0xc0023a06e0) Go away received
I0512 13:39:05.552430 6 log.go:172] (0xc0023a06e0) (0xc0022866e0) Stream removed, broadcasting: 1
I0512 13:39:05.552458 6 log.go:172] (0xc0023a06e0) (0xc002286780) Stream removed, broadcasting: 3
I0512 13:39:05.552479 6 log.go:172] (0xc0023a06e0) (0xc0020375e0) Stream removed, broadcasting: 5
May 12 13:39:05.552: INFO: Exec stderr: ""
May 12 13:39:05.552: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-65 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:39:05.552: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:39:05.582327 6 log.go:172] (0xc000c36a50) (0xc001af2be0) Create stream
I0512 13:39:05.582356 6 log.go:172] (0xc000c36a50) (0xc001af2be0) Stream added, broadcasting: 1
I0512 13:39:05.583940 6 log.go:172] (0xc000c36a50) Reply frame received for 1
I0512 13:39:05.583978 6 log.go:172] (0xc000c36a50) (0xc001696c80) Create stream
I0512 13:39:05.583988 6 log.go:172] (0xc000c36a50) (0xc001696c80) Stream added, broadcasting: 3
I0512 13:39:05.584664 6 log.go:172] (0xc000c36a50) Reply frame received for 3
I0512 13:39:05.584689 6 log.go:172] (0xc000c36a50) (0xc002037680) Create stream
I0512 13:39:05.584700 6 log.go:172] (0xc000c36a50) (0xc002037680) Stream added, broadcasting: 5
I0512 13:39:05.585654 6 log.go:172] (0xc000c36a50) Reply frame received for 5
I0512 13:39:05.639513 6 log.go:172] (0xc000c36a50) Data frame received for 3
I0512 13:39:05.639541 6 log.go:172] (0xc001696c80) (3) Data frame handling
I0512 13:39:05.639552 6 log.go:172] (0xc001696c80) (3) Data frame sent
I0512 13:39:05.639567 6 log.go:172] (0xc000c36a50) Data frame received for 3
I0512 13:39:05.639575 6 log.go:172] (0xc001696c80) (3) Data frame handling
I0512 13:39:05.639596 6 log.go:172] (0xc000c36a50) Data frame received for 5
I0512 13:39:05.639603 6 log.go:172] (0xc002037680) (5) Data frame handling
I0512 13:39:05.642353 6 log.go:172] (0xc000c36a50) Data frame received for 1
I0512 13:39:05.642388 6 log.go:172] (0xc001af2be0) (1) Data frame handling
I0512 13:39:05.642401 6 log.go:172] (0xc001af2be0) (1) Data frame sent
I0512 13:39:05.642411 6 log.go:172] (0xc000c36a50) (0xc001af2be0) Stream removed, broadcasting: 1
I0512 13:39:05.642425 6 log.go:172] (0xc000c36a50) Go away received
I0512 13:39:05.642591 6 log.go:172] (0xc000c36a50) (0xc001af2be0) Stream removed, broadcasting: 1
I0512 13:39:05.642602 6 log.go:172] (0xc000c36a50) (0xc001696c80) Stream removed, broadcasting: 3
I0512 13:39:05.642609 6 log.go:172] (0xc000c36a50) (0xc002037680) Stream removed, broadcasting: 5
May 12 13:39:05.642: INFO: Exec stderr: ""
May 12 13:39:05.642: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-65 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:39:05.642: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:39:05.670285 6 log.go:172] (0xc0023a1600) (0xc002286b40) Create stream
I0512 13:39:05.670307 6 log.go:172] (0xc0023a1600) (0xc002286b40) Stream added, broadcasting: 1
I0512 13:39:05.672239 6 log.go:172] (0xc0023a1600) Reply frame received for 1
I0512 13:39:05.672279 6 log.go:172] (0xc0023a1600) (0xc001696e60) Create stream
I0512 13:39:05.672296 6 log.go:172] (0xc0023a1600) (0xc001696e60) Stream added, broadcasting: 3
I0512 13:39:05.673010 6 log.go:172] (0xc0023a1600) Reply frame received for 3
I0512 13:39:05.673050 6 log.go:172] (0xc0023a1600) (0xc002037720) Create stream
I0512 13:39:05.673060 6 log.go:172] (0xc0023a1600) (0xc002037720) Stream added, broadcasting: 5
I0512 13:39:05.674345 6 log.go:172] (0xc0023a1600) Reply frame received for 5
I0512 13:39:05.739753 6 log.go:172] (0xc0023a1600) Data frame received for 3
I0512 13:39:05.739785 6 log.go:172] (0xc001696e60) (3) Data frame handling
I0512 13:39:05.739808 6 log.go:172] (0xc001696e60) (3) Data frame sent
I0512 13:39:05.739915 6 log.go:172] (0xc0023a1600) Data frame received for 3
I0512 13:39:05.739943 6 log.go:172] (0xc001696e60) (3) Data frame handling
I0512 13:39:05.739967 6 log.go:172] (0xc0023a1600) Data frame received for 5
I0512 13:39:05.739978 6 log.go:172] (0xc002037720) (5) Data frame handling
I0512 13:39:05.741429 6 log.go:172] (0xc0023a1600) Data frame
received for 1 I0512 13:39:05.741455 6 log.go:172] (0xc002286b40) (1) Data frame handling I0512 13:39:05.741483 6 log.go:172] (0xc002286b40) (1) Data frame sent I0512 13:39:05.741498 6 log.go:172] (0xc0023a1600) (0xc002286b40) Stream removed, broadcasting: 1 I0512 13:39:05.741537 6 log.go:172] (0xc0023a1600) Go away received I0512 13:39:05.741579 6 log.go:172] (0xc0023a1600) (0xc002286b40) Stream removed, broadcasting: 1 I0512 13:39:05.741593 6 log.go:172] (0xc0023a1600) (0xc001696e60) Stream removed, broadcasting: 3 I0512 13:39:05.741610 6 log.go:172] (0xc0023a1600) (0xc002037720) Stream removed, broadcasting: 5 May 12 13:39:05.741: INFO: Exec stderr: "" May 12 13:39:05.741: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-65 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 13:39:05.741: INFO: >>> kubeConfig: /root/.kube/config I0512 13:39:05.766867 6 log.go:172] (0xc002b15130) (0xc002037ae0) Create stream I0512 13:39:05.766894 6 log.go:172] (0xc002b15130) (0xc002037ae0) Stream added, broadcasting: 1 I0512 13:39:05.768601 6 log.go:172] (0xc002b15130) Reply frame received for 1 I0512 13:39:05.768633 6 log.go:172] (0xc002b15130) (0xc002286c80) Create stream I0512 13:39:05.768644 6 log.go:172] (0xc002b15130) (0xc002286c80) Stream added, broadcasting: 3 I0512 13:39:05.769540 6 log.go:172] (0xc002b15130) Reply frame received for 3 I0512 13:39:05.769574 6 log.go:172] (0xc002b15130) (0xc002286dc0) Create stream I0512 13:39:05.769592 6 log.go:172] (0xc002b15130) (0xc002286dc0) Stream added, broadcasting: 5 I0512 13:39:05.770201 6 log.go:172] (0xc002b15130) Reply frame received for 5 I0512 13:39:05.820498 6 log.go:172] (0xc002b15130) Data frame received for 5 I0512 13:39:05.820538 6 log.go:172] (0xc002b15130) Data frame received for 3 I0512 13:39:05.820571 6 log.go:172] (0xc002286c80) (3) Data frame handling I0512 13:39:05.820586 6 log.go:172] (0xc002286dc0) (5) 
Data frame handling I0512 13:39:05.820622 6 log.go:172] (0xc002286c80) (3) Data frame sent I0512 13:39:05.820638 6 log.go:172] (0xc002b15130) Data frame received for 3 I0512 13:39:05.820645 6 log.go:172] (0xc002286c80) (3) Data frame handling I0512 13:39:05.821802 6 log.go:172] (0xc002b15130) Data frame received for 1 I0512 13:39:05.821811 6 log.go:172] (0xc002037ae0) (1) Data frame handling I0512 13:39:05.821816 6 log.go:172] (0xc002037ae0) (1) Data frame sent I0512 13:39:05.821822 6 log.go:172] (0xc002b15130) (0xc002037ae0) Stream removed, broadcasting: 1 I0512 13:39:05.821870 6 log.go:172] (0xc002b15130) (0xc002037ae0) Stream removed, broadcasting: 1 I0512 13:39:05.821878 6 log.go:172] (0xc002b15130) (0xc002286c80) Stream removed, broadcasting: 3 I0512 13:39:05.821932 6 log.go:172] (0xc002b15130) Go away received I0512 13:39:05.821958 6 log.go:172] (0xc002b15130) (0xc002286dc0) Stream removed, broadcasting: 5 May 12 13:39:05.821: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 12 13:39:05.822: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-65 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 13:39:05.822: INFO: >>> kubeConfig: /root/.kube/config I0512 13:39:05.852944 6 log.go:172] (0xc000c37550) (0xc001af2f00) Create stream I0512 13:39:05.852966 6 log.go:172] (0xc000c37550) (0xc001af2f00) Stream added, broadcasting: 1 I0512 13:39:05.855067 6 log.go:172] (0xc000c37550) Reply frame received for 1 I0512 13:39:05.855093 6 log.go:172] (0xc000c37550) (0xc001af2fa0) Create stream I0512 13:39:05.855101 6 log.go:172] (0xc000c37550) (0xc001af2fa0) Stream added, broadcasting: 3 I0512 13:39:05.855739 6 log.go:172] (0xc000c37550) Reply frame received for 3 I0512 13:39:05.855766 6 log.go:172] (0xc000c37550) (0xc001696fa0) Create stream I0512 13:39:05.855776 6 log.go:172] (0xc000c37550) 
(0xc001696fa0) Stream added, broadcasting: 5 I0512 13:39:05.856489 6 log.go:172] (0xc000c37550) Reply frame received for 5 I0512 13:39:05.919747 6 log.go:172] (0xc000c37550) Data frame received for 5 I0512 13:39:05.919787 6 log.go:172] (0xc000c37550) Data frame received for 3 I0512 13:39:05.919845 6 log.go:172] (0xc001af2fa0) (3) Data frame handling I0512 13:39:05.919862 6 log.go:172] (0xc001af2fa0) (3) Data frame sent I0512 13:39:05.919876 6 log.go:172] (0xc000c37550) Data frame received for 3 I0512 13:39:05.919894 6 log.go:172] (0xc001af2fa0) (3) Data frame handling I0512 13:39:05.919923 6 log.go:172] (0xc001696fa0) (5) Data frame handling I0512 13:39:05.921467 6 log.go:172] (0xc000c37550) Data frame received for 1 I0512 13:39:05.921478 6 log.go:172] (0xc001af2f00) (1) Data frame handling I0512 13:39:05.921484 6 log.go:172] (0xc001af2f00) (1) Data frame sent I0512 13:39:05.921690 6 log.go:172] (0xc000c37550) (0xc001af2f00) Stream removed, broadcasting: 1 I0512 13:39:05.921858 6 log.go:172] (0xc000c37550) (0xc001af2f00) Stream removed, broadcasting: 1 I0512 13:39:05.921898 6 log.go:172] (0xc000c37550) (0xc001af2fa0) Stream removed, broadcasting: 3 I0512 13:39:05.921926 6 log.go:172] (0xc000c37550) (0xc001696fa0) Stream removed, broadcasting: 5 May 12 13:39:05.921: INFO: Exec stderr: "" May 12 13:39:05.921: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-65 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0512 13:39:05.921983 6 log.go:172] (0xc000c37550) Go away received May 12 13:39:05.921: INFO: >>> kubeConfig: /root/.kube/config I0512 13:39:05.950144 6 log.go:172] (0xc002f9c160) (0xc001af3360) Create stream I0512 13:39:05.950168 6 log.go:172] (0xc002f9c160) (0xc001af3360) Stream added, broadcasting: 1 I0512 13:39:05.952315 6 log.go:172] (0xc002f9c160) Reply frame received for 1 I0512 13:39:05.952354 6 log.go:172] (0xc002f9c160) (0xc002286f00) Create stream 
I0512 13:39:05.952370 6 log.go:172] (0xc002f9c160) (0xc002286f00) Stream added, broadcasting: 3 I0512 13:39:05.953332 6 log.go:172] (0xc002f9c160) Reply frame received for 3 I0512 13:39:05.953372 6 log.go:172] (0xc002f9c160) (0xc002286fa0) Create stream I0512 13:39:05.953387 6 log.go:172] (0xc002f9c160) (0xc002286fa0) Stream added, broadcasting: 5 I0512 13:39:05.954127 6 log.go:172] (0xc002f9c160) Reply frame received for 5 I0512 13:39:06.003270 6 log.go:172] (0xc002f9c160) Data frame received for 3 I0512 13:39:06.003318 6 log.go:172] (0xc002286f00) (3) Data frame handling I0512 13:39:06.003343 6 log.go:172] (0xc002286f00) (3) Data frame sent I0512 13:39:06.003430 6 log.go:172] (0xc002f9c160) Data frame received for 5 I0512 13:39:06.003466 6 log.go:172] (0xc002286fa0) (5) Data frame handling I0512 13:39:06.003508 6 log.go:172] (0xc002f9c160) Data frame received for 3 I0512 13:39:06.003532 6 log.go:172] (0xc002286f00) (3) Data frame handling I0512 13:39:06.004733 6 log.go:172] (0xc002f9c160) Data frame received for 1 I0512 13:39:06.004743 6 log.go:172] (0xc001af3360) (1) Data frame handling I0512 13:39:06.004748 6 log.go:172] (0xc001af3360) (1) Data frame sent I0512 13:39:06.004804 6 log.go:172] (0xc002f9c160) (0xc001af3360) Stream removed, broadcasting: 1 I0512 13:39:06.004870 6 log.go:172] (0xc002f9c160) (0xc001af3360) Stream removed, broadcasting: 1 I0512 13:39:06.004885 6 log.go:172] (0xc002f9c160) (0xc002286f00) Stream removed, broadcasting: 3 I0512 13:39:06.004894 6 log.go:172] (0xc002f9c160) (0xc002286fa0) Stream removed, broadcasting: 5 May 12 13:39:06.004: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 12 13:39:06.004: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-65 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 13:39:06.004: INFO: >>> kubeConfig: 
/root/.kube/config I0512 13:39:06.004998 6 log.go:172] (0xc002f9c160) Go away received I0512 13:39:06.055354 6 log.go:172] (0xc00304ce70) (0xc002287540) Create stream I0512 13:39:06.055400 6 log.go:172] (0xc00304ce70) (0xc002287540) Stream added, broadcasting: 1 I0512 13:39:06.058118 6 log.go:172] (0xc00304ce70) Reply frame received for 1 I0512 13:39:06.058147 6 log.go:172] (0xc00304ce70) (0xc001af34a0) Create stream I0512 13:39:06.058157 6 log.go:172] (0xc00304ce70) (0xc001af34a0) Stream added, broadcasting: 3 I0512 13:39:06.058806 6 log.go:172] (0xc00304ce70) Reply frame received for 3 I0512 13:39:06.058820 6 log.go:172] (0xc00304ce70) (0xc0022875e0) Create stream I0512 13:39:06.058826 6 log.go:172] (0xc00304ce70) (0xc0022875e0) Stream added, broadcasting: 5 I0512 13:39:06.059400 6 log.go:172] (0xc00304ce70) Reply frame received for 5 I0512 13:39:06.113382 6 log.go:172] (0xc00304ce70) Data frame received for 3 I0512 13:39:06.113405 6 log.go:172] (0xc001af34a0) (3) Data frame handling I0512 13:39:06.113422 6 log.go:172] (0xc001af34a0) (3) Data frame sent I0512 13:39:06.113454 6 log.go:172] (0xc00304ce70) Data frame received for 3 I0512 13:39:06.113466 6 log.go:172] (0xc001af34a0) (3) Data frame handling I0512 13:39:06.113482 6 log.go:172] (0xc00304ce70) Data frame received for 5 I0512 13:39:06.113491 6 log.go:172] (0xc0022875e0) (5) Data frame handling I0512 13:39:06.114848 6 log.go:172] (0xc00304ce70) Data frame received for 1 I0512 13:39:06.114875 6 log.go:172] (0xc002287540) (1) Data frame handling I0512 13:39:06.114895 6 log.go:172] (0xc002287540) (1) Data frame sent I0512 13:39:06.114910 6 log.go:172] (0xc00304ce70) (0xc002287540) Stream removed, broadcasting: 1 I0512 13:39:06.114930 6 log.go:172] (0xc00304ce70) Go away received I0512 13:39:06.115067 6 log.go:172] (0xc00304ce70) (0xc002287540) Stream removed, broadcasting: 1 I0512 13:39:06.115088 6 log.go:172] (0xc00304ce70) (0xc001af34a0) Stream removed, broadcasting: 3 I0512 13:39:06.115100 6 log.go:172] 
(0xc00304ce70) (0xc0022875e0) Stream removed, broadcasting: 5 May 12 13:39:06.115: INFO: Exec stderr: "" May 12 13:39:06.115: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-65 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 13:39:06.115: INFO: >>> kubeConfig: /root/.kube/config I0512 13:39:06.144042 6 log.go:172] (0xc002f9d6b0) (0xc001af3860) Create stream I0512 13:39:06.144069 6 log.go:172] (0xc002f9d6b0) (0xc001af3860) Stream added, broadcasting: 1 I0512 13:39:06.146569 6 log.go:172] (0xc002f9d6b0) Reply frame received for 1 I0512 13:39:06.146638 6 log.go:172] (0xc002f9d6b0) (0xc002037b80) Create stream I0512 13:39:06.146658 6 log.go:172] (0xc002f9d6b0) (0xc002037b80) Stream added, broadcasting: 3 I0512 13:39:06.147587 6 log.go:172] (0xc002f9d6b0) Reply frame received for 3 I0512 13:39:06.147628 6 log.go:172] (0xc002f9d6b0) (0xc002287680) Create stream I0512 13:39:06.147641 6 log.go:172] (0xc002f9d6b0) (0xc002287680) Stream added, broadcasting: 5 I0512 13:39:06.148377 6 log.go:172] (0xc002f9d6b0) Reply frame received for 5 I0512 13:39:06.213935 6 log.go:172] (0xc002f9d6b0) Data frame received for 5 I0512 13:39:06.213973 6 log.go:172] (0xc002287680) (5) Data frame handling I0512 13:39:06.214001 6 log.go:172] (0xc002f9d6b0) Data frame received for 3 I0512 13:39:06.214018 6 log.go:172] (0xc002037b80) (3) Data frame handling I0512 13:39:06.214035 6 log.go:172] (0xc002037b80) (3) Data frame sent I0512 13:39:06.214048 6 log.go:172] (0xc002f9d6b0) Data frame received for 3 I0512 13:39:06.214057 6 log.go:172] (0xc002037b80) (3) Data frame handling I0512 13:39:06.214925 6 log.go:172] (0xc002f9d6b0) Data frame received for 1 I0512 13:39:06.214939 6 log.go:172] (0xc001af3860) (1) Data frame handling I0512 13:39:06.214945 6 log.go:172] (0xc001af3860) (1) Data frame sent I0512 13:39:06.214952 6 log.go:172] (0xc002f9d6b0) (0xc001af3860) Stream removed, 
broadcasting: 1 I0512 13:39:06.214962 6 log.go:172] (0xc002f9d6b0) Go away received I0512 13:39:06.215034 6 log.go:172] (0xc002f9d6b0) (0xc001af3860) Stream removed, broadcasting: 1 I0512 13:39:06.215052 6 log.go:172] (0xc002f9d6b0) (0xc002037b80) Stream removed, broadcasting: 3 I0512 13:39:06.215064 6 log.go:172] (0xc002f9d6b0) (0xc002287680) Stream removed, broadcasting: 5 May 12 13:39:06.215: INFO: Exec stderr: "" May 12 13:39:06.215: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-65 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 13:39:06.215: INFO: >>> kubeConfig: /root/.kube/config I0512 13:39:06.243131 6 log.go:172] (0xc00304dad0) (0xc0022879a0) Create stream I0512 13:39:06.243151 6 log.go:172] (0xc00304dad0) (0xc0022879a0) Stream added, broadcasting: 1 I0512 13:39:06.249852 6 log.go:172] (0xc00304dad0) Reply frame received for 1 I0512 13:39:06.249884 6 log.go:172] (0xc00304dad0) (0xc001bb2140) Create stream I0512 13:39:06.249896 6 log.go:172] (0xc00304dad0) (0xc001bb2140) Stream added, broadcasting: 3 I0512 13:39:06.253300 6 log.go:172] (0xc00304dad0) Reply frame received for 3 I0512 13:39:06.253321 6 log.go:172] (0xc00304dad0) (0xc000bcc1e0) Create stream I0512 13:39:06.253329 6 log.go:172] (0xc00304dad0) (0xc000bcc1e0) Stream added, broadcasting: 5 I0512 13:39:06.253898 6 log.go:172] (0xc00304dad0) Reply frame received for 5 I0512 13:39:06.312326 6 log.go:172] (0xc00304dad0) Data frame received for 3 I0512 13:39:06.312366 6 log.go:172] (0xc001bb2140) (3) Data frame handling I0512 13:39:06.312396 6 log.go:172] (0xc001bb2140) (3) Data frame sent I0512 13:39:06.312413 6 log.go:172] (0xc00304dad0) Data frame received for 3 I0512 13:39:06.312424 6 log.go:172] (0xc001bb2140) (3) Data frame handling I0512 13:39:06.312470 6 log.go:172] (0xc00304dad0) Data frame received for 5 I0512 13:39:06.312570 6 log.go:172] (0xc000bcc1e0) (5) Data frame 
handling I0512 13:39:06.314096 6 log.go:172] (0xc00304dad0) Data frame received for 1 I0512 13:39:06.314127 6 log.go:172] (0xc0022879a0) (1) Data frame handling I0512 13:39:06.314147 6 log.go:172] (0xc0022879a0) (1) Data frame sent I0512 13:39:06.314167 6 log.go:172] (0xc00304dad0) (0xc0022879a0) Stream removed, broadcasting: 1 I0512 13:39:06.314225 6 log.go:172] (0xc00304dad0) Go away received I0512 13:39:06.314261 6 log.go:172] (0xc00304dad0) (0xc0022879a0) Stream removed, broadcasting: 1 I0512 13:39:06.314288 6 log.go:172] (0xc00304dad0) (0xc001bb2140) Stream removed, broadcasting: 3 I0512 13:39:06.314305 6 log.go:172] (0xc00304dad0) (0xc000bcc1e0) Stream removed, broadcasting: 5 May 12 13:39:06.314: INFO: Exec stderr: "" May 12 13:39:06.314: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-65 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 13:39:06.314: INFO: >>> kubeConfig: /root/.kube/config I0512 13:39:06.343856 6 log.go:172] (0xc0005b98c0) (0xc00161a460) Create stream I0512 13:39:06.343889 6 log.go:172] (0xc0005b98c0) (0xc00161a460) Stream added, broadcasting: 1 I0512 13:39:06.345604 6 log.go:172] (0xc0005b98c0) Reply frame received for 1 I0512 13:39:06.345652 6 log.go:172] (0xc0005b98c0) (0xc00197a000) Create stream I0512 13:39:06.345669 6 log.go:172] (0xc0005b98c0) (0xc00197a000) Stream added, broadcasting: 3 I0512 13:39:06.346439 6 log.go:172] (0xc0005b98c0) Reply frame received for 3 I0512 13:39:06.346463 6 log.go:172] (0xc0005b98c0) (0xc00161a500) Create stream I0512 13:39:06.346471 6 log.go:172] (0xc0005b98c0) (0xc00161a500) Stream added, broadcasting: 5 I0512 13:39:06.347161 6 log.go:172] (0xc0005b98c0) Reply frame received for 5 I0512 13:39:06.396750 6 log.go:172] (0xc0005b98c0) Data frame received for 5 I0512 13:39:06.396779 6 log.go:172] (0xc00161a500) (5) Data frame handling I0512 13:39:06.396815 6 log.go:172] (0xc0005b98c0) 
Data frame received for 3 I0512 13:39:06.396840 6 log.go:172] (0xc00197a000) (3) Data frame handling I0512 13:39:06.396853 6 log.go:172] (0xc00197a000) (3) Data frame sent I0512 13:39:06.396865 6 log.go:172] (0xc0005b98c0) Data frame received for 3 I0512 13:39:06.396875 6 log.go:172] (0xc00197a000) (3) Data frame handling I0512 13:39:06.398216 6 log.go:172] (0xc0005b98c0) Data frame received for 1 I0512 13:39:06.398229 6 log.go:172] (0xc00161a460) (1) Data frame handling I0512 13:39:06.398240 6 log.go:172] (0xc00161a460) (1) Data frame sent I0512 13:39:06.398378 6 log.go:172] (0xc0005b98c0) (0xc00161a460) Stream removed, broadcasting: 1 I0512 13:39:06.398409 6 log.go:172] (0xc0005b98c0) Go away received I0512 13:39:06.398448 6 log.go:172] (0xc0005b98c0) (0xc00161a460) Stream removed, broadcasting: 1 I0512 13:39:06.398460 6 log.go:172] (0xc0005b98c0) (0xc00197a000) Stream removed, broadcasting: 3 I0512 13:39:06.398469 6 log.go:172] (0xc0005b98c0) (0xc00161a500) Stream removed, broadcasting: 5 May 12 13:39:06.398: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:39:06.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-65" for this suite. 
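The exec checks above exercise one rule: the kubelet rewrites a container's /etc/hosts only when the pod does not use hostNetwork=true and the container does not mount /etc/hosts itself (busybox-3 does, so its file is left alone). A minimal sketch of that decision, as a hypothetical helper rather than the suite's actual code:

```python
def kubelet_manages_etc_hosts(host_network: bool, mount_paths: list) -> bool:
    """Mirror the conditions the test verifies (illustrative helper,
    not the e2e framework's implementation)."""
    if host_network:
        return False  # hostNetwork=true pods see the node's own /etc/hosts
    if "/etc/hosts" in mount_paths:
        return False  # an explicit /etc/hosts volumeMount (busybox-3) wins
    return True

# busybox-1/busybox-2 in test-pod: kubelet-managed
assert kubelet_manages_etc_hosts(False, []) is True
# busybox-3 mounts /etc/hosts itself: not managed
assert kubelet_manages_etc_hosts(False, ["/etc/hosts"]) is False
# any container in test-host-network-pod: not managed
assert kubelet_manages_etc_hosts(True, []) is False
```

This is why the test compares /etc/hosts against /etc/hosts-original in each container: only the managed copies should differ from the image's original file.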
May 12 13:39:48.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:39:48.473: INFO: namespace e2e-kubelet-etc-hosts-65 deletion completed in 42.07196688s
• [SLOW TEST:55.206 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:39:48.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 12 13:39:48.625: INFO: Waiting up to 5m0s for pod "downward-api-d2d28131-ae09-4316-9927-6388d670e72d" in namespace "downward-api-601" to be "success or failure"
May 12 13:39:48.634: INFO: Pod "downward-api-d2d28131-ae09-4316-9927-6388d670e72d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.689301ms
May 12 13:39:50.638: INFO: Pod "downward-api-d2d28131-ae09-4316-9927-6388d670e72d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012462095s
May 12 13:39:52.641: INFO: Pod "downward-api-d2d28131-ae09-4316-9927-6388d670e72d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01627022s
STEP: Saw pod success
May 12 13:39:52.641: INFO: Pod "downward-api-d2d28131-ae09-4316-9927-6388d670e72d" satisfied condition "success or failure"
May 12 13:39:52.644: INFO: Trying to get logs from node iruya-worker2 pod downward-api-d2d28131-ae09-4316-9927-6388d670e72d container dapi-container:
STEP: delete the pod
May 12 13:39:52.687: INFO: Waiting for pod downward-api-d2d28131-ae09-4316-9927-6388d670e72d to disappear
May 12 13:39:52.694: INFO: Pod downward-api-d2d28131-ae09-4316-9927-6388d670e72d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:39:52.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-601" for this suite.
May 12 13:39:58.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:39:58.942: INFO: namespace downward-api-601 deletion completed in 6.243561481s
• [SLOW TEST:10.469 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:39:58.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 13:40:05.392: INFO: Waiting up to 5m0s for pod "client-envvars-13da4574-db84-4bad-90ee-158f2057fb50" in namespace "pods-4160" to be "success or failure"
May 12 13:40:05.432: INFO: Pod "client-envvars-13da4574-db84-4bad-90ee-158f2057fb50": Phase="Pending", Reason="", readiness=false. Elapsed: 40.126171ms
May 12 13:40:07.436: INFO: Pod "client-envvars-13da4574-db84-4bad-90ee-158f2057fb50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043513534s
May 12 13:40:09.439: INFO: Pod "client-envvars-13da4574-db84-4bad-90ee-158f2057fb50": Phase="Running", Reason="", readiness=true. Elapsed: 4.047037749s
May 12 13:40:11.443: INFO: Pod "client-envvars-13da4574-db84-4bad-90ee-158f2057fb50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050411408s
STEP: Saw pod success
May 12 13:40:11.443: INFO: Pod "client-envvars-13da4574-db84-4bad-90ee-158f2057fb50" satisfied condition "success or failure"
May 12 13:40:11.445: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-13da4574-db84-4bad-90ee-158f2057fb50 container env3cont:
STEP: delete the pod
May 12 13:40:11.478: INFO: Waiting for pod client-envvars-13da4574-db84-4bad-90ee-158f2057fb50 to disappear
May 12 13:40:11.495: INFO: Pod client-envvars-13da4574-db84-4bad-90ee-158f2057fb50 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:40:11.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4160" for this suite.
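The "environment variables for services" test above relies on the kubelet injecting docker-link-style variables (NAME_SERVICE_HOST, NAME_SERVICE_PORT, and further *_PORT_* variants) for every active service into pods created afterwards. A sketch of the naming convention only, with made-up service values:

```python
def service_env_vars(name: str, host: str, port: int) -> dict:
    """Build the basic env-var names Kubernetes derives from a service name
    (sketch: the kubelet also injects additional *_PORT_* variants)."""
    prefix = name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": host,
        f"{prefix}_SERVICE_PORT": str(port),
    }

# Hypothetical service values for illustration only.
env = service_env_vars("fooservice", "10.96.0.10", 8765)
assert env["FOOSERVICE_SERVICE_HOST"] == "10.96.0.10"
assert env["FOOSERVICE_SERVICE_PORT"] == "8765"
```

Because the variables are captured at container start, the test creates the service first and the client pod (env3cont) afterwards.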
May 12 13:40:53.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:40:53.604: INFO: namespace pods-4160 deletion completed in 42.105821137s
• [SLOW TEST:54.662 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:40:53.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-d3d5f19c-eb41-465e-a41d-3b72b42285be
STEP: Creating a pod to test consume configMaps
May 12 13:40:53.921: INFO: Waiting up to 5m0s for pod "pod-configmaps-212d54e9-8c1f-42ff-90d2-827ff913780c" in namespace "configmap-4128" to be "success or failure"
May 12 13:40:53.992: INFO: Pod "pod-configmaps-212d54e9-8c1f-42ff-90d2-827ff913780c": Phase="Pending", Reason="", readiness=false. Elapsed: 70.74625ms
May 12 13:40:55.995: INFO: Pod "pod-configmaps-212d54e9-8c1f-42ff-90d2-827ff913780c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073616854s
May 12 13:40:58.064: INFO: Pod "pod-configmaps-212d54e9-8c1f-42ff-90d2-827ff913780c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142859153s
May 12 13:41:00.068: INFO: Pod "pod-configmaps-212d54e9-8c1f-42ff-90d2-827ff913780c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.146877158s
STEP: Saw pod success
May 12 13:41:00.068: INFO: Pod "pod-configmaps-212d54e9-8c1f-42ff-90d2-827ff913780c" satisfied condition "success or failure"
May 12 13:41:00.071: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-212d54e9-8c1f-42ff-90d2-827ff913780c container configmap-volume-test:
STEP: delete the pod
May 12 13:41:00.137: INFO: Waiting for pod pod-configmaps-212d54e9-8c1f-42ff-90d2-827ff913780c to disappear
May 12 13:41:00.352: INFO: Pod pod-configmaps-212d54e9-8c1f-42ff-90d2-827ff913780c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:41:00.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4128" for this suite.
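The ConfigMap test above creates a pod that mounts a configMap volume and reads it while running as a non-root UID. A sketch of the shape of such a pod spec, built as a plain dict with illustrative names (not the exact spec the e2e framework generates):

```python
# Illustrative pod manifest: configMap volume consumed by a non-root container.
# All names here are hypothetical stand-ins for the generated ones in the log.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-example"},
    "spec": {
        "securityContext": {"runAsUser": 1000},  # non-root, as the test requires
        "containers": [{
            "name": "configmap-volume-test",
            "image": "busybox",
            "command": ["cat", "/etc/configmap-volume/data-1"],
            "volumeMounts": [{"name": "cfg", "mountPath": "/etc/configmap-volume"}],
        }],
        "volumes": [{
            "name": "cfg",
            "configMap": {"name": "configmap-test-volume-example"},
        }],
        "restartPolicy": "Never",
    },
}
assert pod["spec"]["securityContext"]["runAsUser"] != 0
```

The test then waits for the pod to reach Succeeded and checks the container logs for the configMap's contents, which is why the phases above go Pending → Succeeded with no Running sample captured.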
May 12 13:41:06.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:41:06.671: INFO: namespace configmap-4128 deletion completed in 6.315393376s
• [SLOW TEST:13.067 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:41:06.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
May 12 13:41:06.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8563'
May 12 13:41:07.014: INFO: stderr: ""
May 12 13:41:07.014: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 12 13:41:07.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8563'
May 12 13:41:07.125: INFO: stderr: ""
May 12 13:41:07.125: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
May 12 13:41:12.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8563'
May 12 13:41:12.228: INFO: stderr: ""
May 12 13:41:12.228: INFO: stdout: "update-demo-nautilus-pj5t4 update-demo-nautilus-s77b5 "
May 12 13:41:12.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pj5t4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8563'
May 12 13:41:12.309: INFO: stderr: ""
May 12 13:41:12.309: INFO: stdout: ""
May 12 13:41:12.309: INFO: update-demo-nautilus-pj5t4 is created but not running
May 12 13:41:17.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8563'
May 12 13:41:17.400: INFO: stderr: ""
May 12 13:41:17.401: INFO: stdout: "update-demo-nautilus-pj5t4 update-demo-nautilus-s77b5 "
May 12 13:41:17.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pj5t4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8563'
May 12 13:41:17.485: INFO: stderr: ""
May 12 13:41:17.485: INFO: stdout: "true"
May 12 13:41:17.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pj5t4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8563'
May 12 13:41:17.573: INFO: stderr: ""
May 12 13:41:17.573: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 12 13:41:17.573: INFO: validating pod update-demo-nautilus-pj5t4
May 12 13:41:17.576: INFO: got data: { "image": "nautilus.jpg" }
May 12 13:41:17.576: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 12 13:41:17.576: INFO: update-demo-nautilus-pj5t4 is verified up and running
May 12 13:41:17.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s77b5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8563'
May 12 13:41:17.670: INFO: stderr: ""
May 12 13:41:17.670: INFO: stdout: "true"
May 12 13:41:17.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s77b5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8563'
May 12 13:41:17.757: INFO: stderr: ""
May 12 13:41:17.757: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 12 13:41:17.757: INFO: validating pod update-demo-nautilus-s77b5
May 12 13:41:17.760: INFO: got data: { "image": "nautilus.jpg" }
May 12 13:41:17.760: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 12 13:41:17.760: INFO: update-demo-nautilus-s77b5 is verified up and running
STEP: using delete to clean up resources
May 12 13:41:17.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8563'
May 12 13:41:17.887: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 13:41:17.887: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 12 13:41:17.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8563'
May 12 13:41:17.990: INFO: stderr: "No resources found.\n"
May 12 13:41:17.990: INFO: stdout: ""
May 12 13:41:17.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8563 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 12 13:41:18.127: INFO: stderr: ""
May 12 13:41:18.127: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:41:18.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8563" for this suite.
May 12 13:41:42.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:41:42.203: INFO: namespace kubectl-8563 deletion completed in 24.072624664s
• [SLOW TEST:35.531 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:41:42.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7815
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 12 13:41:42.319: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 12 13:42:08.403: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7815 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:42:08.403: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:42:08.436639 6 log.go:172] (0xc001cda0b0) (0xc000a66640) Create stream
I0512 13:42:08.436663 6 log.go:172] (0xc001cda0b0) (0xc000a66640) Stream added, broadcasting: 1
I0512 13:42:08.438893 6 log.go:172] (0xc001cda0b0) Reply frame received for 1
I0512 13:42:08.438953 6 log.go:172] (0xc001cda0b0) (0xc002036be0) Create stream
I0512 13:42:08.438979 6 log.go:172] (0xc001cda0b0) (0xc002036be0) Stream added, broadcasting: 3
I0512 13:42:08.439968 6 log.go:172] (0xc001cda0b0) Reply frame received for 3
I0512 13:42:08.439997 6 log.go:172] (0xc001cda0b0) (0xc002036c80) Create stream
I0512 13:42:08.440009 6 log.go:172] (0xc001cda0b0) (0xc002036c80) Stream added, broadcasting: 5
I0512 13:42:08.440873 6 log.go:172] (0xc001cda0b0) Reply frame received for 5
I0512 13:42:08.504064 6 log.go:172] (0xc001cda0b0) Data frame received for 3
I0512 13:42:08.504119 6 log.go:172] (0xc002036be0) (3) Data frame handling
I0512 13:42:08.504199 6 log.go:172] (0xc002036be0) (3) Data frame sent
I0512 13:42:08.504270 6 log.go:172] (0xc001cda0b0) Data frame received for 5
I0512 13:42:08.504298 6 log.go:172] (0xc002036c80) (5) Data frame handling
I0512 13:42:08.504404 6 log.go:172] (0xc001cda0b0) Data frame received for 3
I0512 13:42:08.504425 6 log.go:172] (0xc002036be0) (3) Data frame handling
I0512 13:42:08.506060 6 log.go:172] (0xc001cda0b0) Data frame received for 1
I0512 13:42:08.506072 6 log.go:172] (0xc000a66640) (1) Data frame handling
I0512 13:42:08.506082 6 log.go:172] (0xc000a66640) (1) Data frame sent
I0512 13:42:08.506089 6 log.go:172] (0xc001cda0b0) (0xc000a66640) Stream removed, broadcasting: 1
I0512 13:42:08.506101 6 log.go:172] (0xc001cda0b0) Go away received
I0512 13:42:08.506249 6 log.go:172] (0xc001cda0b0) (0xc000a66640) Stream removed, broadcasting: 1
I0512 13:42:08.506272 6 log.go:172] (0xc001cda0b0) (0xc002036be0) Stream removed, broadcasting: 3
I0512 13:42:08.506281 6 log.go:172] (0xc001cda0b0) (0xc002036c80) Stream removed, broadcasting: 5
May 12 13:42:08.506: INFO: Found all expected endpoints: [netserver-0]
May 12 13:42:08.508: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.54:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7815 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 12 13:42:08.508: INFO: >>> kubeConfig: /root/.kube/config
I0512 13:42:08.539192 6 log.go:172] (0xc0005b9ef0) (0xc002510640) Create stream
I0512 13:42:08.539226 6 log.go:172] (0xc0005b9ef0) (0xc002510640) Stream added, broadcasting: 1
I0512 13:42:08.541622 6 log.go:172] (0xc0005b9ef0) Reply frame received for 1
I0512 13:42:08.541667 6 log.go:172] (0xc0005b9ef0) (0xc001af2f00) Create stream
I0512 13:42:08.541680 6 log.go:172] (0xc0005b9ef0) (0xc001af2f00) Stream added, broadcasting: 3
I0512 13:42:08.542366 6 log.go:172] (0xc0005b9ef0) Reply frame received for 3
I0512 13:42:08.542391 6 log.go:172] (0xc0005b9ef0) (0xc0025106e0) Create stream
I0512 13:42:08.542398 6 log.go:172] (0xc0005b9ef0) (0xc0025106e0) Stream added, broadcasting: 5
I0512 13:42:08.543092 6 log.go:172] (0xc0005b9ef0) Reply frame received for 5
I0512 13:42:08.609632 6 log.go:172] (0xc0005b9ef0) Data frame received for 5
I0512 13:42:08.609662 6 log.go:172] (0xc0025106e0) (5) Data frame handling
I0512 13:42:08.609697 6 log.go:172] (0xc0005b9ef0) Data frame received for 3
I0512 13:42:08.609732 6 log.go:172] (0xc001af2f00) (3) Data frame handling
I0512 13:42:08.609756 6 log.go:172] (0xc001af2f00) (3) Data frame sent
I0512 13:42:08.609771 6 log.go:172] (0xc0005b9ef0) Data frame received for 3
I0512 13:42:08.609784 6 log.go:172] (0xc001af2f00) (3) Data frame handling
I0512 13:42:08.611366 6 log.go:172] (0xc0005b9ef0) Data frame received for 1
I0512 13:42:08.611382 6 log.go:172] (0xc002510640) (1) Data frame handling
I0512 13:42:08.611391 6 log.go:172] (0xc002510640) (1) Data frame sent
I0512 13:42:08.611399 6 log.go:172] (0xc0005b9ef0) (0xc002510640) Stream removed, broadcasting: 1
I0512 13:42:08.611407 6 log.go:172] (0xc0005b9ef0) Go away received
I0512 13:42:08.611549 6 log.go:172] (0xc0005b9ef0) (0xc002510640) Stream removed, broadcasting: 1
I0512 13:42:08.611567 6 log.go:172] (0xc0005b9ef0) (0xc001af2f00) Stream removed, broadcasting: 3
I0512 13:42:08.611576 6 log.go:172] (0xc0005b9ef0) (0xc0025106e0) Stream removed, broadcasting: 5
May 12 13:42:08.611: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:42:08.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7815" for this suite.
May 12 13:42:34.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:42:35.055: INFO: namespace pod-network-test-7815 deletion completed in 26.440518951s
• [SLOW TEST:52.852 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:42:35.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 12 13:42:43.998: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:42:44.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6134" for this suite.
May 12 13:42:50.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:42:50.359: INFO: namespace container-runtime-6134 deletion completed in 6.145538118s
• [SLOW TEST:15.304 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:42:50.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 13:42:50.455: INFO: Creating ReplicaSet my-hostname-basic-aad36fd7-907b-4572-8bae-685634d3c0ee
May 12 13:42:50.473: INFO: Pod name my-hostname-basic-aad36fd7-907b-4572-8bae-685634d3c0ee: Found 0 pods out of 1
May 12 13:42:55.476: INFO: Pod name my-hostname-basic-aad36fd7-907b-4572-8bae-685634d3c0ee: Found 1 pods out of 1
May 12 13:42:55.476: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-aad36fd7-907b-4572-8bae-685634d3c0ee" is running
May 12 13:42:55.478: INFO: Pod "my-hostname-basic-aad36fd7-907b-4572-8bae-685634d3c0ee-gq58m" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 13:42:50 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 13:42:53 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 13:42:53 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 13:42:50 +0000 UTC Reason: Message:}])
May 12 13:42:55.478: INFO: Trying to dial the pod
May 12 13:43:00.490: INFO: Controller my-hostname-basic-aad36fd7-907b-4572-8bae-685634d3c0ee: Got expected result from replica 1 [my-hostname-basic-aad36fd7-907b-4572-8bae-685634d3c0ee-gq58m]: "my-hostname-basic-aad36fd7-907b-4572-8bae-685634d3c0ee-gq58m", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:43:00.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6259" for this suite.
May 12 13:43:08.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:43:08.942: INFO: namespace replicaset-6259 deletion completed in 8.324468704s
• [SLOW TEST:18.583 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:43:08.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-c2a4063e-6c9b-49de-912b-7c9a7607ec4d
STEP: Creating a pod to test consume configMaps
May 12 13:43:10.127: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f532c4f-e3cf-4966-b41d-408dde930fbc" in namespace "configmap-1213" to be "success or failure"
May 12 13:43:10.191: INFO: Pod "pod-configmaps-6f532c4f-e3cf-4966-b41d-408dde930fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 63.564675ms
May 12 13:43:12.215: INFO: Pod "pod-configmaps-6f532c4f-e3cf-4966-b41d-408dde930fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08808978s
May 12 13:43:14.251: INFO: Pod "pod-configmaps-6f532c4f-e3cf-4966-b41d-408dde930fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123486325s
May 12 13:43:16.514: INFO: Pod "pod-configmaps-6f532c4f-e3cf-4966-b41d-408dde930fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.387401677s
May 12 13:43:18.518: INFO: Pod "pod-configmaps-6f532c4f-e3cf-4966-b41d-408dde930fbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.391201845s
STEP: Saw pod success
May 12 13:43:18.518: INFO: Pod "pod-configmaps-6f532c4f-e3cf-4966-b41d-408dde930fbc" satisfied condition "success or failure"
May 12 13:43:18.522: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6f532c4f-e3cf-4966-b41d-408dde930fbc container configmap-volume-test:
STEP: delete the pod
May 12 13:43:18.700: INFO: Waiting for pod pod-configmaps-6f532c4f-e3cf-4966-b41d-408dde930fbc to disappear
May 12 13:43:18.887: INFO: Pod pod-configmaps-6f532c4f-e3cf-4966-b41d-408dde930fbc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:43:18.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1213" for this suite.
May 12 13:43:25.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:43:25.073: INFO: namespace configmap-1213 deletion completed in 6.182517798s
• [SLOW TEST:16.130 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:43:25.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-rlhj
STEP: Creating a pod to test atomic-volume-subpath
May 12 13:43:25.147: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rlhj" in namespace "subpath-30" to be "success or failure"
May 12 13:43:25.151: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081614ms
May 12 13:43:27.155: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007860272s
May 12 13:43:29.163: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016148717s
May 12 13:43:31.166: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Running", Reason="", readiness=true. Elapsed: 6.019687123s
May 12 13:43:33.170: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Running", Reason="", readiness=true. Elapsed: 8.023769562s
May 12 13:43:35.174: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Running", Reason="", readiness=true. Elapsed: 10.027440157s
May 12 13:43:37.217: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Running", Reason="", readiness=true. Elapsed: 12.070732161s
May 12 13:43:39.223: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Running", Reason="", readiness=true. Elapsed: 14.076158446s
May 12 13:43:41.233: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Running", Reason="", readiness=true. Elapsed: 16.086740787s
May 12 13:43:43.237: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Running", Reason="", readiness=true. Elapsed: 18.090268237s
May 12 13:43:45.241: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Running", Reason="", readiness=true. Elapsed: 20.094416871s
May 12 13:43:47.244: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Running", Reason="", readiness=true. Elapsed: 22.097627863s
May 12 13:43:49.247: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Running", Reason="", readiness=true. Elapsed: 24.100780306s
May 12 13:43:51.569: INFO: Pod "pod-subpath-test-projected-rlhj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.422604831s
STEP: Saw pod success
May 12 13:43:51.569: INFO: Pod "pod-subpath-test-projected-rlhj" satisfied condition "success or failure"
May 12 13:43:51.573: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-rlhj container test-container-subpath-projected-rlhj:
STEP: delete the pod
May 12 13:43:51.726: INFO: Waiting for pod pod-subpath-test-projected-rlhj to disappear
May 12 13:43:51.736: INFO: Pod pod-subpath-test-projected-rlhj no longer exists
STEP: Deleting pod pod-subpath-test-projected-rlhj
May 12 13:43:51.736: INFO: Deleting pod "pod-subpath-test-projected-rlhj" in namespace "subpath-30"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:43:51.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-30" for this suite.
May 12 13:43:57.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:43:57.852: INFO: namespace subpath-30 deletion completed in 6.092258257s
• [SLOW TEST:32.778 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:43:57.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 13:43:57.963: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 12 13:44:02.969: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 12 13:44:02.969: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 12 13:44:04.972: INFO: Creating deployment "test-rollover-deployment"
May 12 13:44:05.000: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May 12 13:44:07.390: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May 12 13:44:07.452: INFO: Ensure that both replica sets have 1 created replica
May 12 13:44:07.723: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May 12 13:44:07.730: INFO: Updating deployment test-rollover-deployment
May 12 13:44:07.730: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
May 12 13:44:10.156: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
May 12 13:44:10.162: INFO: Make sure deployment "test-rollover-deployment" is complete
May 12 13:44:10.167: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:44:10.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887848, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:44:12.209: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:44:12.209: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887848, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:44:14.174: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:44:14.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887853, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:44:16.173: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:44:16.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887853, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:44:18.175: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:44:18.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887853, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:44:20.175: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:44:20.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887853, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:44:22.174: INFO: all replica sets need to contain the pod-template-hash label
May 12 13:44:22.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887853, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724887845, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 12 13:44:24.175: INFO:
May 12 13:44:24.175: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 12 13:44:24.184: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5676,SelfLink:/apis/apps/v1/namespaces/deployment-5676/deployments/test-rollover-deployment,UID:88651282-9479-461b-9545-370c629820f8,ResourceVersion:10489859,Generation:2,CreationTimestamp:2020-05-12 13:44:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision:
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-12 13:44:05 +0000 UTC 2020-05-12 13:44:05 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-12 13:44:23 +0000 UTC 2020-05-12 13:44:05 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 12 13:44:24.188: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5676,SelfLink:/apis/apps/v1/namespaces/deployment-5676/replicasets/test-rollover-deployment-854595fc44,UID:6878c883-3e67-4313-a176-37336fdbe359,ResourceVersion:10489848,Generation:2,CreationTimestamp:2020-05-12 13:44:07 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 88651282-9479-461b-9545-370c629820f8 0xc002d172d7 0xc002d172d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 12 13:44:24.188: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 12 13:44:24.188: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5676,SelfLink:/apis/apps/v1/namespaces/deployment-5676/replicasets/test-rollover-controller,UID:a6dc9962-e57f-4da8-b4a2-a753a4085577,ResourceVersion:10489857,Generation:2,CreationTimestamp:2020-05-12 13:43:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 88651282-9479-461b-9545-370c629820f8 0xc002d171ef 0xc002d17200}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 13:44:24.188: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5676,SelfLink:/apis/apps/v1/namespaces/deployment-5676/replicasets/test-rollover-deployment-9b8b997cf,UID:c4a8c0f8-9641-4b83-ad52-9e40d98f90cf,ResourceVersion:10489805,Generation:2,CreationTimestamp:2020-05-12 13:44:05 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 88651282-9479-461b-9545-370c629820f8 0xc002d173a0 0xc002d173a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 13:44:24.192: INFO: Pod "test-rollover-deployment-854595fc44-8qfkp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-8qfkp,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5676,SelfLink:/api/v1/namespaces/deployment-5676/pods/test-rollover-deployment-854595fc44-8qfkp,UID:1ade7547-9a93-4193-93c9-0f8ceed0abff,ResourceVersion:10489823,Generation:0,CreationTimestamp:2020-05-12 13:44:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 6878c883-3e67-4313-a176-37336fdbe359 0xc002686757 0xc002686758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hk8rs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hk8rs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-hk8rs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026867d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026867f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:44:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:44:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:44:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:44:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.60,StartTime:2020-05-12 13:44:08 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-12 13:44:12 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://e1cf1a640423941e1e67f3605ac4ca0e529dd505a03a33fa17b74e027794519a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:44:24.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5676" for this suite. May 12 13:44:32.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:44:32.279: INFO: namespace deployment-5676 deletion completed in 8.083712913s • [SLOW TEST:34.427 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:44:32.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy May 12 13:44:32.667: 
INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix930449061/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:44:32.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6601" for this suite. May 12 13:44:38.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:44:38.832: INFO: namespace kubectl-6601 deletion completed in 6.099361711s • [SLOW TEST:6.552 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:44:38.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-00f523c3-adb8-4d7d-a70e-aacafa00e373 
STEP: Creating secret with name s-test-opt-upd-1fe46e3d-d025-48ba-9656-408df7c5ff33 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-00f523c3-adb8-4d7d-a70e-aacafa00e373 STEP: Updating secret s-test-opt-upd-1fe46e3d-d025-48ba-9656-408df7c5ff33 STEP: Creating secret with name s-test-opt-create-a406615d-8c5d-46c5-b9c0-2e3e2375fd0f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:44:49.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2594" for this suite. May 12 13:45:13.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:45:13.236: INFO: namespace projected-2594 deletion completed in 24.08406321s • [SLOW TEST:34.404 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:45:13.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 13:45:39.947: INFO: Container started at 2020-05-12 13:45:17 +0000 UTC, pod became ready at 2020-05-12 13:45:39 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:45:39.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7647" for this suite. May 12 13:46:02.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:46:02.110: INFO: namespace container-probe-7647 deletion completed in 22.159403306s • [SLOW TEST:48.874 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:46:02.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan 
RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0512 13:46:32.283070 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 13:46:32.283: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:46:32.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7205" for this suite. 
May 12 13:46:42.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:46:42.562: INFO: namespace gc-7205 deletion completed in 10.276095569s • [SLOW TEST:40.452 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:46:42.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 12 13:46:42.769: INFO: Waiting up to 5m0s for pod "pod-3d40789e-10cc-46d6-bccb-bb95117f408b" in namespace "emptydir-3490" to be "success or failure" May 12 13:46:42.772: INFO: Pod "pod-3d40789e-10cc-46d6-bccb-bb95117f408b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.730901ms May 12 13:46:45.003: INFO: Pod "pod-3d40789e-10cc-46d6-bccb-bb95117f408b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234031182s May 12 13:46:47.006: INFO: Pod "pod-3d40789e-10cc-46d6-bccb-bb95117f408b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.237257021s May 12 13:46:49.009: INFO: Pod "pod-3d40789e-10cc-46d6-bccb-bb95117f408b": Phase="Running", Reason="", readiness=true. Elapsed: 6.240140533s May 12 13:46:51.012: INFO: Pod "pod-3d40789e-10cc-46d6-bccb-bb95117f408b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.243238465s STEP: Saw pod success May 12 13:46:51.012: INFO: Pod "pod-3d40789e-10cc-46d6-bccb-bb95117f408b" satisfied condition "success or failure" May 12 13:46:51.015: INFO: Trying to get logs from node iruya-worker pod pod-3d40789e-10cc-46d6-bccb-bb95117f408b container test-container: STEP: delete the pod May 12 13:46:51.886: INFO: Waiting for pod pod-3d40789e-10cc-46d6-bccb-bb95117f408b to disappear May 12 13:46:51.897: INFO: Pod pod-3d40789e-10cc-46d6-bccb-bb95117f408b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:46:51.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3490" for this suite. 
May 12 13:46:58.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:46:59.050: INFO: namespace emptydir-3490 deletion completed in 7.082807378s • [SLOW TEST:16.488 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:46:59.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-1267893f-4be6-4583-887d-4fddca5bb0cf in namespace container-probe-9801 May 12 13:47:05.968: INFO: Started pod liveness-1267893f-4be6-4583-887d-4fddca5bb0cf in namespace container-probe-9801 STEP: checking the pod's current state and verifying that restartCount is present May 12 13:47:05.972: INFO: Initial restart count of pod liveness-1267893f-4be6-4583-887d-4fddca5bb0cf is 0 May 12 13:47:27.343: INFO: Restart count of pod 
container-probe-9801/liveness-1267893f-4be6-4583-887d-4fddca5bb0cf is now 1 (21.370844451s elapsed) May 12 13:47:41.539: INFO: Restart count of pod container-probe-9801/liveness-1267893f-4be6-4583-887d-4fddca5bb0cf is now 2 (35.566843816s elapsed) May 12 13:48:01.575: INFO: Restart count of pod container-probe-9801/liveness-1267893f-4be6-4583-887d-4fddca5bb0cf is now 3 (55.602928639s elapsed) May 12 13:48:21.673: INFO: Restart count of pod container-probe-9801/liveness-1267893f-4be6-4583-887d-4fddca5bb0cf is now 4 (1m15.701235605s elapsed) May 12 13:49:21.883: INFO: Restart count of pod container-probe-9801/liveness-1267893f-4be6-4583-887d-4fddca5bb0cf is now 5 (2m15.910996715s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:49:22.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9801" for this suite. 
May 12 13:49:30.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:49:31.101: INFO: namespace container-probe-9801 deletion completed in 8.546638764s • [SLOW TEST:152.050 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:49:31.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 12 13:49:31.615: INFO: Waiting up to 5m0s for pod "pod-8012d6e9-3a09-4598-8e70-4c850a9be44a" in namespace "emptydir-785" to be "success or failure" May 12 13:49:31.863: INFO: Pod "pod-8012d6e9-3a09-4598-8e70-4c850a9be44a": Phase="Pending", Reason="", readiness=false. Elapsed: 247.770734ms May 12 13:49:34.010: INFO: Pod "pod-8012d6e9-3a09-4598-8e70-4c850a9be44a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.394687369s May 12 13:49:36.012: INFO: Pod "pod-8012d6e9-3a09-4598-8e70-4c850a9be44a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.397203502s May 12 13:49:38.016: INFO: Pod "pod-8012d6e9-3a09-4598-8e70-4c850a9be44a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.400732721s STEP: Saw pod success May 12 13:49:38.016: INFO: Pod "pod-8012d6e9-3a09-4598-8e70-4c850a9be44a" satisfied condition "success or failure" May 12 13:49:38.019: INFO: Trying to get logs from node iruya-worker2 pod pod-8012d6e9-3a09-4598-8e70-4c850a9be44a container test-container: STEP: delete the pod May 12 13:49:38.140: INFO: Waiting for pod pod-8012d6e9-3a09-4598-8e70-4c850a9be44a to disappear May 12 13:49:38.170: INFO: Pod pod-8012d6e9-3a09-4598-8e70-4c850a9be44a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:49:38.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-785" for this suite. May 12 13:49:44.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:49:44.286: INFO: namespace emptydir-785 deletion completed in 6.112797323s • [SLOW TEST:13.185 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:49:44.286: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-a819a3c1-cf64-47f0-bfb9-842bc1585a2e in namespace container-probe-7850 May 12 13:49:48.923: INFO: Started pod busybox-a819a3c1-cf64-47f0-bfb9-842bc1585a2e in namespace container-probe-7850 STEP: checking the pod's current state and verifying that restartCount is present May 12 13:49:48.926: INFO: Initial restart count of pod busybox-a819a3c1-cf64-47f0-bfb9-842bc1585a2e is 0 May 12 13:50:40.272: INFO: Restart count of pod container-probe-7850/busybox-a819a3c1-cf64-47f0-bfb9-842bc1585a2e is now 1 (51.345669644s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:50:40.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7850" for this suite. 
May 12 13:50:47.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:50:47.087: INFO: namespace container-probe-7850 deletion completed in 6.472475171s • [SLOW TEST:62.801 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:50:47.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 12 13:50:47.358: INFO: Waiting up to 5m0s for pod "downward-api-5c8ac8ef-787e-402f-9239-ce04fb9e86b1" in namespace "downward-api-5953" to be "success or failure" May 12 13:50:47.375: INFO: Pod "downward-api-5c8ac8ef-787e-402f-9239-ce04fb9e86b1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.356294ms May 12 13:50:49.379: INFO: Pod "downward-api-5c8ac8ef-787e-402f-9239-ce04fb9e86b1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021701993s May 12 13:50:51.594: INFO: Pod "downward-api-5c8ac8ef-787e-402f-9239-ce04fb9e86b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236108097s May 12 13:50:54.132: INFO: Pod "downward-api-5c8ac8ef-787e-402f-9239-ce04fb9e86b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.774178401s STEP: Saw pod success May 12 13:50:54.132: INFO: Pod "downward-api-5c8ac8ef-787e-402f-9239-ce04fb9e86b1" satisfied condition "success or failure" May 12 13:50:54.135: INFO: Trying to get logs from node iruya-worker pod downward-api-5c8ac8ef-787e-402f-9239-ce04fb9e86b1 container dapi-container: STEP: delete the pod May 12 13:50:54.872: INFO: Waiting for pod downward-api-5c8ac8ef-787e-402f-9239-ce04fb9e86b1 to disappear May 12 13:50:54.938: INFO: Pod downward-api-5c8ac8ef-787e-402f-9239-ce04fb9e86b1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:50:54.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5953" for this suite. 
May 12 13:51:01.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:51:01.230: INFO: namespace downward-api-5953 deletion completed in 6.289061404s • [SLOW TEST:14.143 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:51:01.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 13:51:01.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76dfb4d7-d7ad-4b02-8f43-5174000153b5" in namespace "downward-api-8322" to be "success or failure" May 12 13:51:01.438: INFO: Pod "downwardapi-volume-76dfb4d7-d7ad-4b02-8f43-5174000153b5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.464726ms May 12 13:51:03.774: INFO: Pod "downwardapi-volume-76dfb4d7-d7ad-4b02-8f43-5174000153b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.366921783s May 12 13:51:06.306: INFO: Pod "downwardapi-volume-76dfb4d7-d7ad-4b02-8f43-5174000153b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.898747034s May 12 13:51:08.311: INFO: Pod "downwardapi-volume-76dfb4d7-d7ad-4b02-8f43-5174000153b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.903246822s STEP: Saw pod success May 12 13:51:08.311: INFO: Pod "downwardapi-volume-76dfb4d7-d7ad-4b02-8f43-5174000153b5" satisfied condition "success or failure" May 12 13:51:08.314: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-76dfb4d7-d7ad-4b02-8f43-5174000153b5 container client-container: STEP: delete the pod May 12 13:51:08.501: INFO: Waiting for pod downwardapi-volume-76dfb4d7-d7ad-4b02-8f43-5174000153b5 to disappear May 12 13:51:08.830: INFO: Pod downwardapi-volume-76dfb4d7-d7ad-4b02-8f43-5174000153b5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:51:08.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8322" for this suite. 
May 12 13:51:15.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:51:15.118: INFO: namespace downward-api-8322 deletion completed in 6.284213883s • [SLOW TEST:13.887 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:51:15.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-6bc81078-b292-4b0f-b844-ad9bd1d1bef3 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-6bc81078-b292-4b0f-b844-ad9bd1d1bef3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:51:21.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2944" for this suite. 
May 12 13:51:45.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:51:45.453: INFO: namespace projected-2944 deletion completed in 24.184684687s • [SLOW TEST:30.335 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:51:45.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 12 13:51:45.910: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:45.933: INFO: Number of nodes with available pods: 0 May 12 13:51:45.933: INFO: Node iruya-worker is running more than one daemon pod May 12 13:51:46.937: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:46.939: INFO: Number of nodes with available pods: 0 May 12 13:51:46.939: INFO: Node iruya-worker is running more than one daemon pod May 12 13:51:47.937: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:47.939: INFO: Number of nodes with available pods: 0 May 12 13:51:47.939: INFO: Node iruya-worker is running more than one daemon pod May 12 13:51:49.086: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:49.090: INFO: Number of nodes with available pods: 0 May 12 13:51:49.090: INFO: Node iruya-worker is running more than one daemon pod May 12 13:51:49.939: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:49.942: INFO: Number of nodes with available pods: 0 May 12 13:51:49.942: INFO: Node iruya-worker is running more than one daemon pod May 12 13:51:51.003: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:51.006: INFO: Number of nodes with available pods: 0 May 12 13:51:51.006: INFO: Node 
iruya-worker is running more than one daemon pod May 12 13:51:51.991: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:51.995: INFO: Number of nodes with available pods: 0 May 12 13:51:51.995: INFO: Node iruya-worker is running more than one daemon pod May 12 13:51:52.939: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:52.943: INFO: Number of nodes with available pods: 2 May 12 13:51:52.943: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 12 13:51:52.976: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:52.980: INFO: Number of nodes with available pods: 1 May 12 13:51:52.980: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:51:53.986: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:53.990: INFO: Number of nodes with available pods: 1 May 12 13:51:53.990: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:51:54.984: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:54.987: INFO: Number of nodes with available pods: 1 May 12 13:51:54.987: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:51:56.014: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 12 13:51:56.035: INFO: Number of nodes with available pods: 1 May 12 13:51:56.035: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:51:56.984: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:56.986: INFO: Number of nodes with available pods: 1 May 12 13:51:56.986: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:51:57.984: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:57.987: INFO: Number of nodes with available pods: 1 May 12 13:51:57.987: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:51:58.983: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:58.985: INFO: Number of nodes with available pods: 1 May 12 13:51:58.986: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:51:59.985: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:51:59.989: INFO: Number of nodes with available pods: 1 May 12 13:51:59.989: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:52:00.986: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:52:00.990: INFO: Number of nodes with available pods: 1 May 12 13:52:00.990: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:52:02.111: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:52:02.114: INFO: Number of nodes with available pods: 1 May 12 13:52:02.114: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:52:02.984: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:52:02.987: INFO: Number of nodes with available pods: 1 May 12 13:52:02.987: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:52:03.983: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:52:03.985: INFO: Number of nodes with available pods: 1 May 12 13:52:03.985: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:52:05.117: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:52:05.293: INFO: Number of nodes with available pods: 1 May 12 13:52:05.293: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:52:06.202: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:52:06.205: INFO: Number of nodes with available pods: 1 May 12 13:52:06.205: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:52:06.984: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:52:06.988: INFO: Number of nodes with available pods: 1 May 12 13:52:06.988: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:52:07.984: INFO: DaemonSet pods 
can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:52:07.986: INFO: Number of nodes with available pods: 1 May 12 13:52:07.986: INFO: Node iruya-worker2 is running more than one daemon pod May 12 13:52:08.983: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 13:52:09.265: INFO: Number of nodes with available pods: 2 May 12 13:52:09.265: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4556, will wait for the garbage collector to delete the pods May 12 13:52:09.335: INFO: Deleting DaemonSet.extensions daemon-set took: 4.90123ms May 12 13:52:09.635: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.270328ms May 12 13:52:14.138: INFO: Number of nodes with available pods: 0 May 12 13:52:14.138: INFO: Number of running nodes: 0, number of available pods: 0 May 12 13:52:14.141: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4556/daemonsets","resourceVersion":"10491186"},"items":null} May 12 13:52:14.143: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4556/pods","resourceVersion":"10491186"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:52:14.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4556" for this suite. 
May 12 13:52:24.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:52:24.334: INFO: namespace daemonsets-4556 deletion completed in 10.094869175s • [SLOW TEST:38.881 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:52:24.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-4117d355-befb-472b-8cd2-8978a354fe1f STEP: Creating a pod to test consume secrets May 12 13:52:24.466: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-59b88344-b458-479a-bc79-7f9c8610a846" in namespace "projected-2912" to be "success or failure" May 12 13:52:24.474: INFO: Pod "pod-projected-secrets-59b88344-b458-479a-bc79-7f9c8610a846": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.502023ms May 12 13:52:26.530: INFO: Pod "pod-projected-secrets-59b88344-b458-479a-bc79-7f9c8610a846": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063673585s May 12 13:52:28.533: INFO: Pod "pod-projected-secrets-59b88344-b458-479a-bc79-7f9c8610a846": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067368027s May 12 13:52:30.643: INFO: Pod "pod-projected-secrets-59b88344-b458-479a-bc79-7f9c8610a846": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.176521427s STEP: Saw pod success May 12 13:52:30.643: INFO: Pod "pod-projected-secrets-59b88344-b458-479a-bc79-7f9c8610a846" satisfied condition "success or failure" May 12 13:52:30.652: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-59b88344-b458-479a-bc79-7f9c8610a846 container projected-secret-volume-test: STEP: delete the pod May 12 13:52:30.670: INFO: Waiting for pod pod-projected-secrets-59b88344-b458-479a-bc79-7f9c8610a846 to disappear May 12 13:52:30.681: INFO: Pod pod-projected-secrets-59b88344-b458-479a-bc79-7f9c8610a846 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:52:30.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2912" for this suite. 
May 12 13:52:36.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:52:36.842: INFO: namespace projected-2912 deletion completed in 6.158451014s • [SLOW TEST:12.507 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:52:36.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 13:52:36.955: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 3.558108ms) May 12 13:52:36.957: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.109281ms) May 12 13:52:36.959: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 1.913679ms) May 12 13:52:36.961: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.633026ms) May 12 13:52:36.964: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.470126ms) May 12 13:52:36.967: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.812755ms) May 12 13:52:36.970: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.684052ms) May 12 13:52:36.972: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 1.98039ms) May 12 13:52:36.974: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.61821ms) May 12 13:52:36.976: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 1.895759ms) May 12 13:52:36.978: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.20206ms) May 12 13:52:36.981: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.247885ms) May 12 13:52:37.002: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 21.848502ms) May 12 13:52:37.006: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.055611ms) May 12 13:52:37.009: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.036963ms) May 12 13:52:37.012: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.061141ms) May 12 13:52:37.015: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.264745ms) May 12 13:52:37.018: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.241927ms) May 12 13:52:37.022: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.129085ms) May 12 13:52:37.024: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.430757ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:52:37.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7720" for this suite. May 12 13:52:43.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:52:43.119: INFO: namespace proxy-7720 deletion completed in 6.092220565s • [SLOW TEST:6.277 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:52:43.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0512 13:52:44.290164 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 13:52:44.290: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:52:44.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9281" for this suite. 
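The garbage-collector behaviour this test exercises (after the Deployment is deleted without orphaning, its ReplicaSet and Pods are eventually collected, hence the "expected 0 rs" / "expected 0 pods" steps above) can be sketched in miniature. This is a hypothetical model, not the controller's actual code: the real garbage collector watches the API server and maintains a dependency graph, but the core idea is that each object lists its owners in ownerReferences, and a non-orphaning delete cascades to every transitive dependent.

```python
# Minimal model of Kubernetes cascading deletion via ownerReferences.
# Hypothetical sketch: objects are a dict mapping each object's name to
# the list of owner names from its ownerReferences.

def cascade_delete(objects, name, orphan=False):
    """Delete `name`; unless orphaning, also delete every object whose
    ownerReferences chain leads back to `name` (recursively)."""
    deleted = {name}
    if not orphan:
        frontier = [name]
        while frontier:
            owner = frontier.pop()
            for obj, owners in objects.items():
                if owner in owners and obj not in deleted:
                    deleted.add(obj)
                    frontier.append(obj)
    return {k: v for k, v in objects.items() if k not in deleted}

# A Deployment owns a ReplicaSet, which owns two Pods:
cluster = {
    "deploy/nginx": [],
    "rs/nginx-55fb7cb77f": ["deploy/nginx"],
    "pod/nginx-a": ["rs/nginx-55fb7cb77f"],
    "pod/nginx-b": ["rs/nginx-55fb7cb77f"],
}
# Non-orphaning delete of the Deployment removes all four objects,
# which is the end state the test polls for.
remaining = cascade_delete(cluster, "deploy/nginx")
```

With `orphan=True` only the Deployment itself is removed and the ReplicaSet and Pods survive, which is the contrast the "when not orphaning" test name is drawing.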
May 12 13:52:50.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:52:50.603: INFO: namespace gc-9281 deletion completed in 6.311274713s
• [SLOW TEST:7.483 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:52:50.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-64f4fb76-5059-44b3-9adf-554ba2d336de
STEP: Creating configMap with name cm-test-opt-upd-d1954213-0bdb-4c43-83d0-b7a58775d2f8
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-64f4fb76-5059-44b3-9adf-554ba2d336de
STEP: Updating configmap cm-test-opt-upd-d1954213-0bdb-4c43-83d0-b7a58775d2f8
STEP: Creating configMap with name cm-test-opt-create-a838dd6b-e3ae-42af-87b3-27884a6049d9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 13:54:22.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1163" for this suite.
May 12 13:54:46.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 13:54:46.871: INFO: namespace projected-1163 deletion completed in 24.076650089s
• [SLOW TEST:116.268 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 13:54:46.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 13:54:47.028: INFO: Creating deployment "nginx-deployment"
May 12 13:54:47.032: INFO: Waiting for observed generation 1
May 12 13:54:49.133: INFO: Waiting for all required pods to come up
May 12 13:54:49.137: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
May 12 13:55:01.575: INFO: Waiting for deployment "nginx-deployment" to complete
May 12 13:55:01.581: INFO: Updating deployment "nginx-deployment" with a non-existent image
May 12 13:55:01.588: INFO: Updating deployment nginx-deployment
May 12 13:55:01.589: INFO: Waiting for observed generation 2
May 12 13:55:03.742: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 12 13:55:03.746: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 12 13:55:03.814: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 12 13:55:03.823: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 12 13:55:03.823: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 12 13:55:03.825: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 12 13:55:03.829: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
May 12 13:55:03.830: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
May 12 13:55:03.835: INFO: Updating deployment nginx-deployment
May 12 13:55:03.835: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
May 12 13:55:04.251: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 12 13:55:04.516: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 12 13:55:07.299: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-1795,SelfLink:/apis/apps/v1/namespaces/deployment-1795/deployments/nginx-deployment,UID:e0e60a1f-759a-477b-9686-009da1b32656,ResourceVersion:10491893,Generation:3,CreationTimestamp:2020-05-12 13:54:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-05-12 13:55:04 +0000 UTC 2020-05-12 13:55:04 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-12 13:55:04 +0000 UTC 2020-05-12 13:54:47 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} May 12 13:55:07.579: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-1795,SelfLink:/apis/apps/v1/namespaces/deployment-1795/replicasets/nginx-deployment-55fb7cb77f,UID:82eef196-5df5-4a28-bbfe-c667396901e0,ResourceVersion:10491882,Generation:3,CreationTimestamp:2020-05-12 13:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e0e60a1f-759a-477b-9686-009da1b32656 0xc002d8d1c7 0xc002d8d1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 13:55:07.579: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 12 13:55:07.579: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-1795,SelfLink:/apis/apps/v1/namespaces/deployment-1795/replicasets/nginx-deployment-7b8c6f4498,UID:e95da3f7-8727-45e0-8d37-36b7841bc5bf,ResourceVersion:10491887,Generation:3,CreationTimestamp:2020-05-12 13:54:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e0e60a1f-759a-477b-9686-009da1b32656 0xc002d8d297 0xc002d8d298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 12 13:55:08.563: INFO: Pod "nginx-deployment-55fb7cb77f-49fk2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-49fk2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-55fb7cb77f-49fk2,UID:8bfc1650-81db-4a92-a18b-49e4666c76b7,ResourceVersion:10491871,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 82eef196-5df5-4a28-bbfe-c667396901e0 0xc002cb9777 0xc002cb9778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002cb97f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb9810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.563: INFO: Pod "nginx-deployment-55fb7cb77f-4wbqf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4wbqf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-55fb7cb77f-4wbqf,UID:4cd9d9fc-6001-4a5b-818e-ce67a9118bec,ResourceVersion:10491808,Generation:0,CreationTimestamp:2020-05-12 13:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 82eef196-5df5-4a28-bbfe-c667396901e0 0xc002cb9897 0xc002cb9898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb9910} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb9930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-12 13:55:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.563: INFO: Pod "nginx-deployment-55fb7cb77f-524f6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-524f6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-55fb7cb77f-524f6,UID:108f8d9a-754b-4a3b-8c22-e1adebe2524a,ResourceVersion:10491899,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 82eef196-5df5-4a28-bbfe-c667396901e0 0xc002cb9a07 0xc002cb9a08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002cb9a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb9aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 13:55:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.563: INFO: Pod "nginx-deployment-55fb7cb77f-6td76" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6td76,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-55fb7cb77f-6td76,UID:3e75c3af-624b-4f39-959c-9dbb4e439d9e,ResourceVersion:10491877,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 82eef196-5df5-4a28-bbfe-c667396901e0 0xc002cb9b77 0xc002cb9b78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb9bf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb9c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.563: INFO: Pod "nginx-deployment-55fb7cb77f-8nzkq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8nzkq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-55fb7cb77f-8nzkq,UID:c942af23-809b-47b7-aa63-2c53d7f7a04a,ResourceVersion:10491873,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 82eef196-5df5-4a28-bbfe-c667396901e0 0xc002cb9c97 0xc002cb9c98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002cb9d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb9d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.563: INFO: Pod "nginx-deployment-55fb7cb77f-d495m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d495m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-55fb7cb77f-d495m,UID:44dde6b8-0dba-4c90-a918-5cdc3d1876c9,ResourceVersion:10491881,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 82eef196-5df5-4a28-bbfe-c667396901e0 0xc002cb9db7 0xc002cb9db8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb9e30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb9e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-12 13:55:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.564: INFO: Pod "nginx-deployment-55fb7cb77f-dss8v" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dss8v,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-55fb7cb77f-dss8v,UID:74cada7a-b030-40fe-8874-10d55fd8098a,ResourceVersion:10491891,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 82eef196-5df5-4a28-bbfe-c667396901e0 0xc002cb9f27 0xc002cb9f28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002cb9fa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb9fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-12 13:55:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.564: INFO: Pod "nginx-deployment-55fb7cb77f-hjx9w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hjx9w,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-55fb7cb77f-hjx9w,UID:f54d3ae0-fe51-4940-bcc8-25549da1a7c3,ResourceVersion:10491787,Generation:0,CreationTimestamp:2020-05-12 13:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 82eef196-5df5-4a28-bbfe-c667396901e0 0xc000ea40a7 0xc000ea40a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea4120} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea4140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 13:55:01 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.564: INFO: Pod "nginx-deployment-55fb7cb77f-qjbhz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qjbhz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-55fb7cb77f-qjbhz,UID:2b16e73e-1ddf-466a-b188-a11b2fc5f634,ResourceVersion:10491793,Generation:0,CreationTimestamp:2020-05-12 13:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 82eef196-5df5-4a28-bbfe-c667396901e0 0xc000ea4217 0xc000ea4218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea4290} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea42b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-12 13:55:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.564: INFO: Pod "nginx-deployment-55fb7cb77f-vtbbb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vtbbb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-55fb7cb77f-vtbbb,UID:8d10bebe-6b07-484e-9ca9-33732b78c4e9,ResourceVersion:10491874,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 82eef196-5df5-4a28-bbfe-c667396901e0 0xc000ea4387 0xc000ea4388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc000ea4400} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea4420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.564: INFO: Pod "nginx-deployment-55fb7cb77f-wg8zh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wg8zh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-55fb7cb77f-wg8zh,UID:745584d9-9067-48a3-9806-d0c8b45e9f84,ResourceVersion:10491811,Generation:0,CreationTimestamp:2020-05-12 13:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 82eef196-5df5-4a28-bbfe-c667396901e0 0xc000ea44a7 0xc000ea44a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea4520} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea4540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 13:55:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.564: INFO: Pod "nginx-deployment-55fb7cb77f-z4cx5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z4cx5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-55fb7cb77f-z4cx5,UID:343940c3-3a98-46d4-bf69-a77a29e3eb1e,ResourceVersion:10491932,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 82eef196-5df5-4a28-bbfe-c667396901e0 0xc000ea4617 0xc000ea4618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc000ea4690} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea46b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 13:55:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.565: INFO: Pod "nginx-deployment-55fb7cb77f-z8b54" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z8b54,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-55fb7cb77f-z8b54,UID:92bafc86-32b6-4e5a-9157-f243760c256a,ResourceVersion:10491801,Generation:0,CreationTimestamp:2020-05-12 13:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 82eef196-5df5-4a28-bbfe-c667396901e0 0xc000ea4787 0xc000ea4788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea4800} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea4820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 13:55:01 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.565: INFO: Pod "nginx-deployment-7b8c6f4498-2svkf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2svkf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-2svkf,UID:6a40f83f-ccbb-482b-9766-7a679d90f4e0,ResourceVersion:10491727,Generation:0,CreationTimestamp:2020-05-12 13:54:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea48f7 0xc000ea48f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea4970} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea4990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.77,StartTime:2020-05-12 13:54:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 13:54:57 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://43012d141da5153af16aebdc5fe53d499c52db1ca7e5c9ab185f8b3065f37dc1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.565: INFO: Pod "nginx-deployment-7b8c6f4498-4jljw" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4jljw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-4jljw,UID:78171a56-4c21-4e15-ab97-0b566d6a4e4b,ResourceVersion:10491751,Generation:0,CreationTimestamp:2020-05-12 13:54:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea4a67 0xc000ea4a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea4ae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea4b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.79,StartTime:2020-05-12 13:54:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 13:54:59 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7d4736fa3c5ea75ea2b4da969915428da1b3a5cb186672cde61640a32d4bdbf4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.565: INFO: Pod "nginx-deployment-7b8c6f4498-62dpz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-62dpz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-62dpz,UID:c6a24d49-6793-4929-aa5d-1bbefc778a9a,ResourceVersion:10491913,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea4bd7 0xc000ea4bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea4c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea4c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 13:55:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.565: INFO: Pod "nginx-deployment-7b8c6f4498-6pfv8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6pfv8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-6pfv8,UID:959d378e-2cb1-4ede-b968-085a269f3813,ResourceVersion:10491869,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea4d37 0xc000ea4d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea4db0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea4dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.565: INFO: Pod "nginx-deployment-7b8c6f4498-7p59b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7p59b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-7p59b,UID:9dbbc999-653c-46ee-8c0b-4d49ab10fb19,ResourceVersion:10491744,Generation:0,CreationTimestamp:2020-05-12 13:54:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea4e57 0xc000ea4e58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea4ed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea4ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.74,StartTime:2020-05-12 13:54:47 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-05-12 13:55:00 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1b220fee8655ede61791666e8f02db565c1449aea690bb02f315d6ba3c679c04}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.565: INFO: Pod "nginx-deployment-7b8c6f4498-8j8vf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8j8vf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-8j8vf,UID:51f666d8-1ee9-40a8-bda5-87aa1b11c737,ResourceVersion:10491911,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea4fc7 0xc000ea4fc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea5040} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea5060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-12 13:55:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.566: INFO: Pod "nginx-deployment-7b8c6f4498-bpn5w" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bpn5w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-bpn5w,UID:518a9696-522b-499f-9589-66339f404015,ResourceVersion:10491701,Generation:0,CreationTimestamp:2020-05-12 13:54:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea5127 0xc000ea5128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea51a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea51c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.76,StartTime:2020-05-12 13:54:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 13:54:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://14eb5b18d05f33cdfbd4c8998531e6039da1b7fe98ece1ebb9d87b8425e25892}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.566: INFO: Pod "nginx-deployment-7b8c6f4498-c2zts" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c2zts,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-c2zts,UID:d10f8052-e87c-44f5-b4bf-7465b801fc0b,ResourceVersion:10491757,Generation:0,CreationTimestamp:2020-05-12 13:54:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea5297 0xc000ea5298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea5310} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea5330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.80,StartTime:2020-05-12 13:54:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 13:54:59 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://65cb5f96f2d75c11691730374c3c07b1e9c48b256d36d9780a47a17721cd3e65}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.566: INFO: Pod "nginx-deployment-7b8c6f4498-frhgp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-frhgp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-frhgp,UID:dc2f47db-9221-49b3-a7c4-22c28ff47815,ResourceVersion:10491900,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea5407 0xc000ea5408}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea5480} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea54b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-12 13:55:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.566: INFO: Pod "nginx-deployment-7b8c6f4498-gbv94" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gbv94,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-gbv94,UID:d66825cd-94cc-4d3b-a548-73f06e630a5a,ResourceVersion:10491883,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea55a7 0xc000ea55a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea5620} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea5640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 13:55:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.566: INFO: Pod "nginx-deployment-7b8c6f4498-gc4pv" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gc4pv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-gc4pv,UID:5070571c-eeaa-4bf8-a591-2078d672c35d,ResourceVersion:10491726,Generation:0,CreationTimestamp:2020-05-12 13:54:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea5707 0xc000ea5708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea5780} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea57a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.71,StartTime:2020-05-12 13:54:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 13:54:56 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f7cef8ae6605b70e6675fa9ee95cdd5431ee76f052b3bce7158318334cb8fae0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.566: INFO: Pod "nginx-deployment-7b8c6f4498-h8wtf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h8wtf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-h8wtf,UID:ecae35db-a5a1-4056-8ba7-bc0dd08dcdb0,ResourceVersion:10491872,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea5877 0xc000ea5878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea58f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea5910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.566: INFO: Pod "nginx-deployment-7b8c6f4498-j4q6d" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j4q6d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-j4q6d,UID:421eeee4-6f65-43bc-9c59-7bdbcb89ecbf,ResourceVersion:10491716,Generation:0,CreationTimestamp:2020-05-12 13:54:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea5997 0xc000ea5998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea5a10} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea5a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.70,StartTime:2020-05-12 13:54:47 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-05-12 13:54:55 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f2ccad875cebe6f8db6531bbb104ebed020eb93270839e18d0798b3073ecc643}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.567: INFO: Pod "nginx-deployment-7b8c6f4498-jfr4g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jfr4g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-jfr4g,UID:cad54f07-5649-4a41-bb31-a1a969c3905b,ResourceVersion:10491933,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea5b17 0xc000ea5b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea5b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea5bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-12 13:55:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.567: INFO: Pod "nginx-deployment-7b8c6f4498-nmwz4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nmwz4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-nmwz4,UID:1fed17c2-6bb0-48db-854b-336020b2ab6b,ResourceVersion:10491926,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea5c77 0xc000ea5c78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea5d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea5d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 13:55:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.567: INFO: Pod "nginx-deployment-7b8c6f4498-nq7nz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nq7nz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-nq7nz,UID:2676c76e-d20c-4077-a779-d2e034895c8b,ResourceVersion:10491870,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea5e07 0xc000ea5e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea5e80} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea5ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.567: INFO: Pod "nginx-deployment-7b8c6f4498-qm9hl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qm9hl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-qm9hl,UID:1c3930cb-ed9a-4486-ba24-02d420c4ebec,ResourceVersion:10491750,Generation:0,CreationTimestamp:2020-05-12 13:54:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc000ea5f27 0xc000ea5f28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ea5fa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ea5fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:54:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.73,StartTime:2020-05-12 13:54:47 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-05-12 13:54:59 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://11c60cef70c95f1a6d7bffbf664398c40061704e75bdcf1ff54a560b4c87060b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.567: INFO: Pod "nginx-deployment-7b8c6f4498-tk2r8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tk2r8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-tk2r8,UID:e1ae1367-5efc-4a49-8f61-1b9d9cef15c5,ResourceVersion:10491892,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc002686097 0xc002686098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002686110} {node.kubernetes.io/unreachable Exists NoExecute 0xc002686130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 13:55:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.567: INFO: Pod "nginx-deployment-7b8c6f4498-v2ngr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v2ngr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-v2ngr,UID:15d6202f-f82d-4cc6-bc9d-7d40195cb6ef,ResourceVersion:10491861,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc0026861f7 0xc0026861f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002686270} {node.kubernetes.io/unreachable Exists NoExecute 0xc002686290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-12 13:55:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 13:55:08.567: INFO: Pod "nginx-deployment-7b8c6f4498-vgcwz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vgcwz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1795,SelfLink:/api/v1/namespaces/deployment-1795/pods/nginx-deployment-7b8c6f4498-vgcwz,UID:7f4c5229-ff0f-44ea-8c70-77e6e2badc10,ResourceVersion:10491925,Generation:0,CreationTimestamp:2020-05-12 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e95da3f7-8727-45e0-8d37-36b7841bc5bf 0xc002686357 0xc002686358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dj92 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dj92,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7dj92 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026863d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026863f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 13:55:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-12 13:55:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:55:08.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1795" for this suite. 
May 12 13:55:33.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:55:33.931: INFO: namespace deployment-1795 deletion completed in 25.067927866s • [SLOW TEST:47.059 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:55:33.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode May 12 13:55:34.551: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7012" to be "success or failure" May 12 13:55:34.561: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.589548ms May 12 13:55:37.024: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.472418856s May 12 13:55:39.027: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476037537s May 12 13:55:41.031: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.479686147s May 12 13:55:43.035: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=true. Elapsed: 8.483720489s May 12 13:55:45.038: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=true. Elapsed: 10.48699459s May 12 13:55:47.042: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 12.490553183s May 12 13:55:49.046: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.494463655s STEP: Saw pod success May 12 13:55:49.046: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 12 13:55:49.048: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 12 13:55:49.083: INFO: Waiting for pod pod-host-path-test to disappear May 12 13:55:49.126: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:55:49.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7012" for this suite. 
May 12 13:55:55.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:55:55.220: INFO: namespace hostpath-7012 deletion completed in 6.090689609s • [SLOW TEST:21.288 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:55:55.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 12 13:55:55.487: INFO: Waiting up to 5m0s for pod "pod-73269f72-4945-4641-bb16-c6595b35dacf" in namespace "emptydir-9087" to be "success or failure" May 12 13:55:55.510: INFO: Pod "pod-73269f72-4945-4641-bb16-c6595b35dacf": Phase="Pending", Reason="", readiness=false. Elapsed: 23.081218ms May 12 13:55:57.514: INFO: Pod "pod-73269f72-4945-4641-bb16-c6595b35dacf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026965823s May 12 13:55:59.518: INFO: Pod "pod-73269f72-4945-4641-bb16-c6595b35dacf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030599745s STEP: Saw pod success May 12 13:55:59.518: INFO: Pod "pod-73269f72-4945-4641-bb16-c6595b35dacf" satisfied condition "success or failure" May 12 13:55:59.520: INFO: Trying to get logs from node iruya-worker2 pod pod-73269f72-4945-4641-bb16-c6595b35dacf container test-container: STEP: delete the pod May 12 13:55:59.553: INFO: Waiting for pod pod-73269f72-4945-4641-bb16-c6595b35dacf to disappear May 12 13:55:59.594: INFO: Pod pod-73269f72-4945-4641-bb16-c6595b35dacf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:55:59.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9087" for this suite. May 12 13:56:05.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:56:05.668: INFO: namespace emptydir-9087 deletion completed in 6.07173462s • [SLOW TEST:10.448 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:56:05.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support 
(root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 12 13:56:05.823: INFO: Waiting up to 5m0s for pod "pod-cf2ecb4b-1d94-4c99-8101-571c9505322d" in namespace "emptydir-1940" to be "success or failure" May 12 13:56:05.862: INFO: Pod "pod-cf2ecb4b-1d94-4c99-8101-571c9505322d": Phase="Pending", Reason="", readiness=false. Elapsed: 39.555196ms May 12 13:56:07.866: INFO: Pod "pod-cf2ecb4b-1d94-4c99-8101-571c9505322d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04334479s May 12 13:56:09.870: INFO: Pod "pod-cf2ecb4b-1d94-4c99-8101-571c9505322d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047350374s STEP: Saw pod success May 12 13:56:09.870: INFO: Pod "pod-cf2ecb4b-1d94-4c99-8101-571c9505322d" satisfied condition "success or failure" May 12 13:56:09.873: INFO: Trying to get logs from node iruya-worker pod pod-cf2ecb4b-1d94-4c99-8101-571c9505322d container test-container: STEP: delete the pod May 12 13:56:09.983: INFO: Waiting for pod pod-cf2ecb4b-1d94-4c99-8101-571c9505322d to disappear May 12 13:56:10.557: INFO: Pod pod-cf2ecb4b-1d94-4c99-8101-571c9505322d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:56:10.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1940" for this suite. 
May 12 13:56:16.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:56:16.782: INFO: namespace emptydir-1940 deletion completed in 6.220139882s • [SLOW TEST:11.114 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:56:16.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-4089/secret-test-c010d210-8d26-4943-b989-36c8ab706580 STEP: Creating a pod to test consume secrets May 12 13:56:16.876: INFO: Waiting up to 5m0s for pod "pod-configmaps-b03f2d57-6f94-47c9-b162-5300ebe4246e" in namespace "secrets-4089" to be "success or failure" May 12 13:56:16.885: INFO: Pod "pod-configmaps-b03f2d57-6f94-47c9-b162-5300ebe4246e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.634405ms May 12 13:56:18.976: INFO: Pod "pod-configmaps-b03f2d57-6f94-47c9-b162-5300ebe4246e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.100116921s May 12 13:56:20.979: INFO: Pod "pod-configmaps-b03f2d57-6f94-47c9-b162-5300ebe4246e": Phase="Running", Reason="", readiness=true. Elapsed: 4.103758582s May 12 13:56:22.983: INFO: Pod "pod-configmaps-b03f2d57-6f94-47c9-b162-5300ebe4246e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107329743s STEP: Saw pod success May 12 13:56:22.983: INFO: Pod "pod-configmaps-b03f2d57-6f94-47c9-b162-5300ebe4246e" satisfied condition "success or failure" May 12 13:56:22.986: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b03f2d57-6f94-47c9-b162-5300ebe4246e container env-test: STEP: delete the pod May 12 13:56:23.027: INFO: Waiting for pod pod-configmaps-b03f2d57-6f94-47c9-b162-5300ebe4246e to disappear May 12 13:56:23.043: INFO: Pod pod-configmaps-b03f2d57-6f94-47c9-b162-5300ebe4246e no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:56:23.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4089" for this suite. 
May 12 13:56:29.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:56:29.122: INFO: namespace secrets-4089 deletion completed in 6.075294075s • [SLOW TEST:12.339 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:56:29.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 12 13:56:29.171: INFO: Waiting up to 5m0s for pod "pod-672840e4-d0ba-43f2-afe2-e67f53c42e8f" in namespace "emptydir-3709" to be "success or failure" May 12 13:56:29.227: INFO: Pod "pod-672840e4-d0ba-43f2-afe2-e67f53c42e8f": Phase="Pending", Reason="", readiness=false. Elapsed: 56.01086ms May 12 13:56:31.231: INFO: Pod "pod-672840e4-d0ba-43f2-afe2-e67f53c42e8f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.060400759s May 12 13:56:33.235: INFO: Pod "pod-672840e4-d0ba-43f2-afe2-e67f53c42e8f": Phase="Running", Reason="", readiness=true. Elapsed: 4.064153728s May 12 13:56:35.294: INFO: Pod "pod-672840e4-d0ba-43f2-afe2-e67f53c42e8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.123169737s STEP: Saw pod success May 12 13:56:35.294: INFO: Pod "pod-672840e4-d0ba-43f2-afe2-e67f53c42e8f" satisfied condition "success or failure" May 12 13:56:35.296: INFO: Trying to get logs from node iruya-worker pod pod-672840e4-d0ba-43f2-afe2-e67f53c42e8f container test-container: STEP: delete the pod May 12 13:56:35.333: INFO: Waiting for pod pod-672840e4-d0ba-43f2-afe2-e67f53c42e8f to disappear May 12 13:56:35.367: INFO: Pod pod-672840e4-d0ba-43f2-afe2-e67f53c42e8f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:56:35.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3709" for this suite. 
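The EmptyDir "(root,0666,default)" case verifies file permissions inside an emptyDir volume on the default medium (node-local disk). A rough equivalent of what the test pod does, sketched with busybox rather than the suite's actual mounttest image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # create a file with mode 0666 and print its permissions for verification
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}             # default medium; `medium: Memory` would use tmpfs instead
```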
May 12 13:56:41.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:56:41.625: INFO: namespace emptydir-3709 deletion completed in 6.253682064s • [SLOW TEST:12.503 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:56:41.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:56:48.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5657" for this suite. 
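The Kubelet hostAliases test that follows checks that entries declared in `pod.spec.hostAliases` are written into the container's `/etc/hosts`. The feature under test looks like this (hostnames and IP are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod      # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: check-hosts
    image: busybox
    command: ["cat", "/etc/hosts"]   # the test asserts the aliases appear here
```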
May 12 13:57:28.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:57:28.241: INFO: namespace kubelet-test-5657 deletion completed in 40.184171492s • [SLOW TEST:46.616 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:57:28.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-112be132-bb35-4930-b39d-d41bf372e775 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:57:37.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6677" for this suite. 
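The ConfigMap "binary data should be reflected in volume" test exercises the `binaryData` field alongside the usual `data` field; both are projected as files when the ConfigMap is mounted as a volume. A minimal sketch (name and payload are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-test    # hypothetical; the suite uses a generated name
data:
  data-1: value-1                # text key, UTF-8
binaryData:
  dump.bin: aGVsbG8gd29ybGQK    # base64-encoded arbitrary bytes
```

Mounted via a `configMap` volume source, `data-1` and `dump.bin` each become a file under the mount path, which is what the "Waiting for pod with text data / binary data" steps verify.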
May 12 13:58:01.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:58:01.645: INFO: namespace configmap-6677 deletion completed in 24.383965496s • [SLOW TEST:33.403 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:58:01.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 12 13:58:02.188: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 13:58:02.194: INFO: Waiting for terminating namespaces to be deleted... 
May 12 13:58:02.197: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 12 13:58:02.201: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 12 13:58:02.201: INFO: Container kube-proxy ready: true, restart count 0 May 12 13:58:02.201: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 12 13:58:02.201: INFO: Container kindnet-cni ready: true, restart count 0 May 12 13:58:02.201: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 12 13:58:02.206: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 12 13:58:02.206: INFO: Container coredns ready: true, restart count 0 May 12 13:58:02.206: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 12 13:58:02.206: INFO: Container coredns ready: true, restart count 0 May 12 13:58:02.206: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 12 13:58:02.206: INFO: Container kube-proxy ready: true, restart count 0 May 12 13:58:02.206: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 12 13:58:02.206: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 May 12 13:58:02.791: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 May 12 13:58:02.791: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 May 12 13:58:02.791: INFO: Pod 
kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker May 12 13:58:02.791: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 May 12 13:58:02.791: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker May 12 13:58:02.791: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-18417828-da0f-459c-84ec-283e3e7c1023.160e4c8c39b2faf6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6250/filler-pod-18417828-da0f-459c-84ec-283e3e7c1023 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-18417828-da0f-459c-84ec-283e3e7c1023.160e4c8cd9329cc4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-18417828-da0f-459c-84ec-283e3e7c1023.160e4c8dcfd0d69f], Reason = [Created], Message = [Created container filler-pod-18417828-da0f-459c-84ec-283e3e7c1023] STEP: Considering event: Type = [Normal], Name = [filler-pod-18417828-da0f-459c-84ec-283e3e7c1023.160e4c8e2ed6022f], Reason = [Started], Message = [Started container filler-pod-18417828-da0f-459c-84ec-283e3e7c1023] STEP: Considering event: Type = [Normal], Name = [filler-pod-d5a53874-155b-427b-82ce-98d92e254343.160e4c8c4ad62209], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6250/filler-pod-d5a53874-155b-427b-82ce-98d92e254343 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-d5a53874-155b-427b-82ce-98d92e254343.160e4c8cf5d88484], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d5a53874-155b-427b-82ce-98d92e254343.160e4c8dc4a0a456], Reason = [Created], Message = [Created 
container filler-pod-d5a53874-155b-427b-82ce-98d92e254343] STEP: Considering event: Type = [Normal], Name = [filler-pod-d5a53874-155b-427b-82ce-98d92e254343.160e4c8de0acd716], Reason = [Started], Message = [Started container filler-pod-d5a53874-155b-427b-82ce-98d92e254343] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e4c8eadec15ee], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:58:15.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6250" for this suite. 
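The SchedulerPredicates case above works by saturating each node's allocatable CPU with "filler" pause pods, then creating one more pod whose request cannot fit anywhere; the expected outcome is the `FailedScheduling` event with "Insufficient cpu" seen in the log. A filler pod is essentially just a resource request (the actual CPU figure is computed by the suite from node allocatable, so the value below is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod           # the suite appends a UUID to this
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "800m"          # illustrative; chosen to consume remaining allocatable CPU
      limits:
        cpu: "800m"
```

Note the failure message also counts the control-plane node ("1 node(s) had taints that the pod didn't tolerate"), which is never a scheduling candidate here.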
May 12 13:58:26.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:58:26.327: INFO: namespace sched-pred-6250 deletion completed in 10.342682297s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:24.681 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:58:26.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 12 13:58:26.821: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:58:40.229: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "init-container-298" for this suite. May 12 13:59:06.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:59:06.576: INFO: namespace init-container-298 deletion completed in 26.123225613s • [SLOW TEST:40.249 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:59:06.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 13:59:06.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 
--image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6749' May 12 13:59:10.388: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 13:59:10.388: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 12 13:59:10.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-6749' May 12 13:59:10.728: INFO: stderr: "" May 12 13:59:10.728: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:59:10.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6749" for this suite. 
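The deprecation warning in the log (`kubectl run --generator=job/v1 is DEPRECATED`) points at `kubectl create job` as the replacement. The Job the test creates is equivalent to a manifest along these lines (fields inferred from the command in the log; spec defaults omitted):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
```

In later kubectl versions the same thing is created with `kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine`.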
May 12 13:59:17.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:59:17.111: INFO: namespace kubectl-6749 deletion completed in 6.270250224s • [SLOW TEST:10.534 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:59:17.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 13:59:27.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-81" for this suite. 
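The EmptyDir wrapper "should not conflict" test mounts a Secret volume and an emptyDir/ConfigMap volume in the same pod and verifies their atomic-writer wrapper directories do not collide. The shape of the pod under test, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-configmap    # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret       # hypothetical
  - name: configmap-volume
    configMap:
      name: wrapper-configmap          # hypothetical
```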
May 12 13:59:33.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 13:59:33.518: INFO: namespace emptydir-wrapper-81 deletion completed in 6.149264964s • [SLOW TEST:16.407 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 13:59:33.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-0799cae1-df87-4a65-b5c1-8899cdadcac7 in namespace container-probe-95 May 12 13:59:37.773: INFO: Started pod liveness-0799cae1-df87-4a65-b5c1-8899cdadcac7 in namespace container-probe-95 STEP: checking the pod's current state and verifying that restartCount is present May 12 13:59:37.775: INFO: Initial restart count of pod liveness-0799cae1-df87-4a65-b5c1-8899cdadcac7 is 0 May 12 14:00:04.168: INFO: Restart count of pod 
container-probe-95/liveness-0799cae1-df87-4a65-b5c1-8899cdadcac7 is now 1 (26.393461661s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:00:04.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-95" for this suite. May 12 14:00:12.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:00:12.339: INFO: namespace container-probe-95 deletion completed in 8.089223031s • [SLOW TEST:38.820 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:00:12.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium May 12 14:00:12.519: INFO: Waiting up to 5m0s for pod "pod-2d21c2d2-bd7b-4521-83ae-3e0216464561" in 
namespace "emptydir-4481" to be "success or failure" May 12 14:00:12.614: INFO: Pod "pod-2d21c2d2-bd7b-4521-83ae-3e0216464561": Phase="Pending", Reason="", readiness=false. Elapsed: 95.226003ms May 12 14:00:14.775: INFO: Pod "pod-2d21c2d2-bd7b-4521-83ae-3e0216464561": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255643884s May 12 14:00:16.778: INFO: Pod "pod-2d21c2d2-bd7b-4521-83ae-3e0216464561": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258953158s May 12 14:00:18.782: INFO: Pod "pod-2d21c2d2-bd7b-4521-83ae-3e0216464561": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.262821508s STEP: Saw pod success May 12 14:00:18.782: INFO: Pod "pod-2d21c2d2-bd7b-4521-83ae-3e0216464561" satisfied condition "success or failure" May 12 14:00:18.784: INFO: Trying to get logs from node iruya-worker2 pod pod-2d21c2d2-bd7b-4521-83ae-3e0216464561 container test-container: STEP: delete the pod May 12 14:00:19.021: INFO: Waiting for pod pod-2d21c2d2-bd7b-4521-83ae-3e0216464561 to disappear May 12 14:00:19.063: INFO: Pod pod-2d21c2d2-bd7b-4521-83ae-3e0216464561 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:00:19.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4481" for this suite. 
May 12 14:00:25.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:00:25.136: INFO: namespace emptydir-4481 deletion completed in 6.070018482s • [SLOW TEST:12.797 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:00:25.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-5gm4t in namespace proxy-8940 I0512 14:00:25.460444 6 runners.go:180] Created replication controller with name: proxy-service-5gm4t, namespace: proxy-8940, replica count: 1 I0512 14:00:26.510906 6 runners.go:180] proxy-service-5gm4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 14:00:27.511111 6 runners.go:180] proxy-service-5gm4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 14:00:28.511309 6 runners.go:180] proxy-service-5gm4t 
Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 14:00:29.511488 6 runners.go:180] proxy-service-5gm4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 14:00:30.511653 6 runners.go:180] proxy-service-5gm4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 14:00:31.511835 6 runners.go:180] proxy-service-5gm4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 14:00:32.512021 6 runners.go:180] proxy-service-5gm4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 14:00:33.512217 6 runners.go:180] proxy-service-5gm4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 14:00:34.512419 6 runners.go:180] proxy-service-5gm4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 14:00:35.512592 6 runners.go:180] proxy-service-5gm4t Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 14:00:35.516: INFO: setup took 10.210529902s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 12 14:00:35.522: INFO: (0) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 6.470658ms) May 12 14:00:35.524: INFO: (0) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... 
(200; 8.087973ms) May 12 14:00:35.524: INFO: (0) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 8.547863ms) May 12 14:00:35.525: INFO: (0) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname2/proxy/: bar (200; 9.059501ms) May 12 14:00:35.525: INFO: (0) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:1080/proxy/: test<... (200; 9.327857ms) May 12 14:00:35.525: INFO: (0) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 9.494216ms) May 12 14:00:35.525: INFO: (0) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 9.487168ms) May 12 14:00:35.525: INFO: (0) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 9.477334ms) May 12 14:00:35.525: INFO: (0) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 9.478913ms) May 12 14:00:35.526: INFO: (0) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname1/proxy/: foo (200; 9.690291ms) May 12 14:00:35.530: INFO: (0) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 14.196387ms) May 12 14:00:35.531: INFO: (0) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: ... (200; 2.78767ms) May 12 14:00:35.536: INFO: (1) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 3.777117ms) May 12 14:00:35.537: INFO: (1) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:460/proxy/: tls baz (200; 3.609432ms) May 12 14:00:35.537: INFO: (1) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 3.707278ms) May 12 14:00:35.537: INFO: (1) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:1080/proxy/: test<... 
(200; 4.047109ms) May 12 14:00:35.537: INFO: (1) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:462/proxy/: tls qux (200; 4.152229ms) May 12 14:00:35.537: INFO: (1) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname2/proxy/: tls qux (200; 4.280457ms) May 12 14:00:35.537: INFO: (1) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 4.232222ms) May 12 14:00:35.538: INFO: (1) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 4.724723ms) May 12 14:00:35.538: INFO: (1) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname1/proxy/: foo (200; 4.710454ms) May 12 14:00:35.538: INFO: (1) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname1/proxy/: tls baz (200; 4.71719ms) May 12 14:00:35.538: INFO: (1) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname2/proxy/: bar (200; 4.828487ms) May 12 14:00:35.538: INFO: (1) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 4.797948ms) May 12 14:00:35.541: INFO: (2) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 3.264875ms) May 12 14:00:35.541: INFO: (2) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 3.289259ms) May 12 14:00:35.541: INFO: (2) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:1080/proxy/: test<... (200; 3.398158ms) May 12 14:00:35.542: INFO: (2) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 4.069041ms) May 12 14:00:35.542: INFO: (2) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname2/proxy/: bar (200; 4.561248ms) May 12 14:00:35.542: INFO: (2) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 4.602665ms) May 12 14:00:35.542: INFO: (2) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... 
(200; 4.624032ms) May 12 14:00:35.543: INFO: (2) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: test (200; 5.323658ms) May 12 14:00:35.543: INFO: (2) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:460/proxy/: tls baz (200; 5.354385ms) May 12 14:00:35.543: INFO: (2) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 5.297486ms) May 12 14:00:35.543: INFO: (2) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 5.3471ms) May 12 14:00:35.543: INFO: (2) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname1/proxy/: foo (200; 5.321813ms) May 12 14:00:35.544: INFO: (2) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname2/proxy/: tls qux (200; 5.760831ms) May 12 14:00:35.546: INFO: (3) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 2.822181ms) May 12 14:00:35.547: INFO: (3) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... (200; 2.921163ms) May 12 14:00:35.547: INFO: (3) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 2.997333ms) May 12 14:00:35.547: INFO: (3) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: test<... 
(200; 5.242624ms) May 12 14:00:35.549: INFO: (3) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 5.246901ms) May 12 14:00:35.549: INFO: (3) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 5.23892ms) May 12 14:00:35.549: INFO: (3) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:460/proxy/: tls baz (200; 5.273363ms) May 12 14:00:35.549: INFO: (3) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 5.330571ms) May 12 14:00:35.549: INFO: (3) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:462/proxy/: tls qux (200; 5.274317ms) May 12 14:00:35.552: INFO: (4) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: ... (200; 5.201021ms) May 12 14:00:35.554: INFO: (4) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 5.175432ms) May 12 14:00:35.554: INFO: (4) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname1/proxy/: tls baz (200; 5.379638ms) May 12 14:00:35.554: INFO: (4) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:1080/proxy/: test<... (200; 5.412831ms) May 12 14:00:35.554: INFO: (4) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname2/proxy/: tls qux (200; 5.464805ms) May 12 14:00:35.555: INFO: (4) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 5.467888ms) May 12 14:00:35.555: INFO: (4) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 5.627316ms) May 12 14:00:35.557: INFO: (5) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: test<... 
(200; 3.418091ms) May 12 14:00:35.558: INFO: (5) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 3.575154ms) May 12 14:00:35.558: INFO: (5) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 3.542526ms) May 12 14:00:35.558: INFO: (5) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 3.661113ms) May 12 14:00:35.559: INFO: (5) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname1/proxy/: foo (200; 3.854323ms) May 12 14:00:35.559: INFO: (5) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 4.124423ms) May 12 14:00:35.559: INFO: (5) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... (200; 4.158506ms) May 12 14:00:35.559: INFO: (5) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 4.176713ms) May 12 14:00:35.559: INFO: (5) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:462/proxy/: tls qux (200; 4.486069ms) May 12 14:00:35.559: INFO: (5) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname2/proxy/: tls qux (200; 4.495282ms) May 12 14:00:35.559: INFO: (5) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname1/proxy/: tls baz (200; 4.640042ms) May 12 14:00:35.559: INFO: (5) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname2/proxy/: bar (200; 4.726822ms) May 12 14:00:35.563: INFO: (6) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 3.795885ms) May 12 14:00:35.563: INFO: (6) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:1080/proxy/: test<... 
(200; 3.965084ms) May 12 14:00:35.563: INFO: (6) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 3.964298ms) May 12 14:00:35.564: INFO: (6) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname2/proxy/: bar (200; 4.05933ms) May 12 14:00:35.564: INFO: (6) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: ... (200; 5.110813ms) May 12 14:00:35.565: INFO: (6) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 5.112254ms) May 12 14:00:35.565: INFO: (6) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 5.049103ms) May 12 14:00:35.565: INFO: (6) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 5.091317ms) May 12 14:00:35.568: INFO: (7) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:1080/proxy/: test<... (200; 3.03458ms) May 12 14:00:35.568: INFO: (7) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... 
(200; 3.157222ms) May 12 14:00:35.568: INFO: (7) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: test (200; 3.764943ms) May 12 14:00:35.569: INFO: (7) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:462/proxy/: tls qux (200; 4.038193ms) May 12 14:00:35.569: INFO: (7) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 3.958483ms) May 12 14:00:35.569: INFO: (7) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:460/proxy/: tls baz (200; 4.014027ms) May 12 14:00:35.569: INFO: (7) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 4.295121ms) May 12 14:00:35.569: INFO: (7) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname1/proxy/: tls baz (200; 4.39334ms) May 12 14:00:35.569: INFO: (7) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 4.470043ms) May 12 14:00:35.569: INFO: (7) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname2/proxy/: bar (200; 4.600673ms) May 12 14:00:35.569: INFO: (7) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname1/proxy/: foo (200; 4.612133ms) May 12 14:00:35.569: INFO: (7) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname2/proxy/: tls qux (200; 4.634804ms) May 12 14:00:35.572: INFO: (8) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 2.93419ms) May 12 14:00:35.572: INFO: (8) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 2.91791ms) May 12 14:00:35.573: INFO: (8) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:1080/proxy/: test<... (200; 3.01112ms) May 12 14:00:35.573: INFO: (8) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... 
(200; 3.321554ms) May 12 14:00:35.573: INFO: (8) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:460/proxy/: tls baz (200; 3.699515ms) May 12 14:00:35.573: INFO: (8) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: test (200; 3.959248ms) May 12 14:00:35.574: INFO: (8) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname1/proxy/: tls baz (200; 4.514672ms) May 12 14:00:35.574: INFO: (8) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname2/proxy/: bar (200; 4.646024ms) May 12 14:00:35.574: INFO: (8) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 4.617082ms) May 12 14:00:35.574: INFO: (8) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname1/proxy/: foo (200; 4.70363ms) May 12 14:00:35.574: INFO: (8) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 4.684389ms) May 12 14:00:35.574: INFO: (8) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname2/proxy/: tls qux (200; 4.681582ms) May 12 14:00:35.574: INFO: (8) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:462/proxy/: tls qux (200; 4.764757ms) May 12 14:00:35.576: INFO: (9) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 2.171323ms) May 12 14:00:35.578: INFO: (9) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:460/proxy/: tls baz (200; 3.538303ms) May 12 14:00:35.578: INFO: (9) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 3.540213ms) May 12 14:00:35.578: INFO: (9) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:1080/proxy/: test<... (200; 3.933396ms) May 12 14:00:35.579: INFO: (9) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: ... 
(200; 4.153033ms) May 12 14:00:35.579: INFO: (9) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 4.110359ms) May 12 14:00:35.579: INFO: (9) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:462/proxy/: tls qux (200; 4.224985ms) May 12 14:00:35.579: INFO: (9) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname2/proxy/: tls qux (200; 4.202015ms) May 12 14:00:35.580: INFO: (9) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 5.293757ms) May 12 14:00:35.580: INFO: (9) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 5.319861ms) May 12 14:00:35.580: INFO: (9) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname1/proxy/: foo (200; 5.387343ms) May 12 14:00:35.580: INFO: (9) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname1/proxy/: tls baz (200; 5.352411ms) May 12 14:00:35.580: INFO: (9) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname2/proxy/: bar (200; 5.450136ms) May 12 14:00:35.580: INFO: (9) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 5.426883ms) May 12 14:00:35.580: INFO: (9) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 5.648805ms) May 12 14:00:35.584: INFO: (10) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname1/proxy/: tls baz (200; 4.03216ms) May 12 14:00:35.584: INFO: (10) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname1/proxy/: foo (200; 4.173464ms) May 12 14:00:35.585: INFO: (10) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 4.461192ms) May 12 14:00:35.585: INFO: (10) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 4.844685ms) May 12 14:00:35.585: INFO: (10) 
/api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 4.868607ms) May 12 14:00:35.585: INFO: (10) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:460/proxy/: tls baz (200; 5.069519ms) May 12 14:00:35.585: INFO: (10) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:462/proxy/: tls qux (200; 5.100151ms) May 12 14:00:35.585: INFO: (10) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... (200; 5.07492ms) May 12 14:00:35.585: INFO: (10) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname2/proxy/: tls qux (200; 5.12052ms) May 12 14:00:35.585: INFO: (10) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 5.076442ms) May 12 14:00:35.585: INFO: (10) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 5.16799ms) May 12 14:00:35.585: INFO: (10) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 5.235129ms) May 12 14:00:35.585: INFO: (10) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 5.183881ms) May 12 14:00:35.585: INFO: (10) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname2/proxy/: bar (200; 5.275933ms) May 12 14:00:35.586: INFO: (10) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: test<... (200; 5.496333ms) May 12 14:00:35.588: INFO: (11) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 2.796799ms) May 12 14:00:35.589: INFO: (11) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... 
(200; 3.516734ms) May 12 14:00:35.589: INFO: (11) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname2/proxy/: bar (200; 3.587819ms) May 12 14:00:35.589: INFO: (11) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname1/proxy/: tls baz (200; 3.749054ms) May 12 14:00:35.590: INFO: (11) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 3.999916ms) May 12 14:00:35.590: INFO: (11) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 4.258884ms) May 12 14:00:35.590: INFO: (11) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:462/proxy/: tls qux (200; 4.229815ms) May 12 14:00:35.590: INFO: (11) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: test<... (200; 4.539282ms) May 12 14:00:35.590: INFO: (11) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:460/proxy/: tls baz (200; 4.605838ms) May 12 14:00:35.592: INFO: (12) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 2.04071ms) May 12 14:00:35.592: INFO: (12) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... 
(200; 2.039409ms) May 12 14:00:35.593: INFO: (12) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 2.276061ms) May 12 14:00:35.594: INFO: (12) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:462/proxy/: tls qux (200; 4.078555ms) May 12 14:00:35.594: INFO: (12) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 4.172151ms) May 12 14:00:35.595: INFO: (12) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 4.163511ms) May 12 14:00:35.595: INFO: (12) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: test (200; 4.243934ms) May 12 14:00:35.595: INFO: (12) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 4.356205ms) May 12 14:00:35.595: INFO: (12) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:1080/proxy/: test<... (200; 4.274515ms) May 12 14:00:35.595: INFO: (12) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname2/proxy/: bar (200; 4.716806ms) May 12 14:00:35.595: INFO: (12) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname1/proxy/: foo (200; 4.696645ms) May 12 14:00:35.595: INFO: (12) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname1/proxy/: tls baz (200; 4.80549ms) May 12 14:00:35.595: INFO: (12) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname2/proxy/: tls qux (200; 4.907716ms) May 12 14:00:35.599: INFO: (13) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 3.357604ms) May 12 14:00:35.599: INFO: (13) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:460/proxy/: tls baz (200; 3.426846ms) May 12 14:00:35.599: INFO: (13) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 3.452313ms) May 12 14:00:35.599: INFO: (13) 
/api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: test<... (200; 4.280183ms) May 12 14:00:35.600: INFO: (13) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 4.420334ms) May 12 14:00:35.600: INFO: (13) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname1/proxy/: foo (200; 4.654352ms) May 12 14:00:35.600: INFO: (13) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 4.656166ms) May 12 14:00:35.600: INFO: (13) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 5.074293ms) May 12 14:00:35.600: INFO: (13) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 5.069505ms) May 12 14:00:35.600: INFO: (13) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname2/proxy/: tls qux (200; 5.057941ms) May 12 14:00:35.600: INFO: (13) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname1/proxy/: tls baz (200; 5.093722ms) May 12 14:00:35.601: INFO: (13) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... (200; 5.514599ms) May 12 14:00:35.604: INFO: (14) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:1080/proxy/: test<... 
(200; 3.271397ms) May 12 14:00:35.605: INFO: (14) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 4.331048ms) May 12 14:00:35.605: INFO: (14) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname1/proxy/: tls baz (200; 4.361472ms) May 12 14:00:35.606: INFO: (14) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 4.685939ms) May 12 14:00:35.606: INFO: (14) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname1/proxy/: foo (200; 4.851879ms) May 12 14:00:35.606: INFO: (14) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname2/proxy/: bar (200; 4.940997ms) May 12 14:00:35.606: INFO: (14) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 4.932637ms) May 12 14:00:35.606: INFO: (14) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname2/proxy/: tls qux (200; 4.940697ms) May 12 14:00:35.606: INFO: (14) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 5.029036ms) May 12 14:00:35.606: INFO: (14) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 5.065605ms) May 12 14:00:35.606: INFO: (14) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:460/proxy/: tls baz (200; 5.263623ms) May 12 14:00:35.606: INFO: (14) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: ... 
(200; 5.394722ms) May 12 14:00:35.606: INFO: (14) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 5.574861ms) May 12 14:00:35.610: INFO: (15) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 3.480711ms) May 12 14:00:35.610: INFO: (15) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 3.653467ms) May 12 14:00:35.610: INFO: (15) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:462/proxy/: tls qux (200; 3.900424ms) May 12 14:00:35.611: INFO: (15) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:1080/proxy/: test<... (200; 3.952176ms) May 12 14:00:35.611: INFO: (15) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 3.993133ms) May 12 14:00:35.611: INFO: (15) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 4.143054ms) May 12 14:00:35.611: INFO: (15) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 4.378681ms) May 12 14:00:35.611: INFO: (15) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... (200; 4.450871ms) May 12 14:00:35.611: INFO: (15) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 4.448932ms) May 12 14:00:35.611: INFO: (15) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: ... (200; 5.279328ms) May 12 14:00:35.617: INFO: (16) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 5.308352ms) May 12 14:00:35.617: INFO: (16) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:462/proxy/: tls qux (200; 5.296069ms) May 12 14:00:35.617: INFO: (16) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname1/proxy/: foo (200; 5.301488ms) May 12 14:00:35.617: INFO: (16) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: test<... 
(200; 5.431777ms) May 12 14:00:35.617: INFO: (16) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 5.470416ms) May 12 14:00:35.617: INFO: (16) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:460/proxy/: tls baz (200; 5.40921ms) May 12 14:00:35.617: INFO: (16) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname2/proxy/: tls qux (200; 5.495015ms) May 12 14:00:35.617: INFO: (16) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 5.498623ms) May 12 14:00:35.617: INFO: (16) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 5.55929ms) May 12 14:00:35.620: INFO: (17) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:1080/proxy/: test<... (200; 2.894371ms) May 12 14:00:35.621: INFO: (17) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: test (200; 4.347677ms) May 12 14:00:35.621: INFO: (17) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... 
(200; 4.30992ms) May 12 14:00:35.622: INFO: (17) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname1/proxy/: foo (200; 4.784201ms) May 12 14:00:35.622: INFO: (17) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname1/proxy/: tls baz (200; 4.855403ms) May 12 14:00:35.622: INFO: (17) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname2/proxy/: bar (200; 5.031186ms) May 12 14:00:35.622: INFO: (17) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 5.027193ms) May 12 14:00:35.622: INFO: (17) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname2/proxy/: tls qux (200; 5.019807ms) May 12 14:00:35.622: INFO: (17) /api/v1/namespaces/proxy-8940/services/proxy-service-5gm4t:portname1/proxy/: foo (200; 5.192059ms) May 12 14:00:35.628: INFO: (18) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 5.824059ms) May 12 14:00:35.628: INFO: (18) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 5.819994ms) May 12 14:00:35.628: INFO: (18) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 5.82792ms) May 12 14:00:35.628: INFO: (18) /api/v1/namespaces/proxy-8940/services/https:proxy-service-5gm4t:tlsportname1/proxy/: tls baz (200; 5.942859ms) May 12 14:00:35.628: INFO: (18) /api/v1/namespaces/proxy-8940/services/http:proxy-service-5gm4t:portname2/proxy/: bar (200; 5.890833ms) May 12 14:00:35.628: INFO: (18) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:1080/proxy/: test<... (200; 5.840859ms) May 12 14:00:35.628: INFO: (18) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:462/proxy/: tls qux (200; 5.965626ms) May 12 14:00:35.628: INFO: (18) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... 
(200; 5.872996ms) May 12 14:00:35.628: INFO: (18) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 5.954601ms) May 12 14:00:35.628: INFO: (18) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: test<... (200; 8.547298ms) May 12 14:00:35.638: INFO: (19) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm/proxy/: test (200; 8.605976ms) May 12 14:00:35.638: INFO: (19) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:1080/proxy/: ... (200; 8.556506ms) May 12 14:00:35.638: INFO: (19) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:462/proxy/: tls qux (200; 8.681174ms) May 12 14:00:35.638: INFO: (19) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 8.691319ms) May 12 14:00:35.638: INFO: (19) /api/v1/namespaces/proxy-8940/pods/http:proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 8.624178ms) May 12 14:00:35.638: INFO: (19) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:160/proxy/: foo (200; 8.682946ms) May 12 14:00:35.638: INFO: (19) /api/v1/namespaces/proxy-8940/pods/proxy-service-5gm4t-bvpwm:162/proxy/: bar (200; 8.599824ms) May 12 14:00:35.638: INFO: (19) /api/v1/namespaces/proxy-8940/pods/https:proxy-service-5gm4t-bvpwm:443/proxy/: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod May 12 14:00:48.830: INFO: Pod pod-hostip-582b94db-df5e-49c5-9b46-93bddf9883dd has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12
14:00:48.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4655" for this suite. May 12 14:01:10.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:01:11.096: INFO: namespace pods-4655 deletion completed in 22.262813212s • [SLOW TEST:26.592 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:01:11.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-9a07e6ac-83ad-4eee-8e08-539c6b6a7701 STEP: Creating a pod to test consume secrets May 12 14:01:11.186: INFO: Waiting up to 5m0s for pod "pod-secrets-fb608a68-0ccd-48e4-aaa8-fa87ca9d8796" in namespace "secrets-7455" to be "success or failure" May 12 14:01:11.216: INFO: Pod "pod-secrets-fb608a68-0ccd-48e4-aaa8-fa87ca9d8796": Phase="Pending", Reason="", readiness=false. Elapsed: 30.311264ms May 12 14:01:13.221: INFO: Pod "pod-secrets-fb608a68-0ccd-48e4-aaa8-fa87ca9d8796": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.034799885s May 12 14:01:15.233: INFO: Pod "pod-secrets-fb608a68-0ccd-48e4-aaa8-fa87ca9d8796": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046409649s May 12 14:01:17.237: INFO: Pod "pod-secrets-fb608a68-0ccd-48e4-aaa8-fa87ca9d8796": Phase="Running", Reason="", readiness=true. Elapsed: 6.050730765s May 12 14:01:19.376: INFO: Pod "pod-secrets-fb608a68-0ccd-48e4-aaa8-fa87ca9d8796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.189704326s STEP: Saw pod success May 12 14:01:19.376: INFO: Pod "pod-secrets-fb608a68-0ccd-48e4-aaa8-fa87ca9d8796" satisfied condition "success or failure" May 12 14:01:19.379: INFO: Trying to get logs from node iruya-worker pod pod-secrets-fb608a68-0ccd-48e4-aaa8-fa87ca9d8796 container secret-volume-test: STEP: delete the pod May 12 14:01:19.467: INFO: Waiting for pod pod-secrets-fb608a68-0ccd-48e4-aaa8-fa87ca9d8796 to disappear May 12 14:01:19.543: INFO: Pod pod-secrets-fb608a68-0ccd-48e4-aaa8-fa87ca9d8796 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:01:19.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7455" for this suite. 
May 12 14:01:25.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:01:25.647: INFO: namespace secrets-7455 deletion completed in 6.101578554s • [SLOW TEST:14.551 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:01:25.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:01:31.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9812" for this suite. 
May 12 14:01:55.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:01:55.304: INFO: namespace replication-controller-9812 deletion completed in 24.140630566s • [SLOW TEST:29.657 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:01:55.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:01:55.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2058" for this suite. 
May 12 14:02:04.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:02:04.124: INFO: namespace kubelet-test-2058 deletion completed in 8.089000229s
• [SLOW TEST:8.819 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:02:04.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-eb3533a2-6b64-4a4b-89b1-7e048e19e537
STEP: Creating a pod to test consume configMaps
May 12 14:02:05.211: INFO: Waiting up to 5m0s for pod "pod-configmaps-b5d2e686-c39c-4679-83de-db03df0dbb25" in namespace "configmap-4831" to be "success or failure"
May 12 14:02:05.301: INFO: Pod "pod-configmaps-b5d2e686-c39c-4679-83de-db03df0dbb25": Phase="Pending", Reason="", readiness=false. Elapsed: 89.890399ms
May 12 14:02:07.306: INFO: Pod "pod-configmaps-b5d2e686-c39c-4679-83de-db03df0dbb25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094519342s
May 12 14:02:09.309: INFO: Pod "pod-configmaps-b5d2e686-c39c-4679-83de-db03df0dbb25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098155991s
May 12 14:02:11.313: INFO: Pod "pod-configmaps-b5d2e686-c39c-4679-83de-db03df0dbb25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101803942s
May 12 14:02:13.316: INFO: Pod "pod-configmaps-b5d2e686-c39c-4679-83de-db03df0dbb25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104806949s
STEP: Saw pod success
May 12 14:02:13.316: INFO: Pod "pod-configmaps-b5d2e686-c39c-4679-83de-db03df0dbb25" satisfied condition "success or failure"
May 12 14:02:13.318: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b5d2e686-c39c-4679-83de-db03df0dbb25 container configmap-volume-test:
STEP: delete the pod
May 12 14:02:13.757: INFO: Waiting for pod pod-configmaps-b5d2e686-c39c-4679-83de-db03df0dbb25 to disappear
May 12 14:02:13.957: INFO: Pod pod-configmaps-b5d2e686-c39c-4679-83de-db03df0dbb25 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:02:13.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4831" for this suite.
May 12 14:02:22.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:02:22.132: INFO: namespace configmap-4831 deletion completed in 8.171791418s
• [SLOW TEST:18.007 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:02:22.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-fa26d0e4-05d5-4d9e-b687-e10b489e25f4
STEP: Creating a pod to test consume configMaps
May 12 14:02:22.285: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bbff5959-d44b-4995-9892-ecb50f33f3f5" in namespace "projected-7000" to be "success or failure"
May 12 14:02:22.294: INFO: Pod "pod-projected-configmaps-bbff5959-d44b-4995-9892-ecb50f33f3f5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.174656ms
May 12 14:02:24.298: INFO: Pod "pod-projected-configmaps-bbff5959-d44b-4995-9892-ecb50f33f3f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013141227s
May 12 14:02:26.302: INFO: Pod "pod-projected-configmaps-bbff5959-d44b-4995-9892-ecb50f33f3f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017494413s
May 12 14:02:28.323: INFO: Pod "pod-projected-configmaps-bbff5959-d44b-4995-9892-ecb50f33f3f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038491009s
STEP: Saw pod success
May 12 14:02:28.323: INFO: Pod "pod-projected-configmaps-bbff5959-d44b-4995-9892-ecb50f33f3f5" satisfied condition "success or failure"
May 12 14:02:28.327: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-bbff5959-d44b-4995-9892-ecb50f33f3f5 container projected-configmap-volume-test:
STEP: delete the pod
May 12 14:02:28.415: INFO: Waiting for pod pod-projected-configmaps-bbff5959-d44b-4995-9892-ecb50f33f3f5 to disappear
May 12 14:02:28.479: INFO: Pod pod-projected-configmaps-bbff5959-d44b-4995-9892-ecb50f33f3f5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:02:28.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7000" for this suite.
May 12 14:02:36.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:02:36.991: INFO: namespace projected-7000 deletion completed in 8.508607344s
• [SLOW TEST:14.858 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:02:36.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 12 14:02:50.546: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 14:02:50.641: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 14:02:52.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 14:02:52.646: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 14:02:54.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 14:02:54.646: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 14:02:56.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 14:02:56.644: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 14:02:58.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 14:02:58.644: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 14:03:00.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 14:03:00.665: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 14:03:02.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 14:03:02.645: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 14:03:04.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 14:03:04.644: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 14:03:06.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 14:03:06.646: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 14:03:08.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 14:03:08.954: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 14:03:10.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 14:03:10.696: INFO: Pod pod-with-prestop-exec-hook still exists
May 12 14:03:12.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 12 14:03:12.644: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:03:12.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1359" for this suite.
May 12 14:03:37.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:03:37.553: INFO: namespace container-lifecycle-hook-1359 deletion completed in 24.901495568s
• [SLOW TEST:60.562 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:03:37.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-5379
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5379
STEP: Deleting pre-stop pod
May 12 14:03:53.433: INFO: Saw: {
  "Hostname": "server",
  "Sent": null,
  "Received": {
    "prestop": 1
  },
  "Errors": null,
  "Log": [
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
  ],
  "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:03:53.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5379" for this suite.
May 12 14:04:33.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:04:33.578: INFO: namespace prestop-5379 deletion completed in 40.107480744s
• [SLOW TEST:56.025 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:04:33.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
May 12 14:04:35.409: INFO: Waiting up to 5m0s for pod "var-expansion-f910fb48-e515-46ca-bb48-693d0198fff5" in namespace "var-expansion-3098" to be "success or failure"
May 12 14:04:35.413: INFO: Pod "var-expansion-f910fb48-e515-46ca-bb48-693d0198fff5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.517933ms
May 12 14:04:37.418: INFO: Pod "var-expansion-f910fb48-e515-46ca-bb48-693d0198fff5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008360178s
May 12 14:04:39.499: INFO: Pod "var-expansion-f910fb48-e515-46ca-bb48-693d0198fff5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090036843s
May 12 14:04:41.510: INFO: Pod "var-expansion-f910fb48-e515-46ca-bb48-693d0198fff5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100565519s
May 12 14:04:43.514: INFO: Pod "var-expansion-f910fb48-e515-46ca-bb48-693d0198fff5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105105835s
STEP: Saw pod success
May 12 14:04:43.515: INFO: Pod "var-expansion-f910fb48-e515-46ca-bb48-693d0198fff5" satisfied condition "success or failure"
May 12 14:04:43.517: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-f910fb48-e515-46ca-bb48-693d0198fff5 container dapi-container:
STEP: delete the pod
May 12 14:04:43.669: INFO: Waiting for pod var-expansion-f910fb48-e515-46ca-bb48-693d0198fff5 to disappear
May 12 14:04:43.729: INFO: Pod var-expansion-f910fb48-e515-46ca-bb48-693d0198fff5 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:04:43.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3098" for this suite.
May 12 14:04:51.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:04:51.927: INFO: namespace var-expansion-3098 deletion completed in 8.19337227s
• [SLOW TEST:18.349 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:04:51.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 12 14:04:52.436: INFO: Waiting up to 5m0s for pod "downward-api-f9ec9b11-fe1e-45bb-b410-3c6ee8749b3f" in namespace "downward-api-5428" to be "success or failure"
May 12 14:04:52.487: INFO: Pod "downward-api-f9ec9b11-fe1e-45bb-b410-3c6ee8749b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 51.345883ms
May 12 14:04:54.926: INFO: Pod "downward-api-f9ec9b11-fe1e-45bb-b410-3c6ee8749b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490437476s
May 12 14:04:56.931: INFO: Pod "downward-api-f9ec9b11-fe1e-45bb-b410-3c6ee8749b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494513803s
May 12 14:04:59.847: INFO: Pod "downward-api-f9ec9b11-fe1e-45bb-b410-3c6ee8749b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.410535706s
May 12 14:05:01.851: INFO: Pod "downward-api-f9ec9b11-fe1e-45bb-b410-3c6ee8749b3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.414728003s
STEP: Saw pod success
May 12 14:05:01.851: INFO: Pod "downward-api-f9ec9b11-fe1e-45bb-b410-3c6ee8749b3f" satisfied condition "success or failure"
May 12 14:05:01.853: INFO: Trying to get logs from node iruya-worker pod downward-api-f9ec9b11-fe1e-45bb-b410-3c6ee8749b3f container dapi-container:
STEP: delete the pod
May 12 14:05:02.024: INFO: Waiting for pod downward-api-f9ec9b11-fe1e-45bb-b410-3c6ee8749b3f to disappear
May 12 14:05:02.105: INFO: Pod downward-api-f9ec9b11-fe1e-45bb-b410-3c6ee8749b3f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:05:02.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5428" for this suite.
May 12 14:05:10.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:05:11.285: INFO: namespace downward-api-5428 deletion completed in 9.176855324s
• [SLOW TEST:19.357 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:05:11.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:05:12.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6753" for this suite.
May 12 14:05:18.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:05:19.069: INFO: namespace services-6753 deletion completed in 6.978418757s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:7.784 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:05:19.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 14:05:20.058: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 12 14:05:23.152: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:05:23.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2942" for this suite.
May 12 14:05:33.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:05:33.613: INFO: namespace replication-controller-2942 deletion completed in 10.131822083s
• [SLOW TEST:14.544 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:05:33.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-efb66697-0830-4242-802e-fb401dd6c32e
STEP: Creating a pod to test consume secrets
May 12 14:05:33.999: INFO: Waiting up to 5m0s for pod "pod-secrets-2d59be04-9d1c-4d8f-bb97-8ede3ad6c31e" in namespace "secrets-2158" to be "success or failure"
May 12 14:05:34.003: INFO: Pod "pod-secrets-2d59be04-9d1c-4d8f-bb97-8ede3ad6c31e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.522723ms
May 12 14:05:36.158: INFO: Pod "pod-secrets-2d59be04-9d1c-4d8f-bb97-8ede3ad6c31e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158739078s
May 12 14:05:38.161: INFO: Pod "pod-secrets-2d59be04-9d1c-4d8f-bb97-8ede3ad6c31e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161987576s
May 12 14:05:40.224: INFO: Pod "pod-secrets-2d59be04-9d1c-4d8f-bb97-8ede3ad6c31e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.224213706s
STEP: Saw pod success
May 12 14:05:40.224: INFO: Pod "pod-secrets-2d59be04-9d1c-4d8f-bb97-8ede3ad6c31e" satisfied condition "success or failure"
May 12 14:05:40.226: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-2d59be04-9d1c-4d8f-bb97-8ede3ad6c31e container secret-volume-test:
STEP: delete the pod
May 12 14:05:40.292: INFO: Waiting for pod pod-secrets-2d59be04-9d1c-4d8f-bb97-8ede3ad6c31e to disappear
May 12 14:05:40.397: INFO: Pod pod-secrets-2d59be04-9d1c-4d8f-bb97-8ede3ad6c31e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:05:40.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2158" for this suite.
May 12 14:05:46.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:05:47.021: INFO: namespace secrets-2158 deletion completed in 6.619373064s
• [SLOW TEST:13.407 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:05:47.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 12 14:05:47.094: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b37f320e-2291-41e6-86c3-6852d74630ac" in namespace "downward-api-7680" to be "success or failure"
May 12 14:05:47.146: INFO: Pod "downwardapi-volume-b37f320e-2291-41e6-86c3-6852d74630ac": Phase="Pending", Reason="", readiness=false. Elapsed: 51.661954ms
May 12 14:05:49.851: INFO: Pod "downwardapi-volume-b37f320e-2291-41e6-86c3-6852d74630ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.756781232s
May 12 14:05:52.242: INFO: Pod "downwardapi-volume-b37f320e-2291-41e6-86c3-6852d74630ac": Phase="Pending", Reason="", readiness=false. Elapsed: 5.14839233s
May 12 14:05:54.246: INFO: Pod "downwardapi-volume-b37f320e-2291-41e6-86c3-6852d74630ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.151943573s
STEP: Saw pod success
May 12 14:05:54.246: INFO: Pod "downwardapi-volume-b37f320e-2291-41e6-86c3-6852d74630ac" satisfied condition "success or failure"
May 12 14:05:54.250: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b37f320e-2291-41e6-86c3-6852d74630ac container client-container:
STEP: delete the pod
May 12 14:05:54.399: INFO: Waiting for pod downwardapi-volume-b37f320e-2291-41e6-86c3-6852d74630ac to disappear
May 12 14:05:54.411: INFO: Pod downwardapi-volume-b37f320e-2291-41e6-86c3-6852d74630ac no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:05:54.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7680" for this suite.
May 12 14:06:01.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:06:02.023: INFO: namespace downward-api-7680 deletion completed in 7.609564763s
• [SLOW TEST:15.001 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:06:02.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 12 14:06:02.866: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f5a643b-ae9d-43fc-9aac-22130df21cae" in namespace "downward-api-9762" to be "success or failure"
May 12 14:06:02.910: INFO: Pod "downwardapi-volume-9f5a643b-ae9d-43fc-9aac-22130df21cae": Phase="Pending", Reason="", readiness=false. Elapsed: 43.97375ms
May 12 14:06:04.913: INFO: Pod "downwardapi-volume-9f5a643b-ae9d-43fc-9aac-22130df21cae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047114355s
May 12 14:06:06.930: INFO: Pod "downwardapi-volume-9f5a643b-ae9d-43fc-9aac-22130df21cae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063910987s
May 12 14:06:08.934: INFO: Pod "downwardapi-volume-9f5a643b-ae9d-43fc-9aac-22130df21cae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06826304s
STEP: Saw pod success
May 12 14:06:08.935: INFO: Pod "downwardapi-volume-9f5a643b-ae9d-43fc-9aac-22130df21cae" satisfied condition "success or failure"
May 12 14:06:08.938: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9f5a643b-ae9d-43fc-9aac-22130df21cae container client-container:
STEP: delete the pod
May 12 14:06:09.116: INFO: Waiting for pod downwardapi-volume-9f5a643b-ae9d-43fc-9aac-22130df21cae to disappear
May 12 14:06:09.118: INFO: Pod downwardapi-volume-9f5a643b-ae9d-43fc-9aac-22130df21cae no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:06:09.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9762" for this suite.
May 12 14:06:17.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:06:17.212: INFO: namespace downward-api-9762 deletion completed in 8.089727786s
• [SLOW TEST:15.188 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:06:17.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-8d4b5d18-b5dd-4617-af15-0dbc826fba4d
STEP: Creating a pod to test consume configMaps
May 12 14:06:17.449: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bbdc2edb-1420-4610-8d7a-826b58effbad" in namespace "projected-9808" to be "success or failure"
May 12 14:06:17.565: INFO: Pod "pod-projected-configmaps-bbdc2edb-1420-4610-8d7a-826b58effbad": Phase="Pending", Reason="", readiness=false.
Elapsed: 116.267437ms May 12 14:06:19.569: INFO: Pod "pod-projected-configmaps-bbdc2edb-1420-4610-8d7a-826b58effbad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120372415s May 12 14:06:21.624: INFO: Pod "pod-projected-configmaps-bbdc2edb-1420-4610-8d7a-826b58effbad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175402091s May 12 14:06:23.834: INFO: Pod "pod-projected-configmaps-bbdc2edb-1420-4610-8d7a-826b58effbad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.385280067s May 12 14:06:25.865: INFO: Pod "pod-projected-configmaps-bbdc2edb-1420-4610-8d7a-826b58effbad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.415403849s STEP: Saw pod success May 12 14:06:25.865: INFO: Pod "pod-projected-configmaps-bbdc2edb-1420-4610-8d7a-826b58effbad" satisfied condition "success or failure" May 12 14:06:25.867: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-bbdc2edb-1420-4610-8d7a-826b58effbad container projected-configmap-volume-test: STEP: delete the pod May 12 14:06:26.134: INFO: Waiting for pod pod-projected-configmaps-bbdc2edb-1420-4610-8d7a-826b58effbad to disappear May 12 14:06:26.137: INFO: Pod pod-projected-configmaps-bbdc2edb-1420-4610-8d7a-826b58effbad no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:06:26.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9808" for this suite. 
May 12 14:06:34.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:06:34.785: INFO: namespace projected-9808 deletion completed in 8.28080949s

• [SLOW TEST:17.573 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:06:34.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 12 14:06:34.905: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1842,SelfLink:/api/v1/namespaces/watch-1842/configmaps/e2e-watch-test-resource-version,UID:b93fdca1-c288-4ed3-884b-7d96ef7f38c6,ResourceVersion:10494375,Generation:0,CreationTimestamp:2020-05-12 14:06:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 12 14:06:34.905: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1842,SelfLink:/api/v1/namespaces/watch-1842/configmaps/e2e-watch-test-resource-version,UID:b93fdca1-c288-4ed3-884b-7d96ef7f38c6,ResourceVersion:10494376,Generation:0,CreationTimestamp:2020-05-12 14:06:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:06:34.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1842" for this suite.
May 12 14:06:42.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:06:43.043: INFO: namespace watch-1842 deletion completed in 8.117528133s

• [SLOW TEST:8.257 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:06:43.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
May 12 14:06:43.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9600'
May 12 14:06:44.887: INFO: stderr: ""
May 12 14:06:44.887: INFO: stdout: "pod/pause created\n"
May 12 14:06:44.887: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 12 14:06:44.887: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9600" to be "running and ready"
May 12 14:06:44.915: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 27.424665ms
May 12 14:06:46.919: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032134963s
May 12 14:06:48.923: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036158105s
May 12 14:06:50.926: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.039134284s
May 12 14:06:50.926: INFO: Pod "pause" satisfied condition "running and ready"
May 12 14:06:50.926: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
May 12 14:06:50.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9600'
May 12 14:06:51.039: INFO: stderr: ""
May 12 14:06:51.039: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 12 14:06:51.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9600'
May 12 14:06:51.127: INFO: stderr: ""
May 12 14:06:51.127: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n"
STEP: removing the label testing-label of a pod
May 12 14:06:51.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9600'
May 12 14:06:51.242: INFO: stderr: ""
May 12 14:06:51.242: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 12 14:06:51.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9600'
May 12 14:06:51.333: INFO: stderr: ""
May 12 14:06:51.333: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
May 12 14:06:51.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9600'
May 12 14:06:51.668: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 14:06:51.668: INFO: stdout: "pod \"pause\" force deleted\n"
May 12 14:06:51.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9600'
May 12 14:06:53.112: INFO: stderr: "No resources found.\n"
May 12 14:06:53.112: INFO: stdout: ""
May 12 14:06:53.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9600 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 12 14:06:53.818: INFO: stderr: ""
May 12 14:06:53.818: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:06:53.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9600" for this suite.
May 12 14:07:00.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:07:00.963: INFO: namespace kubectl-9600 deletion completed in 6.527378179s

• [SLOW TEST:17.919 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:07:00.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
May 12 14:07:01.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-641'
May 12 14:07:01.450: INFO: stderr: ""
May 12 14:07:01.450: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
May 12 14:07:02.823: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:07:02.823: INFO: Found 0 / 1
May 12 14:07:03.527: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:07:03.527: INFO: Found 0 / 1
May 12 14:07:04.679: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:07:04.679: INFO: Found 0 / 1
May 12 14:07:05.475: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:07:05.475: INFO: Found 0 / 1
May 12 14:07:06.455: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:07:06.455: INFO: Found 0 / 1
May 12 14:07:07.454: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:07:07.454: INFO: Found 0 / 1
May 12 14:07:08.787: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:07:08.787: INFO: Found 1 / 1
May 12 14:07:08.787: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 12 14:07:08.790: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:07:08.790: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
May 12 14:07:08.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5dsdq redis-master --namespace=kubectl-641'
May 12 14:07:09.053: INFO: stderr: ""
May 12 14:07:09.053: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 May 14:07:07.645 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 May 14:07:07.645 # Server started, Redis version 3.2.12\n1:M 12 May 14:07:07.645 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 May 14:07:07.645 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
May 12 14:07:09.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5dsdq redis-master --namespace=kubectl-641 --tail=1'
May 12 14:07:09.216: INFO: stderr: ""
May 12 14:07:09.216: INFO: stdout: "1:M 12 May 14:07:07.645 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
May 12 14:07:09.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5dsdq redis-master --namespace=kubectl-641 --limit-bytes=1'
May 12 14:07:09.321: INFO: stderr: ""
May 12 14:07:09.321: INFO: stdout: " "
STEP: exposing timestamps
May 12 14:07:09.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5dsdq redis-master --namespace=kubectl-641 --tail=1 --timestamps'
May 12 14:07:09.430: INFO: stderr: ""
May 12 14:07:09.430: INFO: stdout: "2020-05-12T14:07:07.645931883Z 1:M 12 May 14:07:07.645 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
May 12 14:07:11.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5dsdq redis-master --namespace=kubectl-641 --since=1s'
May 12 14:07:12.046: INFO: stderr: ""
May 12 14:07:12.046: INFO: stdout: ""
May 12 14:07:12.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5dsdq redis-master --namespace=kubectl-641 --since=24h'
May 12 14:07:12.168: INFO: stderr: ""
May 12 14:07:12.168: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 May 14:07:07.645 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 May 14:07:07.645 # Server started, Redis version 3.2.12\n1:M 12 May 14:07:07.645 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 May 14:07:07.645 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
May 12 14:07:12.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-641'
May 12 14:07:12.575: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 14:07:12.575: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
May 12 14:07:12.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-641'
May 12 14:07:13.708: INFO: stderr: "No resources found.\n"
May 12 14:07:13.708: INFO: stdout: ""
May 12 14:07:13.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-641 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 12 14:07:14.387: INFO: stderr: ""
May 12 14:07:14.387: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:07:14.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-641" for this suite.
May 12 14:07:36.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:07:37.290: INFO: namespace kubectl-641 deletion completed in 22.898377696s

• [SLOW TEST:36.327 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:07:37.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
May 12 14:07:38.318: INFO: Waiting up to 5m0s for pod "pod-959f750f-edfd-48bf-8046-d24ab36a1b0f" in namespace "emptydir-57" to be "success or failure"
May 12 14:07:38.990: INFO: Pod "pod-959f750f-edfd-48bf-8046-d24ab36a1b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 672.841927ms
May 12 14:07:41.548: INFO: Pod "pod-959f750f-edfd-48bf-8046-d24ab36a1b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.230421956s
May 12 14:07:43.552: INFO: Pod "pod-959f750f-edfd-48bf-8046-d24ab36a1b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.234685958s
May 12 14:07:45.590: INFO: Pod "pod-959f750f-edfd-48bf-8046-d24ab36a1b0f": Phase="Running", Reason="", readiness=true. Elapsed: 7.272133769s
May 12 14:07:47.593: INFO: Pod "pod-959f750f-edfd-48bf-8046-d24ab36a1b0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.275425734s
STEP: Saw pod success
May 12 14:07:47.593: INFO: Pod "pod-959f750f-edfd-48bf-8046-d24ab36a1b0f" satisfied condition "success or failure"
May 12 14:07:47.595: INFO: Trying to get logs from node iruya-worker2 pod pod-959f750f-edfd-48bf-8046-d24ab36a1b0f container test-container:
STEP: delete the pod
May 12 14:07:47.723: INFO: Waiting for pod pod-959f750f-edfd-48bf-8046-d24ab36a1b0f to disappear
May 12 14:07:47.813: INFO: Pod pod-959f750f-edfd-48bf-8046-d24ab36a1b0f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:07:47.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-57" for this suite.
May 12 14:07:53.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:07:53.970: INFO: namespace emptydir-57 deletion completed in 6.154395732s

• [SLOW TEST:16.680 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:07:53.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
May 12 14:07:54.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9256'
May 12 14:07:54.285: INFO: stderr: ""
May 12 14:07:54.285: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 12 14:07:54.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9256'
May 12 14:07:54.471: INFO: stderr: ""
May 12 14:07:54.471: INFO: stdout: "update-demo-nautilus-5zxzf update-demo-nautilus-wx2vm "
May 12 14:07:54.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5zxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9256'
May 12 14:07:54.588: INFO: stderr: ""
May 12 14:07:54.588: INFO: stdout: ""
May 12 14:07:54.588: INFO: update-demo-nautilus-5zxzf is created but not running
May 12 14:07:59.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9256'
May 12 14:07:59.676: INFO: stderr: ""
May 12 14:07:59.676: INFO: stdout: "update-demo-nautilus-5zxzf update-demo-nautilus-wx2vm "
May 12 14:07:59.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5zxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9256'
May 12 14:07:59.768: INFO: stderr: ""
May 12 14:07:59.768: INFO: stdout: "true"
May 12 14:07:59.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5zxzf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9256'
May 12 14:07:59.852: INFO: stderr: ""
May 12 14:07:59.852: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 12 14:07:59.852: INFO: validating pod update-demo-nautilus-5zxzf
May 12 14:07:59.855: INFO: got data: { "image": "nautilus.jpg" }
May 12 14:07:59.855: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 12 14:07:59.855: INFO: update-demo-nautilus-5zxzf is verified up and running
May 12 14:07:59.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wx2vm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9256'
May 12 14:07:59.939: INFO: stderr: ""
May 12 14:07:59.939: INFO: stdout: "true"
May 12 14:07:59.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wx2vm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9256'
May 12 14:08:00.036: INFO: stderr: ""
May 12 14:08:00.036: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 12 14:08:00.036: INFO: validating pod update-demo-nautilus-wx2vm
May 12 14:08:00.039: INFO: got data: { "image": "nautilus.jpg" }
May 12 14:08:00.039: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 12 14:08:00.039: INFO: update-demo-nautilus-wx2vm is verified up and running
STEP: scaling down the replication controller
May 12 14:08:00.041: INFO: scanned /root for discovery docs:
May 12 14:08:00.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9256'
May 12 14:08:01.162: INFO: stderr: ""
May 12 14:08:01.162: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 12 14:08:01.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9256'
May 12 14:08:01.384: INFO: stderr: ""
May 12 14:08:01.384: INFO: stdout: "update-demo-nautilus-5zxzf update-demo-nautilus-wx2vm "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 12 14:08:06.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9256'
May 12 14:08:06.515: INFO: stderr: ""
May 12 14:08:06.515: INFO: stdout: "update-demo-nautilus-wx2vm "
May 12 14:08:06.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wx2vm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9256'
May 12 14:08:06.599: INFO: stderr: ""
May 12 14:08:06.599: INFO: stdout: "true"
May 12 14:08:06.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wx2vm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9256'
May 12 14:08:06.691: INFO: stderr: ""
May 12 14:08:06.691: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 12 14:08:06.691: INFO: validating pod update-demo-nautilus-wx2vm
May 12 14:08:06.694: INFO: got data: { "image": "nautilus.jpg" }
May 12 14:08:06.694: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 12 14:08:06.694: INFO: update-demo-nautilus-wx2vm is verified up and running
STEP: scaling up the replication controller
May 12 14:08:06.696: INFO: scanned /root for discovery docs:
May 12 14:08:06.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9256'
May 12 14:08:08.038: INFO: stderr: ""
May 12 14:08:08.038: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 12 14:08:08.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9256'
May 12 14:08:08.137: INFO: stderr: ""
May 12 14:08:08.137: INFO: stdout: "update-demo-nautilus-2cjkl update-demo-nautilus-wx2vm "
May 12 14:08:08.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2cjkl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9256'
May 12 14:08:09.144: INFO: stderr: ""
May 12 14:08:09.144: INFO: stdout: ""
May 12 14:08:09.144: INFO: update-demo-nautilus-2cjkl is created but not running
May 12 14:08:14.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9256'
May 12 14:08:14.450: INFO: stderr: ""
May 12 14:08:14.450: INFO: stdout: "update-demo-nautilus-2cjkl update-demo-nautilus-wx2vm "
May 12 14:08:14.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2cjkl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9256'
May 12 14:08:14.706: INFO: stderr: ""
May 12 14:08:14.706: INFO: stdout: ""
May 12 14:08:14.706: INFO: update-demo-nautilus-2cjkl is created but not running
May 12 14:08:19.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9256'
May 12 14:08:19.979: INFO: stderr: ""
May 12 14:08:19.979: INFO: stdout: "update-demo-nautilus-2cjkl update-demo-nautilus-wx2vm "
May 12 14:08:19.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2cjkl -o template --template={{if (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9256' May 12 14:08:20.074: INFO: stderr: "" May 12 14:08:20.074: INFO: stdout: "true" May 12 14:08:20.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2cjkl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9256' May 12 14:08:20.172: INFO: stderr: "" May 12 14:08:20.172: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 14:08:20.172: INFO: validating pod update-demo-nautilus-2cjkl May 12 14:08:20.176: INFO: got data: { "image": "nautilus.jpg" } May 12 14:08:20.176: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 14:08:20.176: INFO: update-demo-nautilus-2cjkl is verified up and running May 12 14:08:20.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wx2vm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9256' May 12 14:08:20.279: INFO: stderr: "" May 12 14:08:20.279: INFO: stdout: "true" May 12 14:08:20.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wx2vm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9256' May 12 14:08:20.455: INFO: stderr: "" May 12 14:08:20.456: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 14:08:20.456: INFO: validating pod update-demo-nautilus-wx2vm May 12 14:08:20.503: INFO: got data: { "image": "nautilus.jpg" } May 12 14:08:20.503: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 12 14:08:20.503: INFO: update-demo-nautilus-wx2vm is verified up and running STEP: using delete to clean up resources May 12 14:08:20.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9256' May 12 14:08:20.679: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 14:08:20.679: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 12 14:08:20.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9256' May 12 14:08:20.820: INFO: stderr: "No resources found.\n" May 12 14:08:20.821: INFO: stdout: "" May 12 14:08:20.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9256 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 14:08:20.948: INFO: stderr: "" May 12 14:08:20.948: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:08:20.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9256" for this suite. 
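The scale-down/scale-up cycle above repeats one pattern: scale the replication controller, then poll the `name=update-demo` pod list until it matches the desired count. A minimal sketch of that polling step (hypothetical helper name; assumes `kubectl` on PATH, a working kubeconfig, and the `kubectl-9256` namespace from this run):

```shell
# wait_for_replicas <namespace> <want>: poll until the number of
# name=update-demo pods equals <want> (hypothetical helper; a real
# caller would also bound the number of retries).
wait_for_replicas() {
  ns=$1; want=$2
  until [ "$(kubectl get pods -l name=update-demo -n "$ns" -o name | wc -l)" -eq "$want" ]; do
    sleep 5
  done
}

# e.g. after `kubectl scale rc update-demo-nautilus --replicas=1 -n kubectl-9256`:
# wait_for_replicas kubectl-9256 1
```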
May 12 14:08:45.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:08:45.607: INFO: namespace kubectl-9256 deletion completed in 24.655839234s • [SLOW TEST:51.636 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:08:45.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 12 14:08:45.769: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7375,SelfLink:/api/v1/namespaces/watch-7375/configmaps/e2e-watch-test-watch-closed,UID:595351f3-471c-4eb1-83b0-b4dbcb5bf5f9,ResourceVersion:10494798,Generation:0,CreationTimestamp:2020-05-12 14:08:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 14:08:45.770: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7375,SelfLink:/api/v1/namespaces/watch-7375/configmaps/e2e-watch-test-watch-closed,UID:595351f3-471c-4eb1-83b0-b4dbcb5bf5f9,ResourceVersion:10494799,Generation:0,CreationTimestamp:2020-05-12 14:08:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 12 14:08:45.779: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7375,SelfLink:/api/v1/namespaces/watch-7375/configmaps/e2e-watch-test-watch-closed,UID:595351f3-471c-4eb1-83b0-b4dbcb5bf5f9,ResourceVersion:10494800,Generation:0,CreationTimestamp:2020-05-12 14:08:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 14:08:45.779: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7375,SelfLink:/api/v1/namespaces/watch-7375/configmaps/e2e-watch-test-watch-closed,UID:595351f3-471c-4eb1-83b0-b4dbcb5bf5f9,ResourceVersion:10494801,Generation:0,CreationTimestamp:2020-05-12 14:08:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:08:45.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7375" for this suite. 
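Restarting a watch "from the last resource version observed" corresponds to a watch request that passes that version as the `resourceVersion` query parameter on the collection endpoint. A sketch of building such a request URL (`watch_url` is a hypothetical helper; `$APISERVER` and the bearer token are placeholders, the namespace and version are taken from this run's log):

```shell
# watch_url <apiserver> <namespace> <resourceVersion>: the configmaps
# collection endpoint with watch=1 resumes delivery of events that
# occurred after <resourceVersion>.
watch_url() {
  printf '%s/api/v1/namespaces/%s/configmaps?watch=1&resourceVersion=%s\n' "$1" "$2" "$3"
}

# e.g.: curl -sN -H "Authorization: Bearer $TOKEN" "$(watch_url "$APISERVER" watch-7375 10494799)"
```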
May 12 14:08:51.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:08:51.871: INFO: namespace watch-7375 deletion completed in 6.086787094s • [SLOW TEST:6.263 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:08:51.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 14:08:51.984: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3772c47-2cb2-4038-8703-85feb2cf79fa" in namespace "projected-9924" to be "success or failure" May 12 14:08:52.018: INFO: Pod "downwardapi-volume-d3772c47-2cb2-4038-8703-85feb2cf79fa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.166153ms May 12 14:08:54.022: INFO: Pod "downwardapi-volume-d3772c47-2cb2-4038-8703-85feb2cf79fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037699621s May 12 14:08:56.025: INFO: Pod "downwardapi-volume-d3772c47-2cb2-4038-8703-85feb2cf79fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040932381s May 12 14:08:58.105: INFO: Pod "downwardapi-volume-d3772c47-2cb2-4038-8703-85feb2cf79fa": Phase="Running", Reason="", readiness=true. Elapsed: 6.120745439s May 12 14:09:00.108: INFO: Pod "downwardapi-volume-d3772c47-2cb2-4038-8703-85feb2cf79fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.123923323s STEP: Saw pod success May 12 14:09:00.108: INFO: Pod "downwardapi-volume-d3772c47-2cb2-4038-8703-85feb2cf79fa" satisfied condition "success or failure" May 12 14:09:00.110: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d3772c47-2cb2-4038-8703-85feb2cf79fa container client-container: STEP: delete the pod May 12 14:09:00.127: INFO: Waiting for pod downwardapi-volume-d3772c47-2cb2-4038-8703-85feb2cf79fa to disappear May 12 14:09:00.131: INFO: Pod downwardapi-volume-d3772c47-2cb2-4038-8703-85feb2cf79fa no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:09:00.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9924" for this suite. 
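The pod under test projects downward API fields into a volume and sets an explicit `mode` on an individual item, then verifies the file's permissions. A minimal manifest sketch of that shape (pod name, image, and the 0400 mode are illustrative, not taken from the test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo    # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400           # per-item file mode, the property this test checks
```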
May 12 14:09:08.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:09:08.219: INFO: namespace projected-9924 deletion completed in 8.084051316s • [SLOW TEST:16.348 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:09:08.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:09:14.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3033" for this suite. 
May 12 14:09:21.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:09:21.073: INFO: namespace watch-3033 deletion completed in 6.163610022s • [SLOW TEST:12.854 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:09:21.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 12 14:09:25.844: INFO: Successfully updated pod "annotationupdate55da8f5e-bdff-4ab1-8589-7c63c06c7990" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:09:27.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1434" for this suite. 
May 12 14:09:49.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:09:49.959: INFO: namespace projected-1434 deletion completed in 22.083105808s • [SLOW TEST:28.887 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:09:49.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 14:09:50.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5024cd2-a74d-474d-9225-d207cebc2f70" in namespace "downward-api-843" to be "success or failure" May 12 14:09:50.054: INFO: Pod "downwardapi-volume-d5024cd2-a74d-474d-9225-d207cebc2f70": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.160988ms May 12 14:09:52.057: INFO: Pod "downwardapi-volume-d5024cd2-a74d-474d-9225-d207cebc2f70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006615476s May 12 14:09:54.061: INFO: Pod "downwardapi-volume-d5024cd2-a74d-474d-9225-d207cebc2f70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010818826s STEP: Saw pod success May 12 14:09:54.061: INFO: Pod "downwardapi-volume-d5024cd2-a74d-474d-9225-d207cebc2f70" satisfied condition "success or failure" May 12 14:09:54.064: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d5024cd2-a74d-474d-9225-d207cebc2f70 container client-container: STEP: delete the pod May 12 14:09:54.280: INFO: Waiting for pod downwardapi-volume-d5024cd2-a74d-474d-9225-d207cebc2f70 to disappear May 12 14:09:54.341: INFO: Pod downwardapi-volume-d5024cd2-a74d-474d-9225-d207cebc2f70 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:09:54.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-843" for this suite. 
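When a container declares no memory limit, the downward API volume reports the node's allocatable memory as the default value, which is what this test asserts. A manifest sketch of the relevant `resourceFieldRef` (pod name and image are illustrative; `divisor` is shown for completeness):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memlimit"]
    # no resources.limits.memory set: the projected value falls back to
    # the node's allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memlimit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi              # report the value in mebibytes
```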
May 12 14:10:00.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:10:00.559: INFO: namespace downward-api-843 deletion completed in 6.213584671s • [SLOW TEST:10.599 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:10:00.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 14:10:00.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3aa84bf-1a53-453c-b1eb-05660b391776" in namespace "downward-api-2091" to be "success or failure" May 12 14:10:01.135: INFO: Pod "downwardapi-volume-a3aa84bf-1a53-453c-b1eb-05660b391776": Phase="Pending", Reason="", readiness=false. 
Elapsed: 213.017043ms May 12 14:10:03.139: INFO: Pod "downwardapi-volume-a3aa84bf-1a53-453c-b1eb-05660b391776": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217227052s May 12 14:10:05.143: INFO: Pod "downwardapi-volume-a3aa84bf-1a53-453c-b1eb-05660b391776": Phase="Running", Reason="", readiness=true. Elapsed: 4.221110136s May 12 14:10:07.148: INFO: Pod "downwardapi-volume-a3aa84bf-1a53-453c-b1eb-05660b391776": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.225493947s STEP: Saw pod success May 12 14:10:07.148: INFO: Pod "downwardapi-volume-a3aa84bf-1a53-453c-b1eb-05660b391776" satisfied condition "success or failure" May 12 14:10:07.150: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a3aa84bf-1a53-453c-b1eb-05660b391776 container client-container: STEP: delete the pod May 12 14:10:07.256: INFO: Waiting for pod downwardapi-volume-a3aa84bf-1a53-453c-b1eb-05660b391776 to disappear May 12 14:10:07.294: INFO: Pod downwardapi-volume-a3aa84bf-1a53-453c-b1eb-05660b391776 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:10:07.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2091" for this suite. 
May 12 14:10:13.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:10:13.582: INFO: namespace downward-api-2091 deletion completed in 6.285153505s • [SLOW TEST:13.023 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:10:13.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions May 12 14:10:13.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 12 14:10:14.006: INFO: stderr: "" May 12 14:10:14.006: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:10:14.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8959" for this suite. 
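The api-versions check boils down to an exact-line match against the `kubectl api-versions` output above; a sketch (hypothetical helper name; assumes `kubectl` on PATH):

```shell
# has_v1: succeed iff the core "v1" group/version is advertised.
# grep -x matches the whole line, so e.g. "apps/v1" does not count.
has_v1() { kubectl api-versions | grep -qx 'v1'; }

# e.g.: has_v1 && echo "v1 available"
```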
May 12 14:10:22.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:10:23.171: INFO: namespace kubectl-8959 deletion completed in 9.161429676s • [SLOW TEST:9.588 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:10:23.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6519.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6519.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6519.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6519.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6519.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6519.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6519.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 88.29.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.29.88_udp@PTR;check="$$(dig +tcp +noall +answer +search 88.29.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.29.88_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6519.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6519.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6519.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6519.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6519.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6519.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6519.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 88.29.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.29.88_udp@PTR;check="$$(dig +tcp +noall +answer +search 88.29.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.29.88_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 14:10:30.500: INFO: Unable to read wheezy_udp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:30.536: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:30.566: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:30.569: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:30.591: INFO: Unable to read jessie_udp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:30.594: INFO: Unable to read jessie_tcp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:30.626: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod 
dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:30.629: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:30.643: INFO: Lookups using dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f failed for: [wheezy_udp@dns-test-service.dns-6519.svc.cluster.local wheezy_tcp@dns-test-service.dns-6519.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local jessie_udp@dns-test-service.dns-6519.svc.cluster.local jessie_tcp@dns-test-service.dns-6519.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local] May 12 14:10:35.648: INFO: Unable to read wheezy_udp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:35.651: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:35.654: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:35.657: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod 
dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:35.674: INFO: Unable to read jessie_udp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:35.677: INFO: Unable to read jessie_tcp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:35.679: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:35.682: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:35.698: INFO: Lookups using dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f failed for: [wheezy_udp@dns-test-service.dns-6519.svc.cluster.local wheezy_tcp@dns-test-service.dns-6519.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local jessie_udp@dns-test-service.dns-6519.svc.cluster.local jessie_tcp@dns-test-service.dns-6519.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local] May 12 14:10:40.648: INFO: Unable to read wheezy_udp@dns-test-service.dns-6519.svc.cluster.local from pod 
dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:40.652: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:40.655: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:40.657: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:40.696: INFO: Unable to read jessie_udp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:40.699: INFO: Unable to read jessie_tcp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:40.800: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:40.804: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not 
find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:40.821: INFO: Lookups using dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f failed for: [wheezy_udp@dns-test-service.dns-6519.svc.cluster.local wheezy_tcp@dns-test-service.dns-6519.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local jessie_udp@dns-test-service.dns-6519.svc.cluster.local jessie_tcp@dns-test-service.dns-6519.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local] May 12 14:10:45.648: INFO: Unable to read wheezy_udp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:45.651: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:45.654: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:45.674: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:45.701: INFO: Unable to read jessie_udp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods 
dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:45.704: INFO: Unable to read jessie_tcp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:45.706: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:45.709: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:45.721: INFO: Lookups using dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f failed for: [wheezy_udp@dns-test-service.dns-6519.svc.cluster.local wheezy_tcp@dns-test-service.dns-6519.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local jessie_udp@dns-test-service.dns-6519.svc.cluster.local jessie_tcp@dns-test-service.dns-6519.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local] May 12 14:10:50.650: INFO: Unable to read wheezy_udp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:50.653: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods 
dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:50.655: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:50.658: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:50.711: INFO: Unable to read jessie_udp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:50.713: INFO: Unable to read jessie_tcp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:50.715: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:50.716: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:50.727: INFO: Lookups using dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f failed for: [wheezy_udp@dns-test-service.dns-6519.svc.cluster.local wheezy_tcp@dns-test-service.dns-6519.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local jessie_udp@dns-test-service.dns-6519.svc.cluster.local jessie_tcp@dns-test-service.dns-6519.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local] May 12 14:10:55.648: INFO: Unable to read wheezy_udp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:55.651: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:55.654: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:55.656: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:55.671: INFO: Unable to read jessie_udp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:55.674: INFO: Unable to read jessie_tcp@dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:55.676: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:55.678: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local from pod dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f: the server could not find the requested resource (get pods dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f) May 12 14:10:55.693: INFO: Lookups using dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f failed for: [wheezy_udp@dns-test-service.dns-6519.svc.cluster.local wheezy_tcp@dns-test-service.dns-6519.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local jessie_udp@dns-test-service.dns-6519.svc.cluster.local jessie_tcp@dns-test-service.dns-6519.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6519.svc.cluster.local] May 12 14:11:01.090: INFO: DNS probes using dns-6519/dns-test-bbc5d57b-817d-4182-a437-3755640f6f7f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:11:02.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6519" for this suite. 
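Annotation on the DNS probe commands logged above: the pod A record and the PTR name in the probe loop are derived mechanically from IP addresses. A minimal sketch of both derivations, using the same awk pattern as the logged command (the pod IP 10.244.1.5 here is illustrative; 10.106.29.88 is the service IP taken from the log):

```shell
# Build the pod A-record name the way the logged awk one-liner does:
# dots in the pod IP become dashes, then the namespace pod domain is appended.
pod_ip="10.244.1.5"   # illustrative pod IP, not from the log
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-6519.pod.cluster.local"}')
echo "$podARec"   # 10-244-1-5.dns-6519.pod.cluster.local

# Build the reverse-lookup (PTR) name for the service IP seen in the log:
# octets reversed, with the in-addr.arpa. suffix, as queried by the probes.
svc_ip="10.106.29.88"
ptr=$(echo "$svc_ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$ptr"   # 88.29.106.10.in-addr.arpa.
```

The probe pods then run `dig +notcp` (UDP) and `dig +tcp` against these names and write an OK marker file per successful lookup, which is what the framework polls for.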
May 12 14:11:10.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:11:10.897: INFO: namespace dns-6519 deletion completed in 8.584017278s
• [SLOW TEST:47.726 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:11:10.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
May 12 14:11:11.556: INFO: Waiting up to 5m0s for pod "client-containers-a19683a8-e8c5-49f0-8964-89f8aa5b0223" in namespace "containers-715" to be "success or failure"
May 12 14:11:11.776: INFO: Pod "client-containers-a19683a8-e8c5-49f0-8964-89f8aa5b0223": Phase="Pending", Reason="", readiness=false. Elapsed: 219.354918ms
May 12 14:11:13.779: INFO: Pod "client-containers-a19683a8-e8c5-49f0-8964-89f8aa5b0223": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222534854s
May 12 14:11:15.935: INFO: Pod "client-containers-a19683a8-e8c5-49f0-8964-89f8aa5b0223": Phase="Running", Reason="", readiness=true. Elapsed: 4.378278988s
May 12 14:11:17.942: INFO: Pod "client-containers-a19683a8-e8c5-49f0-8964-89f8aa5b0223": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.385522268s
STEP: Saw pod success
May 12 14:11:17.942: INFO: Pod "client-containers-a19683a8-e8c5-49f0-8964-89f8aa5b0223" satisfied condition "success or failure"
May 12 14:11:17.945: INFO: Trying to get logs from node iruya-worker pod client-containers-a19683a8-e8c5-49f0-8964-89f8aa5b0223 container test-container:
STEP: delete the pod
May 12 14:11:17.962: INFO: Waiting for pod client-containers-a19683a8-e8c5-49f0-8964-89f8aa5b0223 to disappear
May 12 14:11:17.966: INFO: Pod client-containers-a19683a8-e8c5-49f0-8964-89f8aa5b0223 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:11:17.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-715" for this suite.
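Annotation on the "override all" pod above: in a Kubernetes pod spec, `command` replaces the image's ENTRYPOINT and `args` replaces its CMD, and this test sets both. A sketch of such a manifest (the image, name, and values here are illustrative, not the framework's actual fixtures):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # illustrative image
    command: ["/bin/echo"]           # replaces the image ENTRYPOINT
    args: ["override", "all"]        # replaces the image CMD
```

The test then reads the container's logs to confirm the overridden command produced the expected output.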
May 12 14:11:23.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:11:24.088: INFO: namespace containers-715 deletion completed in 6.118832938s
• [SLOW TEST:13.191 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:11:24.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2572/configmap-test-8a70e725-98c7-439a-842b-a0b70cde3808
STEP: Creating a pod to test consume configMaps
May 12 14:11:24.191: INFO: Waiting up to 5m0s for pod "pod-configmaps-0f5d07c2-9fb5-442f-a6bc-45a99eabec70" in namespace "configmap-2572" to be "success or failure"
May 12 14:11:24.194: INFO: Pod "pod-configmaps-0f5d07c2-9fb5-442f-a6bc-45a99eabec70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.978092ms
May 12 14:11:26.198: INFO: Pod "pod-configmaps-0f5d07c2-9fb5-442f-a6bc-45a99eabec70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006665064s
May 12 14:11:28.201: INFO: Pod "pod-configmaps-0f5d07c2-9fb5-442f-a6bc-45a99eabec70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009989041s
May 12 14:11:30.205: INFO: Pod "pod-configmaps-0f5d07c2-9fb5-442f-a6bc-45a99eabec70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014004697s
STEP: Saw pod success
May 12 14:11:30.205: INFO: Pod "pod-configmaps-0f5d07c2-9fb5-442f-a6bc-45a99eabec70" satisfied condition "success or failure"
May 12 14:11:30.208: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-0f5d07c2-9fb5-442f-a6bc-45a99eabec70 container env-test:
STEP: delete the pod
May 12 14:11:30.240: INFO: Waiting for pod pod-configmaps-0f5d07c2-9fb5-442f-a6bc-45a99eabec70 to disappear
May 12 14:11:30.303: INFO: Pod pod-configmaps-0f5d07c2-9fb5-442f-a6bc-45a99eabec70 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:11:30.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2572" for this suite.
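Annotation on the ConfigMap-via-environment test above: the pod's env var is sourced from a ConfigMap key through `valueFrom.configMapKeyRef`. A sketch of the two objects involved (names, image, key, and value are illustrative, patterned on the logged fixture names; the container name `env-test` matches the log):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-example      # illustrative name
data:
  data-1: value-1                   # illustrative key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                  # illustrative image
    command: ["sh", "-c", "env"]    # print the environment so the test can inspect it
    env:
    - name: CONFIG_DATA_1           # env var populated from the ConfigMap key
      valueFrom:
        configMapKeyRef:
          name: configmap-test-example
          key: data-1
```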
May 12 14:11:38.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:11:38.582: INFO: namespace configmap-2572 deletion completed in 8.275065021s
• [SLOW TEST:14.493 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:11:38.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-420c7b4c-b818-49e7-91a3-fa362443d12d
STEP: Creating a pod to test consume configMaps
May 12 14:11:38.674: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e798d120-dfba-4d38-ac2f-6507bfa978ea" in namespace "projected-2470" to be "success or failure"
May 12 14:11:38.678: INFO: Pod "pod-projected-configmaps-e798d120-dfba-4d38-ac2f-6507bfa978ea": Phase="Pending", Reason="", readiness=false. Elapsed: 3.769017ms
May 12 14:11:40.681: INFO: Pod "pod-projected-configmaps-e798d120-dfba-4d38-ac2f-6507bfa978ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007240704s
May 12 14:11:42.830: INFO: Pod "pod-projected-configmaps-e798d120-dfba-4d38-ac2f-6507bfa978ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156327689s
May 12 14:11:44.834: INFO: Pod "pod-projected-configmaps-e798d120-dfba-4d38-ac2f-6507bfa978ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160328793s
STEP: Saw pod success
May 12 14:11:44.835: INFO: Pod "pod-projected-configmaps-e798d120-dfba-4d38-ac2f-6507bfa978ea" satisfied condition "success or failure"
May 12 14:11:44.838: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-e798d120-dfba-4d38-ac2f-6507bfa978ea container projected-configmap-volume-test:
STEP: delete the pod
May 12 14:11:44.876: INFO: Waiting for pod pod-projected-configmaps-e798d120-dfba-4d38-ac2f-6507bfa978ea to disappear
May 12 14:11:44.894: INFO: Pod pod-projected-configmaps-e798d120-dfba-4d38-ac2f-6507bfa978ea no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:11:44.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2470" for this suite.
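Annotation on the projected-configMap test above: "mappings and Item mode set" refers to a projected volume whose `items` list remaps a ConfigMap key to a custom file path with a per-item file mode. A sketch of such a volume definition (names, image, key, path, and mode are illustrative; the container name `projected-configmap-volume-test` matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # illustrative image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-example   # illustrative name
          items:
          - key: data-2            # ConfigMap key being mapped
            path: path/to/data-2   # remapped file path inside the volume
            mode: 0400             # per-item file mode (octal)
```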
May 12 14:11:50.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:11:51.027: INFO: namespace projected-2470 deletion completed in 6.127832829s
• [SLOW TEST:12.445 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:11:51.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 12 14:11:51.115: INFO: Pod name pod-release: Found 0 pods out of 1
May 12 14:11:56.117: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:11:57.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8499" for this suite.
May 12 14:12:05.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:12:05.861: INFO: namespace replication-controller-8499 deletion completed in 8.203202074s
• [SLOW TEST:14.833 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:12:05.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 12 14:12:06.095: INFO: Waiting up to 5m0s for pod "pod-60b34c11-ad50-4bc2-bc5b-2d8170926f0d" in namespace "emptydir-1827" to be "success or failure"
May 12 14:12:06.106: INFO: Pod "pod-60b34c11-ad50-4bc2-bc5b-2d8170926f0d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.584876ms
May 12 14:12:08.340: INFO: Pod "pod-60b34c11-ad50-4bc2-bc5b-2d8170926f0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244101592s
May 12 14:12:10.363: INFO: Pod "pod-60b34c11-ad50-4bc2-bc5b-2d8170926f0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268056103s
May 12 14:12:12.399: INFO: Pod "pod-60b34c11-ad50-4bc2-bc5b-2d8170926f0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.303582849s
STEP: Saw pod success
May 12 14:12:12.399: INFO: Pod "pod-60b34c11-ad50-4bc2-bc5b-2d8170926f0d" satisfied condition "success or failure"
May 12 14:12:12.402: INFO: Trying to get logs from node iruya-worker2 pod pod-60b34c11-ad50-4bc2-bc5b-2d8170926f0d container test-container:
STEP: delete the pod
May 12 14:12:12.497: INFO: Waiting for pod pod-60b34c11-ad50-4bc2-bc5b-2d8170926f0d to disappear
May 12 14:12:12.599: INFO: Pod pod-60b34c11-ad50-4bc2-bc5b-2d8170926f0d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:12:12.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1827" for this suite.
May 12 14:12:18.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:12:18.719: INFO: namespace emptydir-1827 deletion completed in 6.115385678s
• [SLOW TEST:12.858 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:12:18.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object,
basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1863.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1863.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 14:12:28.870: INFO: DNS probes using dns-1863/dns-test-ea5eb9b7-505d-4fd3-ad8e-37aa068b62b0 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:12:28.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1863" for this suite. May 12 14:12:34.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:12:35.073: INFO: namespace dns-1863 deletion completed in 6.116564353s • [SLOW TEST:16.353 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:12:35.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 14:12:35.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3206' May 12 14:12:39.839: INFO: stderr: "" May 12 14:12:39.839: INFO: stdout: "replicationcontroller/redis-master created\n" May 12 14:12:39.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3206' May 12 14:12:40.161: INFO: stderr: "" May 12 14:12:40.161: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 12 14:12:41.166: INFO: Selector matched 1 pods for map[app:redis] May 12 14:12:41.166: INFO: Found 0 / 1 May 12 14:12:42.165: INFO: Selector matched 1 pods for map[app:redis] May 12 14:12:42.165: INFO: Found 0 / 1 May 12 14:12:43.196: INFO: Selector matched 1 pods for map[app:redis] May 12 14:12:43.196: INFO: Found 0 / 1 May 12 14:12:44.165: INFO: Selector matched 1 pods for map[app:redis] May 12 14:12:44.165: INFO: Found 1 / 1 May 12 14:12:44.165: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 12 14:12:44.168: INFO: Selector matched 1 pods for map[app:redis] May 12 14:12:44.168: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 12 14:12:44.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-cbngf --namespace=kubectl-3206' May 12 14:12:44.285: INFO: stderr: "" May 12 14:12:44.285: INFO: stdout: "Name: redis-master-cbngf\nNamespace: kubectl-3206\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Tue, 12 May 2020 14:12:40 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.114\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://06b18dfd1147b9e785d8ad6c7f69a77659525cd708e7abb9bd7aafa18f0b217d\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 12 May 2020 14:12:43 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-5xxt8 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-5xxt8:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-5xxt8\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-3206/redis-master-cbngf to iruya-worker2\n Normal Pulled 3s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n" May 12 14:12:44.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc 
redis-master --namespace=kubectl-3206' May 12 14:12:44.388: INFO: stderr: "" May 12 14:12:44.388: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3206\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-cbngf\n" May 12 14:12:44.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-3206' May 12 14:12:44.494: INFO: stderr: "" May 12 14:12:44.494: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3206\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.109.164.124\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.114:6379\nSession Affinity: None\nEvents: \n" May 12 14:12:44.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' May 12 14:12:44.628: INFO: stderr: "" May 12 14:12:44.628: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime 
LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 12 May 2020 14:12:08 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 12 May 2020 14:12:08 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 12 May 2020 14:12:08 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 12 May 2020 14:12:08 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 57d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system 
kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 12 14:12:44.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3206' May 12 14:12:44.722: INFO: stderr: "" May 12 14:12:44.722: INFO: stdout: "Name: kubectl-3206\nLabels: e2e-framework=kubectl\n e2e-run=ccb131b2-2f45-424e-9856-67b9421c922f\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:12:44.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3206" for this suite. 
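[Editor's note] The `kubectl describe` checks above run against a ReplicationController and Service that the test pipes to `kubectl create -f -`; the manifests themselves are not echoed into the log. A rough reconstruction of the RC, inferred only from the describe output above (name, labels, selector, image, and port come from the log; the remaining fields are assumptions, not the literal test fixture):

```yaml
# Sketch of the redis-master ReplicationController fed to `kubectl create -f -`.
# Reconstructed from the `kubectl describe rc` output in this log, not the
# actual e2e fixture file.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379   # matches "Port: 6379/TCP" in the describe output
```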
May 12 14:13:08.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:13:08.817: INFO: namespace kubectl-3206 deletion completed in 24.092299862s
• [SLOW TEST:33.744 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:13:08.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
May 12 14:13:08.916: INFO: Waiting up to 5m0s for pod "pod-1189b295-02e2-4a67-8437-f20786c8edf7" in namespace "emptydir-9847" to be "success or failure"
May 12 14:13:08.927: INFO: Pod "pod-1189b295-02e2-4a67-8437-f20786c8edf7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.845715ms
May 12 14:13:10.956: INFO: Pod "pod-1189b295-02e2-4a67-8437-f20786c8edf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039635103s
May 12 14:13:12.958: INFO: Pod "pod-1189b295-02e2-4a67-8437-f20786c8edf7": Phase="Running", Reason="", readiness=true. Elapsed: 4.042380576s
May 12 14:13:14.962: INFO: Pod "pod-1189b295-02e2-4a67-8437-f20786c8edf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046118231s
STEP: Saw pod success
May 12 14:13:14.962: INFO: Pod "pod-1189b295-02e2-4a67-8437-f20786c8edf7" satisfied condition "success or failure"
May 12 14:13:14.965: INFO: Trying to get logs from node iruya-worker pod pod-1189b295-02e2-4a67-8437-f20786c8edf7 container test-container:
STEP: delete the pod
May 12 14:13:15.011: INFO: Waiting for pod pod-1189b295-02e2-4a67-8437-f20786c8edf7 to disappear
May 12 14:13:15.021: INFO: Pod pod-1189b295-02e2-4a67-8437-f20786c8edf7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:13:15.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9847" for this suite.
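[Editor's note] The "volume on tmpfs" test above creates a pod whose emptyDir volume is backed by memory (`medium: Memory`) and reads back the mount's mode. A minimal sketch of that kind of pod; the pod name, image, and command here are illustrative assumptions, not the exact e2e fixture:

```yaml
# Sketch: an emptyDir volume on tmpfs whose mount mode a test container
# inspects. Image and command are illustrative, not the e2e fixture.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /test-volume"]  # print the mount's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir
```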
May 12 14:13:21.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:13:21.103: INFO: namespace emptydir-9847 deletion completed in 6.07932811s
• [SLOW TEST:12.285 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:13:21.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 12 14:13:21.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-9414'
May 12 14:13:21.388: INFO: stderr: ""
May 12 14:13:21.388: INFO: stdout:
"pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 12 14:13:26.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-9414 -o json' May 12 14:13:26.534: INFO: stderr: "" May 12 14:13:26.534: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-12T14:13:21Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-9414\",\n \"resourceVersion\": \"10495871\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9414/pods/e2e-test-nginx-pod\",\n \"uid\": \"34987f1e-83d8-459a-b9b0-b9d994d0682b\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-ktgsg\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-ktgsg\",\n \"secret\": {\n \"defaultMode\": 420,\n 
\"secretName\": \"default-token-ktgsg\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T14:13:21Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T14:13:25Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T14:13:25Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T14:13:21Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://54cf8c7e05399b25374f54e7f060f6250b08c710188780fd1c6c68aa4c910d67\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-12T14:13:24Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.115\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-12T14:13:21Z\"\n }\n}\n" STEP: replace the image in the pod May 12 14:13:26.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9414' May 12 14:13:26.820: INFO: stderr: "" May 12 14:13:26.820: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 May 12 14:13:26.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9414' May 12 14:13:41.875: INFO: 
stderr: "" May 12 14:13:41.875: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:13:41.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9414" for this suite. May 12 14:13:47.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:13:47.954: INFO: namespace kubectl-9414 deletion completed in 6.075885125s • [SLOW TEST:26.851 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:13:47.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 12 14:13:48.038: INFO: Waiting up to 5m0s for pod "downward-api-5389dd84-a0c3-4bea-a506-a7369306383f" in namespace 
"downward-api-6725" to be "success or failure" May 12 14:13:48.048: INFO: Pod "downward-api-5389dd84-a0c3-4bea-a506-a7369306383f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.884827ms May 12 14:13:50.233: INFO: Pod "downward-api-5389dd84-a0c3-4bea-a506-a7369306383f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195421521s May 12 14:13:52.237: INFO: Pod "downward-api-5389dd84-a0c3-4bea-a506-a7369306383f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.198828121s STEP: Saw pod success May 12 14:13:52.237: INFO: Pod "downward-api-5389dd84-a0c3-4bea-a506-a7369306383f" satisfied condition "success or failure" May 12 14:13:52.240: INFO: Trying to get logs from node iruya-worker pod downward-api-5389dd84-a0c3-4bea-a506-a7369306383f container dapi-container: STEP: delete the pod May 12 14:13:52.450: INFO: Waiting for pod downward-api-5389dd84-a0c3-4bea-a506-a7369306383f to disappear May 12 14:13:52.570: INFO: Pod downward-api-5389dd84-a0c3-4bea-a506-a7369306383f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:13:52.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6725" for this suite. 
May 12 14:13:58.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:13:58.659: INFO: namespace downward-api-6725 deletion completed in 6.085932022s
• [SLOW TEST:10.705 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:13:58.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-527dcfae-30cc-4fa1-a997-6e935f68ca4e
STEP: Creating a pod to test consume configMaps
May 12 14:13:58.760: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5478071c-f1ec-4bbd-86d4-505f1ffc8ad5" in namespace "projected-3832" to be "success or failure"
May 12 14:13:58.789: INFO: Pod "pod-projected-configmaps-5478071c-f1ec-4bbd-86d4-505f1ffc8ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.509996ms
May 12 14:14:00.831: INFO: Pod "pod-projected-configmaps-5478071c-f1ec-4bbd-86d4-505f1ffc8ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070522846s
May 12 14:14:02.835: INFO: Pod "pod-projected-configmaps-5478071c-f1ec-4bbd-86d4-505f1ffc8ad5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074679416s
STEP: Saw pod success
May 12 14:14:02.835: INFO: Pod "pod-projected-configmaps-5478071c-f1ec-4bbd-86d4-505f1ffc8ad5" satisfied condition "success or failure"
May 12 14:14:02.838: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-5478071c-f1ec-4bbd-86d4-505f1ffc8ad5 container projected-configmap-volume-test:
STEP: delete the pod
May 12 14:14:02.890: INFO: Waiting for pod pod-projected-configmaps-5478071c-f1ec-4bbd-86d4-505f1ffc8ad5 to disappear
May 12 14:14:02.914: INFO: Pod pod-projected-configmaps-5478071c-f1ec-4bbd-86d4-505f1ffc8ad5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:14:02.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3832" for this suite.
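[Editor's note] The Projected configMap test above mounts a ConfigMap through a `projected` volume and reads a key back from the mount. A minimal sketch of the mechanism; the ConfigMap name, key, image, and paths are illustrative assumptions (the log only gives the generated ConfigMap name and the container name `projected-configmap-volume-test`):

```yaml
# Sketch: consuming a ConfigMap via a projected volume. All names are
# illustrative, not the e2e fixture.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/projected
  volumes:
  - name: config-volume
    projected:
      sources:
      - configMap:
          name: example-configmap
          items:
          - key: data-1
            path: data-1
```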
May 12 14:14:09.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:14:09.079: INFO: namespace projected-3832 deletion completed in 6.160470126s
• [SLOW TEST:10.419 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:14:09.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7482
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7482
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7482
May 12
14:14:09.262: INFO: Found 0 stateful pods, waiting for 1 May 12 14:14:19.266: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 12 14:14:19.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7482 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 14:14:19.510: INFO: stderr: "I0512 14:14:19.391977 2988 log.go:172] (0xc000a16630) (0xc0005e8b40) Create stream\nI0512 14:14:19.392023 2988 log.go:172] (0xc000a16630) (0xc0005e8b40) Stream added, broadcasting: 1\nI0512 14:14:19.394946 2988 log.go:172] (0xc000a16630) Reply frame received for 1\nI0512 14:14:19.394982 2988 log.go:172] (0xc000a16630) (0xc0005e8280) Create stream\nI0512 14:14:19.394991 2988 log.go:172] (0xc000a16630) (0xc0005e8280) Stream added, broadcasting: 3\nI0512 14:14:19.395790 2988 log.go:172] (0xc000a16630) Reply frame received for 3\nI0512 14:14:19.395820 2988 log.go:172] (0xc000a16630) (0xc000110000) Create stream\nI0512 14:14:19.395833 2988 log.go:172] (0xc000a16630) (0xc000110000) Stream added, broadcasting: 5\nI0512 14:14:19.396559 2988 log.go:172] (0xc000a16630) Reply frame received for 5\nI0512 14:14:19.468027 2988 log.go:172] (0xc000a16630) Data frame received for 5\nI0512 14:14:19.468048 2988 log.go:172] (0xc000110000) (5) Data frame handling\nI0512 14:14:19.468058 2988 log.go:172] (0xc000110000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 14:14:19.502610 2988 log.go:172] (0xc000a16630) Data frame received for 3\nI0512 14:14:19.502641 2988 log.go:172] (0xc0005e8280) (3) Data frame handling\nI0512 14:14:19.502663 2988 log.go:172] (0xc0005e8280) (3) Data frame sent\nI0512 14:14:19.502764 2988 log.go:172] (0xc000a16630) Data frame received for 3\nI0512 14:14:19.502791 2988 log.go:172] (0xc0005e8280) (3) Data frame handling\nI0512 14:14:19.502833 2988 
log.go:172] (0xc000a16630) Data frame received for 5\nI0512 14:14:19.502873 2988 log.go:172] (0xc000110000) (5) Data frame handling\nI0512 14:14:19.504986 2988 log.go:172] (0xc000a16630) Data frame received for 1\nI0512 14:14:19.505013 2988 log.go:172] (0xc0005e8b40) (1) Data frame handling\nI0512 14:14:19.505032 2988 log.go:172] (0xc0005e8b40) (1) Data frame sent\nI0512 14:14:19.505054 2988 log.go:172] (0xc000a16630) (0xc0005e8b40) Stream removed, broadcasting: 1\nI0512 14:14:19.505334 2988 log.go:172] (0xc000a16630) Go away received\nI0512 14:14:19.505595 2988 log.go:172] (0xc000a16630) (0xc0005e8b40) Stream removed, broadcasting: 1\nI0512 14:14:19.505631 2988 log.go:172] (0xc000a16630) (0xc0005e8280) Stream removed, broadcasting: 3\nI0512 14:14:19.505644 2988 log.go:172] (0xc000a16630) (0xc000110000) Stream removed, broadcasting: 5\n" May 12 14:14:19.510: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 14:14:19.510: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 14:14:19.514: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 12 14:14:29.519: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 14:14:29.519: INFO: Waiting for statefulset status.replicas updated to 0 May 12 14:14:29.540: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999509s May 12 14:14:30.545: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989133252s May 12 14:14:31.548: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98432794s May 12 14:14:32.552: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.980565764s May 12 14:14:33.556: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977045824s May 12 14:14:34.561: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972663075s May 12 
14:14:35.565: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.967581564s May 12 14:14:36.569: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.963901693s May 12 14:14:37.572: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.959475761s May 12 14:14:38.577: INFO: Verifying statefulset ss doesn't scale past 1 for another 956.472712ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7482 May 12 14:14:39.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7482 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 14:14:39.810: INFO: stderr: "I0512 14:14:39.705820 3008 log.go:172] (0xc000968370) (0xc0008ea640) Create stream\nI0512 14:14:39.705885 3008 log.go:172] (0xc000968370) (0xc0008ea640) Stream added, broadcasting: 1\nI0512 14:14:39.707899 3008 log.go:172] (0xc000968370) Reply frame received for 1\nI0512 14:14:39.707966 3008 log.go:172] (0xc000968370) (0xc0008f4000) Create stream\nI0512 14:14:39.707990 3008 log.go:172] (0xc000968370) (0xc0008f4000) Stream added, broadcasting: 3\nI0512 14:14:39.708850 3008 log.go:172] (0xc000968370) Reply frame received for 3\nI0512 14:14:39.708891 3008 log.go:172] (0xc000968370) (0xc0008ea6e0) Create stream\nI0512 14:14:39.708905 3008 log.go:172] (0xc000968370) (0xc0008ea6e0) Stream added, broadcasting: 5\nI0512 14:14:39.709830 3008 log.go:172] (0xc000968370) Reply frame received for 5\nI0512 14:14:39.802162 3008 log.go:172] (0xc000968370) Data frame received for 5\nI0512 14:14:39.802186 3008 log.go:172] (0xc0008ea6e0) (5) Data frame handling\nI0512 14:14:39.802193 3008 log.go:172] (0xc0008ea6e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0512 14:14:39.802206 3008 log.go:172] (0xc000968370) Data frame received for 3\nI0512 14:14:39.802212 3008 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0512 
14:14:39.802224 3008 log.go:172] (0xc0008f4000) (3) Data frame sent\nI0512 14:14:39.802236 3008 log.go:172] (0xc000968370) Data frame received for 3\nI0512 14:14:39.802241 3008 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0512 14:14:39.802273 3008 log.go:172] (0xc000968370) Data frame received for 5\nI0512 14:14:39.802303 3008 log.go:172] (0xc0008ea6e0) (5) Data frame handling\nI0512 14:14:39.804082 3008 log.go:172] (0xc000968370) Data frame received for 1\nI0512 14:14:39.804119 3008 log.go:172] (0xc0008ea640) (1) Data frame handling\nI0512 14:14:39.804150 3008 log.go:172] (0xc0008ea640) (1) Data frame sent\nI0512 14:14:39.804194 3008 log.go:172] (0xc000968370) (0xc0008ea640) Stream removed, broadcasting: 1\nI0512 14:14:39.804226 3008 log.go:172] (0xc000968370) Go away received\nI0512 14:14:39.804563 3008 log.go:172] (0xc000968370) (0xc0008ea640) Stream removed, broadcasting: 1\nI0512 14:14:39.804586 3008 log.go:172] (0xc000968370) (0xc0008f4000) Stream removed, broadcasting: 3\nI0512 14:14:39.804594 3008 log.go:172] (0xc000968370) (0xc0008ea6e0) Stream removed, broadcasting: 5\n" May 12 14:14:39.810: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 14:14:39.810: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 14:14:39.814: INFO: Found 1 stateful pods, waiting for 3 May 12 14:14:49.818: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 14:14:49.818: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 14:14:49.818: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 12 14:14:49.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7482 ss-0 -- 
/bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 14:14:50.041: INFO: stderr: "I0512 14:14:49.948598 3029 log.go:172] (0xc000a10630) (0xc00064ebe0) Create stream\nI0512 14:14:49.948656 3029 log.go:172] (0xc000a10630) (0xc00064ebe0) Stream added, broadcasting: 1\nI0512 14:14:49.950833 3029 log.go:172] (0xc000a10630) Reply frame received for 1\nI0512 14:14:49.950868 3029 log.go:172] (0xc000a10630) (0xc000702000) Create stream\nI0512 14:14:49.950879 3029 log.go:172] (0xc000a10630) (0xc000702000) Stream added, broadcasting: 3\nI0512 14:14:49.951779 3029 log.go:172] (0xc000a10630) Reply frame received for 3\nI0512 14:14:49.951845 3029 log.go:172] (0xc000a10630) (0xc0004c0140) Create stream\nI0512 14:14:49.951888 3029 log.go:172] (0xc000a10630) (0xc0004c0140) Stream added, broadcasting: 5\nI0512 14:14:49.952688 3029 log.go:172] (0xc000a10630) Reply frame received for 5\nI0512 14:14:50.034858 3029 log.go:172] (0xc000a10630) Data frame received for 3\nI0512 14:14:50.034910 3029 log.go:172] (0xc000702000) (3) Data frame handling\nI0512 14:14:50.034931 3029 log.go:172] (0xc000702000) (3) Data frame sent\nI0512 14:14:50.034942 3029 log.go:172] (0xc000a10630) Data frame received for 3\nI0512 14:14:50.034951 3029 log.go:172] (0xc000702000) (3) Data frame handling\nI0512 14:14:50.034994 3029 log.go:172] (0xc000a10630) Data frame received for 5\nI0512 14:14:50.035020 3029 log.go:172] (0xc0004c0140) (5) Data frame handling\nI0512 14:14:50.035042 3029 log.go:172] (0xc0004c0140) (5) Data frame sent\nI0512 14:14:50.035054 3029 log.go:172] (0xc000a10630) Data frame received for 5\nI0512 14:14:50.035072 3029 log.go:172] (0xc0004c0140) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 14:14:50.036309 3029 log.go:172] (0xc000a10630) Data frame received for 1\nI0512 14:14:50.036336 3029 log.go:172] (0xc00064ebe0) (1) Data frame handling\nI0512 14:14:50.036369 3029 log.go:172] (0xc00064ebe0) (1) Data frame sent\nI0512 14:14:50.036391 
3029 log.go:172] (0xc000a10630) (0xc00064ebe0) Stream removed, broadcasting: 1\nI0512 14:14:50.036414 3029 log.go:172] (0xc000a10630) Go away received\nI0512 14:14:50.036797 3029 log.go:172] (0xc000a10630) (0xc00064ebe0) Stream removed, broadcasting: 1\nI0512 14:14:50.036839 3029 log.go:172] (0xc000a10630) (0xc000702000) Stream removed, broadcasting: 3\nI0512 14:14:50.036863 3029 log.go:172] (0xc000a10630) (0xc0004c0140) Stream removed, broadcasting: 5\n" May 12 14:14:50.041: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 14:14:50.041: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 14:14:50.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7482 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 14:14:50.256: INFO: stderr: "I0512 14:14:50.160227 3049 log.go:172] (0xc000118630) (0xc000410820) Create stream\nI0512 14:14:50.160369 3049 log.go:172] (0xc000118630) (0xc000410820) Stream added, broadcasting: 1\nI0512 14:14:50.163363 3049 log.go:172] (0xc000118630) Reply frame received for 1\nI0512 14:14:50.163431 3049 log.go:172] (0xc000118630) (0xc000870000) Create stream\nI0512 14:14:50.163449 3049 log.go:172] (0xc000118630) (0xc000870000) Stream added, broadcasting: 3\nI0512 14:14:50.164394 3049 log.go:172] (0xc000118630) Reply frame received for 3\nI0512 14:14:50.164441 3049 log.go:172] (0xc000118630) (0xc00037a000) Create stream\nI0512 14:14:50.164463 3049 log.go:172] (0xc000118630) (0xc00037a000) Stream added, broadcasting: 5\nI0512 14:14:50.165538 3049 log.go:172] (0xc000118630) Reply frame received for 5\nI0512 14:14:50.219363 3049 log.go:172] (0xc000118630) Data frame received for 5\nI0512 14:14:50.219412 3049 log.go:172] (0xc00037a000) (5) Data frame handling\nI0512 14:14:50.219446 3049 log.go:172] (0xc00037a000) (5) Data frame sent\n+ mv -v 
/usr/share/nginx/html/index.html /tmp/\nI0512 14:14:50.249496 3049 log.go:172] (0xc000118630) Data frame received for 3\nI0512 14:14:50.249523 3049 log.go:172] (0xc000870000) (3) Data frame handling\nI0512 14:14:50.249558 3049 log.go:172] (0xc000870000) (3) Data frame sent\nI0512 14:14:50.249580 3049 log.go:172] (0xc000118630) Data frame received for 3\nI0512 14:14:50.249596 3049 log.go:172] (0xc000870000) (3) Data frame handling\nI0512 14:14:50.249686 3049 log.go:172] (0xc000118630) Data frame received for 5\nI0512 14:14:50.249709 3049 log.go:172] (0xc00037a000) (5) Data frame handling\nI0512 14:14:50.251327 3049 log.go:172] (0xc000118630) Data frame received for 1\nI0512 14:14:50.251356 3049 log.go:172] (0xc000410820) (1) Data frame handling\nI0512 14:14:50.251376 3049 log.go:172] (0xc000410820) (1) Data frame sent\nI0512 14:14:50.251396 3049 log.go:172] (0xc000118630) (0xc000410820) Stream removed, broadcasting: 1\nI0512 14:14:50.251434 3049 log.go:172] (0xc000118630) Go away received\nI0512 14:14:50.252033 3049 log.go:172] (0xc000118630) (0xc000410820) Stream removed, broadcasting: 1\nI0512 14:14:50.252066 3049 log.go:172] (0xc000118630) (0xc000870000) Stream removed, broadcasting: 3\nI0512 14:14:50.252086 3049 log.go:172] (0xc000118630) (0xc00037a000) Stream removed, broadcasting: 5\n" May 12 14:14:50.256: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 14:14:50.256: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 14:14:50.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7482 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 14:14:50.507: INFO: stderr: "I0512 14:14:50.385581 3066 log.go:172] (0xc000116dc0) (0xc0002de820) Create stream\nI0512 14:14:50.385620 3066 log.go:172] (0xc000116dc0) (0xc0002de820) Stream added, broadcasting: 1\nI0512 
14:14:50.386950 3066 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0512 14:14:50.386989 3066 log.go:172] (0xc000116dc0) (0xc0007e0000) Create stream\nI0512 14:14:50.387008 3066 log.go:172] (0xc000116dc0) (0xc0007e0000) Stream added, broadcasting: 3\nI0512 14:14:50.387648 3066 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0512 14:14:50.387674 3066 log.go:172] (0xc000116dc0) (0xc00080a000) Create stream\nI0512 14:14:50.387683 3066 log.go:172] (0xc000116dc0) (0xc00080a000) Stream added, broadcasting: 5\nI0512 14:14:50.388550 3066 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0512 14:14:50.451986 3066 log.go:172] (0xc000116dc0) Data frame received for 5\nI0512 14:14:50.452018 3066 log.go:172] (0xc00080a000) (5) Data frame handling\nI0512 14:14:50.452038 3066 log.go:172] (0xc00080a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 14:14:50.501534 3066 log.go:172] (0xc000116dc0) Data frame received for 5\nI0512 14:14:50.501554 3066 log.go:172] (0xc00080a000) (5) Data frame handling\nI0512 14:14:50.501598 3066 log.go:172] (0xc000116dc0) Data frame received for 3\nI0512 14:14:50.501617 3066 log.go:172] (0xc0007e0000) (3) Data frame handling\nI0512 14:14:50.501633 3066 log.go:172] (0xc0007e0000) (3) Data frame sent\nI0512 14:14:50.501643 3066 log.go:172] (0xc000116dc0) Data frame received for 3\nI0512 14:14:50.501655 3066 log.go:172] (0xc0007e0000) (3) Data frame handling\nI0512 14:14:50.503289 3066 log.go:172] (0xc000116dc0) Data frame received for 1\nI0512 14:14:50.503310 3066 log.go:172] (0xc0002de820) (1) Data frame handling\nI0512 14:14:50.503324 3066 log.go:172] (0xc0002de820) (1) Data frame sent\nI0512 14:14:50.503351 3066 log.go:172] (0xc000116dc0) (0xc0002de820) Stream removed, broadcasting: 1\nI0512 14:14:50.503376 3066 log.go:172] (0xc000116dc0) Go away received\nI0512 14:14:50.503555 3066 log.go:172] (0xc000116dc0) (0xc0002de820) Stream removed, broadcasting: 1\nI0512 14:14:50.503565 3066 log.go:172] 
(0xc000116dc0) (0xc0007e0000) Stream removed, broadcasting: 3\nI0512 14:14:50.503570 3066 log.go:172] (0xc000116dc0) (0xc00080a000) Stream removed, broadcasting: 5\n" May 12 14:14:50.507: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 14:14:50.507: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 14:14:50.507: INFO: Waiting for statefulset status.replicas updated to 0 May 12 14:14:50.509: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 12 14:15:00.515: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 14:15:00.515: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 12 14:15:00.515: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 12 14:15:00.528: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999627s May 12 14:15:01.659: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991841445s May 12 14:15:02.663: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.861230959s May 12 14:15:03.669: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.856510515s May 12 14:15:04.676: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.850996411s May 12 14:15:05.726: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.843482757s May 12 14:15:06.736: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.793588814s May 12 14:15:07.739: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.783930918s May 12 14:15:08.814: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.780657181s May 12 14:15:09.818: INFO: Verifying statefulset ss doesn't scale past 3 for another 705.767211ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run 
in namespace statefulset-7482 May 12 14:15:10.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7482 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 14:15:11.073: INFO: stderr: "I0512 14:15:10.954250 3087 log.go:172] (0xc000116fd0) (0xc000694b40) Create stream\nI0512 14:15:10.954322 3087 log.go:172] (0xc000116fd0) (0xc000694b40) Stream added, broadcasting: 1\nI0512 14:15:10.957739 3087 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0512 14:15:10.957813 3087 log.go:172] (0xc000116fd0) (0xc000924000) Create stream\nI0512 14:15:10.957836 3087 log.go:172] (0xc000116fd0) (0xc000924000) Stream added, broadcasting: 3\nI0512 14:15:10.959126 3087 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0512 14:15:10.959208 3087 log.go:172] (0xc000116fd0) (0xc0009240a0) Create stream\nI0512 14:15:10.959491 3087 log.go:172] (0xc000116fd0) (0xc0009240a0) Stream added, broadcasting: 5\nI0512 14:15:10.960613 3087 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0512 14:15:11.054070 3087 log.go:172] (0xc000116fd0) Data frame received for 5\nI0512 14:15:11.054098 3087 log.go:172] (0xc0009240a0) (5) Data frame handling\nI0512 14:15:11.054124 3087 log.go:172] (0xc0009240a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0512 14:15:11.064673 3087 log.go:172] (0xc000116fd0) Data frame received for 3\nI0512 14:15:11.064702 3087 log.go:172] (0xc000924000) (3) Data frame handling\nI0512 14:15:11.064713 3087 log.go:172] (0xc000924000) (3) Data frame sent\nI0512 14:15:11.064744 3087 log.go:172] (0xc000116fd0) Data frame received for 5\nI0512 14:15:11.064761 3087 log.go:172] (0xc0009240a0) (5) Data frame handling\nI0512 14:15:11.065009 3087 log.go:172] (0xc000116fd0) Data frame received for 3\nI0512 14:15:11.065042 3087 log.go:172] (0xc000924000) (3) Data frame handling\nI0512 14:15:11.067052 3087 log.go:172] (0xc000116fd0) Data frame received for 1\nI0512
14:15:11.067074 3087 log.go:172] (0xc000694b40) (1) Data frame handling\nI0512 14:15:11.067089 3087 log.go:172] (0xc000694b40) (1) Data frame sent\nI0512 14:15:11.067104 3087 log.go:172] (0xc000116fd0) (0xc000694b40) Stream removed, broadcasting: 1\nI0512 14:15:11.067154 3087 log.go:172] (0xc000116fd0) Go away received\nI0512 14:15:11.067413 3087 log.go:172] (0xc000116fd0) (0xc000694b40) Stream removed, broadcasting: 1\nI0512 14:15:11.067430 3087 log.go:172] (0xc000116fd0) (0xc000924000) Stream removed, broadcasting: 3\nI0512 14:15:11.067439 3087 log.go:172] (0xc000116fd0) (0xc0009240a0) Stream removed, broadcasting: 5\n" May 12 14:15:11.073: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 14:15:11.073: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 14:15:11.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7482 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 14:15:11.280: INFO: stderr: "I0512 14:15:11.207919 3108 log.go:172] (0xc000a0a420) (0xc0008426e0) Create stream\nI0512 14:15:11.207988 3108 log.go:172] (0xc000a0a420) (0xc0008426e0) Stream added, broadcasting: 1\nI0512 14:15:11.209876 3108 log.go:172] (0xc000a0a420) Reply frame received for 1\nI0512 14:15:11.209938 3108 log.go:172] (0xc000a0a420) (0xc00057e460) Create stream\nI0512 14:15:11.209958 3108 log.go:172] (0xc000a0a420) (0xc00057e460) Stream added, broadcasting: 3\nI0512 14:15:11.210821 3108 log.go:172] (0xc000a0a420) Reply frame received for 3\nI0512 14:15:11.210856 3108 log.go:172] (0xc000a0a420) (0xc0008d0000) Create stream\nI0512 14:15:11.210871 3108 log.go:172] (0xc000a0a420) (0xc0008d0000) Stream added, broadcasting: 5\nI0512 14:15:11.211728 3108 log.go:172] (0xc000a0a420) Reply frame received for 5\nI0512 14:15:11.274299 3108 log.go:172] (0xc000a0a420) Data frame received for 
3\nI0512 14:15:11.274320 3108 log.go:172] (0xc00057e460) (3) Data frame handling\nI0512 14:15:11.274338 3108 log.go:172] (0xc000a0a420) Data frame received for 5\nI0512 14:15:11.274362 3108 log.go:172] (0xc0008d0000) (5) Data frame handling\nI0512 14:15:11.274379 3108 log.go:172] (0xc0008d0000) (5) Data frame sent\nI0512 14:15:11.274393 3108 log.go:172] (0xc000a0a420) Data frame received for 5\nI0512 14:15:11.274401 3108 log.go:172] (0xc0008d0000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0512 14:15:11.274421 3108 log.go:172] (0xc00057e460) (3) Data frame sent\nI0512 14:15:11.274430 3108 log.go:172] (0xc000a0a420) Data frame received for 3\nI0512 14:15:11.274437 3108 log.go:172] (0xc00057e460) (3) Data frame handling\nI0512 14:15:11.275705 3108 log.go:172] (0xc000a0a420) Data frame received for 1\nI0512 14:15:11.275733 3108 log.go:172] (0xc0008426e0) (1) Data frame handling\nI0512 14:15:11.275758 3108 log.go:172] (0xc0008426e0) (1) Data frame sent\nI0512 14:15:11.275775 3108 log.go:172] (0xc000a0a420) (0xc0008426e0) Stream removed, broadcasting: 1\nI0512 14:15:11.276007 3108 log.go:172] (0xc000a0a420) Go away received\nI0512 14:15:11.276159 3108 log.go:172] (0xc000a0a420) (0xc0008426e0) Stream removed, broadcasting: 1\nI0512 14:15:11.276176 3108 log.go:172] (0xc000a0a420) (0xc00057e460) Stream removed, broadcasting: 3\nI0512 14:15:11.276185 3108 log.go:172] (0xc000a0a420) (0xc0008d0000) Stream removed, broadcasting: 5\n" May 12 14:15:11.280: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 14:15:11.280: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 14:15:11.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7482 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 14:15:11.473: INFO: stderr: "I0512 14:15:11.396434 3131 
log.go:172] (0xc00097e370) (0xc0008b66e0) Create stream\nI0512 14:15:11.396485 3131 log.go:172] (0xc00097e370) (0xc0008b66e0) Stream added, broadcasting: 1\nI0512 14:15:11.398299 3131 log.go:172] (0xc00097e370) Reply frame received for 1\nI0512 14:15:11.398329 3131 log.go:172] (0xc00097e370) (0xc0002fa140) Create stream\nI0512 14:15:11.398340 3131 log.go:172] (0xc00097e370) (0xc0002fa140) Stream added, broadcasting: 3\nI0512 14:15:11.399013 3131 log.go:172] (0xc00097e370) Reply frame received for 3\nI0512 14:15:11.399037 3131 log.go:172] (0xc00097e370) (0xc0002fa1e0) Create stream\nI0512 14:15:11.399049 3131 log.go:172] (0xc00097e370) (0xc0002fa1e0) Stream added, broadcasting: 5\nI0512 14:15:11.399706 3131 log.go:172] (0xc00097e370) Reply frame received for 5\nI0512 14:15:11.468474 3131 log.go:172] (0xc00097e370) Data frame received for 3\nI0512 14:15:11.468503 3131 log.go:172] (0xc0002fa140) (3) Data frame handling\nI0512 14:15:11.468514 3131 log.go:172] (0xc0002fa140) (3) Data frame sent\nI0512 14:15:11.468522 3131 log.go:172] (0xc00097e370) Data frame received for 3\nI0512 14:15:11.468529 3131 log.go:172] (0xc0002fa140) (3) Data frame handling\nI0512 14:15:11.468554 3131 log.go:172] (0xc00097e370) Data frame received for 5\nI0512 14:15:11.468566 3131 log.go:172] (0xc0002fa1e0) (5) Data frame handling\nI0512 14:15:11.468581 3131 log.go:172] (0xc0002fa1e0) (5) Data frame sent\nI0512 14:15:11.468590 3131 log.go:172] (0xc00097e370) Data frame received for 5\nI0512 14:15:11.468601 3131 log.go:172] (0xc0002fa1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0512 14:15:11.469931 3131 log.go:172] (0xc00097e370) Data frame received for 1\nI0512 14:15:11.469947 3131 log.go:172] (0xc0008b66e0) (1) Data frame handling\nI0512 14:15:11.469956 3131 log.go:172] (0xc0008b66e0) (1) Data frame sent\nI0512 14:15:11.469966 3131 log.go:172] (0xc00097e370) (0xc0008b66e0) Stream removed, broadcasting: 1\nI0512 14:15:11.469976 3131 log.go:172] (0xc00097e370) 
Go away received\nI0512 14:15:11.470270 3131 log.go:172] (0xc00097e370) (0xc0008b66e0) Stream removed, broadcasting: 1\nI0512 14:15:11.470289 3131 log.go:172] (0xc00097e370) (0xc0002fa140) Stream removed, broadcasting: 3\nI0512 14:15:11.470298 3131 log.go:172] (0xc00097e370) (0xc0002fa1e0) Stream removed, broadcasting: 5\n" May 12 14:15:11.473: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 14:15:11.473: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 14:15:11.473: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 12 14:15:41.487: INFO: Deleting all statefulset in ns statefulset-7482 May 12 14:15:41.491: INFO: Scaling statefulset ss to 0 May 12 14:15:41.500: INFO: Waiting for statefulset status.replicas updated to 0 May 12 14:15:41.503: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:15:41.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7482" for this suite. 
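The repeated `mv -v /usr/share/nginx/html/index.html /tmp/` exec commands above are how this test toggles pod readiness: moving the file out of the nginx web root makes the HTTP readiness probe fail (Ready=false), and moving it back restores Ready=true, which is what gates the scale up/down. A minimal local sketch of that round-trip (the /tmp/ss-demo path is illustrative; no cluster is involved):

```shell
# Sketch of the readiness toggle the test drives via `kubectl exec`:
# removing the probed file breaks readiness; restoring it heals it.
html=/tmp/ss-demo/html
mkdir -p "$html"
echo 'ok' > "$html/index.html"

# Break readiness: same mv the test runs inside ss-0
mv -v "$html/index.html" /tmp/ss-demo/ || true
[ -f "$html/index.html" ] || echo "probe target missing -> pod would report Ready=false"

# Restore readiness: the reverse mv the test runs before scaling
mv -v /tmp/ss-demo/index.html "$html/" || true
[ -f "$html/index.html" ] && echo "probe target restored -> pod would report Ready=true"
```

While any pod is unready, the log shows the controller refusing to move past the current replica count ("Verifying statefulset ss doesn't scale past 1/3"), which is the ordered, halt-on-unhealthy behavior being verified.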
May 12 14:15:47.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:15:47.586: INFO: namespace statefulset-7482 deletion completed in 6.067201683s • [SLOW TEST:98.507 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:15:47.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 14:15:47.770: INFO: Waiting up to 5m0s for pod "downwardapi-volume-755708fb-0c25-41dc-b635-054573669ad5" in namespace "downward-api-502" to be "success or failure" May 12 14:15:47.774: INFO: Pod 
"downwardapi-volume-755708fb-0c25-41dc-b635-054573669ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.529299ms May 12 14:15:49.855: INFO: Pod "downwardapi-volume-755708fb-0c25-41dc-b635-054573669ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085579969s May 12 14:15:51.858: INFO: Pod "downwardapi-volume-755708fb-0c25-41dc-b635-054573669ad5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088664101s STEP: Saw pod success May 12 14:15:51.858: INFO: Pod "downwardapi-volume-755708fb-0c25-41dc-b635-054573669ad5" satisfied condition "success or failure" May 12 14:15:51.861: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-755708fb-0c25-41dc-b635-054573669ad5 container client-container: STEP: delete the pod May 12 14:15:51.901: INFO: Waiting for pod downwardapi-volume-755708fb-0c25-41dc-b635-054573669ad5 to disappear May 12 14:15:51.934: INFO: Pod downwardapi-volume-755708fb-0c25-41dc-b635-054573669ad5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:15:51.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-502" for this suite. 
May 12 14:15:57.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:15:58.035: INFO: namespace downward-api-502 deletion completed in 6.096227382s • [SLOW TEST:10.448 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:15:58.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 12 14:15:58.100: INFO: Waiting up to 5m0s for pod "pod-cff3dbb3-07d0-412e-813c-768e77afbda9" in namespace "emptydir-143" to be "success or failure" May 12 14:15:58.105: INFO: Pod "pod-cff3dbb3-07d0-412e-813c-768e77afbda9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.727505ms May 12 14:16:00.177: INFO: Pod "pod-cff3dbb3-07d0-412e-813c-768e77afbda9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.076458991s May 12 14:16:02.180: INFO: Pod "pod-cff3dbb3-07d0-412e-813c-768e77afbda9": Phase="Running", Reason="", readiness=true. Elapsed: 4.079394493s May 12 14:16:04.184: INFO: Pod "pod-cff3dbb3-07d0-412e-813c-768e77afbda9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083260419s STEP: Saw pod success May 12 14:16:04.184: INFO: Pod "pod-cff3dbb3-07d0-412e-813c-768e77afbda9" satisfied condition "success or failure" May 12 14:16:04.187: INFO: Trying to get logs from node iruya-worker pod pod-cff3dbb3-07d0-412e-813c-768e77afbda9 container test-container: STEP: delete the pod May 12 14:16:04.208: INFO: Waiting for pod pod-cff3dbb3-07d0-412e-813c-768e77afbda9 to disappear May 12 14:16:04.212: INFO: Pod pod-cff3dbb3-07d0-412e-813c-768e77afbda9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:16:04.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-143" for this suite. 
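The (root,0777,tmpfs) case above writes into a memory-backed emptyDir and checks that the path carries 0777 permissions. The core permission check, sketched locally (directory name is illustrative, and the sketch skips the actual tmpfs mount, which would need privileges):

```shell
# The emptyDir test's essential assertion: a path with mode 0777.
d=/tmp/emptydir-demo
mkdir -p "$d"
chmod 0777 "$d"
stat -c '%a' "$d"   # prints: 777
```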
May 12 14:16:10.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:16:10.299: INFO: namespace emptydir-143 deletion completed in 6.083349248s • [SLOW TEST:12.263 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:16:10.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 14:16:10.416: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34fe6edb-214e-4955-8ebb-a979f4e0a79c" in namespace "downward-api-840" to be "success or failure" May 12 14:16:10.420: INFO: Pod "downwardapi-volume-34fe6edb-214e-4955-8ebb-a979f4e0a79c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.761615ms May 12 14:16:12.423: INFO: Pod "downwardapi-volume-34fe6edb-214e-4955-8ebb-a979f4e0a79c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007305113s May 12 14:16:14.460: INFO: Pod "downwardapi-volume-34fe6edb-214e-4955-8ebb-a979f4e0a79c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044462869s STEP: Saw pod success May 12 14:16:14.460: INFO: Pod "downwardapi-volume-34fe6edb-214e-4955-8ebb-a979f4e0a79c" satisfied condition "success or failure" May 12 14:16:14.463: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-34fe6edb-214e-4955-8ebb-a979f4e0a79c container client-container: STEP: delete the pod May 12 14:16:14.491: INFO: Waiting for pod downwardapi-volume-34fe6edb-214e-4955-8ebb-a979f4e0a79c to disappear May 12 14:16:14.970: INFO: Pod downwardapi-volume-34fe6edb-214e-4955-8ebb-a979f4e0a79c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:16:14.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-840" for this suite. 
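The memory-limit test above projects the container's `limits.memory` into a volume file; the downward API renders the quantity as a plain integer (bytes, or bytes divided by the fieldRef divisor). The conversion between the Mi notation and the projected number, for illustration (the 128Mi figure is assumed, not taken from this run):

```shell
# Downward API resource fields are rendered as plain integers.
# 128Mi -> bytes, and bytes -> Mi via a 1Mi divisor.
mi=128
bytes=$(( mi * 1024 * 1024 ))
echo "$bytes"                       # prints: 134217728
echo $(( bytes / (1024 * 1024) ))   # prints: 128
```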
May 12 14:16:21.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:16:21.113: INFO: namespace downward-api-840 deletion completed in 6.140760046s
• [SLOW TEST:10.814 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:16:21.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b406d8e9-e478-4ba3-8659-bfb690946af9
STEP: Creating a pod to test consume secrets
May 12 14:16:21.192: INFO: Waiting up to 5m0s for pod "pod-secrets-68a0329e-efde-431d-bb2c-89830a9e375a" in namespace "secrets-9783" to be "success or failure"
May 12 14:16:21.197: INFO: Pod "pod-secrets-68a0329e-efde-431d-bb2c-89830a9e375a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.648129ms
May 12 14:16:23.354: INFO: Pod "pod-secrets-68a0329e-efde-431d-bb2c-89830a9e375a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161498028s
May 12 14:16:25.370: INFO: Pod "pod-secrets-68a0329e-efde-431d-bb2c-89830a9e375a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.177970739s
STEP: Saw pod success
May 12 14:16:25.371: INFO: Pod "pod-secrets-68a0329e-efde-431d-bb2c-89830a9e375a" satisfied condition "success or failure"
May 12 14:16:25.372: INFO: Trying to get logs from node iruya-worker pod pod-secrets-68a0329e-efde-431d-bb2c-89830a9e375a container secret-env-test:
STEP: delete the pod
May 12 14:16:25.393: INFO: Waiting for pod pod-secrets-68a0329e-efde-431d-bb2c-89830a9e375a to disappear
May 12 14:16:25.442: INFO: Pod pod-secrets-68a0329e-efde-431d-bb2c-89830a9e375a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:16:25.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9783" for this suite.
May 12 14:16:31.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:16:31.560: INFO: namespace secrets-9783 deletion completed in 6.115360964s
• [SLOW TEST:10.446 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:16:31.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 12 14:16:31.666: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff3b3995-fdca-4efb-9b6b-a2acadcd5b0a" in namespace "projected-2535" to be "success or failure"
May 12 14:16:31.682: INFO: Pod "downwardapi-volume-ff3b3995-fdca-4efb-9b6b-a2acadcd5b0a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.175697ms
May 12 14:16:33.687: INFO: Pod "downwardapi-volume-ff3b3995-fdca-4efb-9b6b-a2acadcd5b0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020938456s
May 12 14:16:35.691: INFO: Pod "downwardapi-volume-ff3b3995-fdca-4efb-9b6b-a2acadcd5b0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024562215s
STEP: Saw pod success
May 12 14:16:35.691: INFO: Pod "downwardapi-volume-ff3b3995-fdca-4efb-9b6b-a2acadcd5b0a" satisfied condition "success or failure"
May 12 14:16:35.693: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ff3b3995-fdca-4efb-9b6b-a2acadcd5b0a container client-container:
STEP: delete the pod
May 12 14:16:35.850: INFO: Waiting for pod downwardapi-volume-ff3b3995-fdca-4efb-9b6b-a2acadcd5b0a to disappear
May 12 14:16:35.871: INFO: Pod downwardapi-volume-ff3b3995-fdca-4efb-9b6b-a2acadcd5b0a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:16:35.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2535" for this suite.
May 12 14:16:41.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:16:41.959: INFO: namespace projected-2535 deletion completed in 6.086004252s
• [SLOW TEST:10.400 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:16:41.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
May 12 14:16:42.124: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3828,SelfLink:/api/v1/namespaces/watch-3828/configmaps/e2e-watch-test-label-changed,UID:b9371efb-aa7e-4adb-be1c-6e5215f922ea,ResourceVersion:10496644,Generation:0,CreationTimestamp:2020-05-12 14:16:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 12 14:16:42.124: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3828,SelfLink:/api/v1/namespaces/watch-3828/configmaps/e2e-watch-test-label-changed,UID:b9371efb-aa7e-4adb-be1c-6e5215f922ea,ResourceVersion:10496646,Generation:0,CreationTimestamp:2020-05-12 14:16:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
May 12 14:16:42.124: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3828,SelfLink:/api/v1/namespaces/watch-3828/configmaps/e2e-watch-test-label-changed,UID:b9371efb-aa7e-4adb-be1c-6e5215f922ea,ResourceVersion:10496647,Generation:0,CreationTimestamp:2020-05-12 14:16:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
May 12 14:16:52.181: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3828,SelfLink:/api/v1/namespaces/watch-3828/configmaps/e2e-watch-test-label-changed,UID:b9371efb-aa7e-4adb-be1c-6e5215f922ea,ResourceVersion:10496668,Generation:0,CreationTimestamp:2020-05-12 14:16:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 12 14:16:52.181: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3828,SelfLink:/api/v1/namespaces/watch-3828/configmaps/e2e-watch-test-label-changed,UID:b9371efb-aa7e-4adb-be1c-6e5215f922ea,ResourceVersion:10496669,Generation:0,CreationTimestamp:2020-05-12 14:16:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
May 12 14:16:52.181: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3828,SelfLink:/api/v1/namespaces/watch-3828/configmaps/e2e-watch-test-label-changed,UID:b9371efb-aa7e-4adb-be1c-6e5215f922ea,ResourceVersion:10496670,Generation:0,CreationTimestamp:2020-05-12 14:16:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:16:52.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3828" for this suite.
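The ADDED/MODIFIED/DELETED sequence above is how a label-selector watch reports label changes on one object: leaving the selector surfaces as DELETED, re-entering as ADDED. A small sketch of that translation (a pure simulation, not client-go; the helper name and update tuples are invented for illustration):

```python
def selector_watch_events(updates, selector):
    """Map raw (verb, labels) updates on a single object to the events a watch
    filtered by `selector` would deliver, as exercised by the test above."""
    delivered = []
    was_matching = False
    for verb, labels in updates:
        exists = verb != "DELETED"
        matching = exists and all(labels.get(k) == v for k, v in selector.items())
        if matching and not was_matching:
            delivered.append("ADDED")       # (re)entered the selector
        elif matching:
            delivered.append("MODIFIED")
        elif was_matching:
            delivered.append("DELETED")     # left the selector, or was deleted
        was_matching = matching
    return delivered

sel = {"watch-this-configmap": "label-changed-and-restored"}
updates = [
    ("ADDED",    {"watch-this-configmap": "label-changed-and-restored"}),  # create
    ("MODIFIED", {"watch-this-configmap": "label-changed-and-restored"}),  # mutation 1
    ("MODIFIED", {"watch-this-configmap": "changed"}),                     # label changed away
    ("MODIFIED", {"watch-this-configmap": "changed"}),                     # second modification, unseen
    ("MODIFIED", {"watch-this-configmap": "label-changed-and-restored"}),  # label restored
    ("MODIFIED", {"watch-this-configmap": "label-changed-and-restored"}),  # mutation 3
    ("DELETED",  {"watch-this-configmap": "label-changed-and-restored"}),  # delete
]
print(selector_watch_events(updates, sel))
# -> ['ADDED', 'MODIFIED', 'DELETED', 'ADDED', 'MODIFIED', 'DELETED']
```

The printed list reproduces the two event triples the watch delivered in the log: one around the label change, one around the restore and final delete.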
May 12 14:16:58.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:16:58.271: INFO: namespace watch-3828 deletion completed in 6.066435814s
• [SLOW TEST:16.311 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:16:58.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 12 14:16:58.348: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:17:04.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1683" for this suite.
May 12 14:17:10.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:17:10.534: INFO: namespace init-container-1683 deletion completed in 6.090585936s
• [SLOW TEST:12.263 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:17:10.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:17:16.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8963" for this suite.
May 12 14:17:22.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:17:22.846: INFO: namespace namespaces-8963 deletion completed in 6.066459611s
STEP: Destroying namespace "nsdeletetest-9215" for this suite.
May 12 14:17:22.847: INFO: Namespace nsdeletetest-9215 was already deleted
STEP: Destroying namespace "nsdeletetest-6035" for this suite.
May 12 14:17:28.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:17:28.964: INFO: namespace nsdeletetest-6035 deletion completed in 6.116283698s
• [SLOW TEST:18.429 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:17:28.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 14:17:28.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:17:33.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1697" for this suite.
May 12 14:18:11.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:18:11.144: INFO: namespace pods-1697 deletion completed in 38.096653303s
• [SLOW TEST:42.180 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:18:11.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
May 12 14:18:11.245: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:18:22.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9885" for this suite.
May 12 14:18:28.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:18:28.309: INFO: namespace pods-9885 deletion completed in 6.11443124s
• [SLOW TEST:17.165 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:18:28.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9962
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
May 12 14:18:28.428: INFO: Found 0 stateful pods, waiting for 3
May 12 14:18:38.434: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 12 14:18:38.434: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 12 14:18:38.434: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 12 14:18:48.432: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 12 14:18:48.432: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 12 14:18:48.432: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
May 12 14:18:48.456: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 12 14:18:58.534: INFO: Updating stateful set ss2
May 12 14:18:58.594: INFO: Waiting for Pod statefulset-9962/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 12 14:19:08.603: INFO: Waiting for Pod statefulset-9962/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
May 12 14:19:19.195: INFO: Found 2 stateful pods, waiting for 3
May 12 14:19:29.200: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 12 14:19:29.200: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 12 14:19:29.200: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 12 14:19:29.223: INFO: Updating stateful set ss2
May 12 14:19:29.357: INFO: Waiting for Pod statefulset-9962/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 12 14:19:39.364: INFO: Waiting for Pod statefulset-9962/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 12 14:19:49.384: INFO: Updating stateful set ss2
May 12 14:19:49.453: INFO: Waiting for StatefulSet statefulset-9962/ss2 to complete update
May 12 14:19:49.453: INFO: Waiting for Pod statefulset-9962/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 12 14:19:59.459: INFO: Waiting for StatefulSet statefulset-9962/ss2 to complete update
May 12 14:19:59.459: INFO: Waiting for Pod statefulset-9962/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 12 14:20:09.460: INFO: Deleting all statefulset in ns statefulset-9962
May 12 14:20:09.463: INFO: Scaling statefulset ss2 to 0
May 12 14:20:39.479: INFO: Waiting for statefulset status.replicas updated to 0
May 12 14:20:39.482: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:20:39.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9962" for this suite.
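The canary and phased phases above are both driven by the StatefulSet `rollingUpdate.partition` field: only pods whose ordinal is greater than or equal to the partition are moved to the update revision. A toy model of that ordering rule, reusing the revision names from the log (an illustration, not the controller's code):

```python
def revisions_after_update(current, update_revision, partition):
    """Pods with ordinal >= partition carry the update revision; the rest stay
    on their current revision. A canary keeps the partition high (updating
    only the highest ordinals); a phased rollout lowers it step by step."""
    return [update_revision if ordinal >= partition else rev
            for ordinal, rev in enumerate(current)]

current = ["ss2-6c5cd755cd"] * 3                                  # ss2-0, ss2-1, ss2-2
canary = revisions_after_update(current, "ss2-7c9b54fd4c", partition=2)
print(canary)   # only ss2-2 moves to the update revision
full = revisions_after_update(canary, "ss2-7c9b54fd4c", partition=0)
print(full)     # phased rollout finished: every pod on the update revision
```

This is also why a partition greater than the replica count applies no update at all, as the "Not applying an update when the partition is greater than the number of replicas" step checks.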
May 12 14:20:47.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:20:47.605: INFO: namespace statefulset-9962 deletion completed in 8.090818242s
• [SLOW TEST:139.296 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Docker Containers
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:20:47.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
May 12 14:20:48.508: INFO: Waiting up to 5m0s for pod "client-containers-206c8c0f-0fd5-444a-8a46-e1446363b072" in namespace "containers-2258" to be "success or failure"
May 12 14:20:48.545: INFO: Pod "client-containers-206c8c0f-0fd5-444a-8a46-e1446363b072": Phase="Pending", Reason="", readiness=false. Elapsed: 36.882125ms
May 12 14:20:50.571: INFO: Pod "client-containers-206c8c0f-0fd5-444a-8a46-e1446363b072": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062600354s
May 12 14:20:52.667: INFO: Pod "client-containers-206c8c0f-0fd5-444a-8a46-e1446363b072": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158449274s
May 12 14:20:54.670: INFO: Pod "client-containers-206c8c0f-0fd5-444a-8a46-e1446363b072": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.161970039s
STEP: Saw pod success
May 12 14:20:54.670: INFO: Pod "client-containers-206c8c0f-0fd5-444a-8a46-e1446363b072" satisfied condition "success or failure"
May 12 14:20:54.673: INFO: Trying to get logs from node iruya-worker2 pod client-containers-206c8c0f-0fd5-444a-8a46-e1446363b072 container test-container:
STEP: delete the pod
May 12 14:20:54.747: INFO: Waiting for pod client-containers-206c8c0f-0fd5-444a-8a46-e1446363b072 to disappear
May 12 14:20:54.758: INFO: Pod client-containers-206c8c0f-0fd5-444a-8a46-e1446363b072 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:20:54.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2258" for this suite.
May 12 14:21:00.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:21:00.835: INFO: namespace containers-2258 deletion completed in 6.073750666s
• [SLOW TEST:13.229 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:21:00.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
May 12 14:21:01.276: INFO: Waiting up to 5m0s for pod "pod-ae45bae6-5e88-45e5-b704-3abc8cc0c940" in namespace "emptydir-2251" to be "success or failure"
May 12 14:21:01.280: INFO: Pod "pod-ae45bae6-5e88-45e5-b704-3abc8cc0c940": Phase="Pending", Reason="", readiness=false. Elapsed: 3.188862ms
May 12 14:21:03.619: INFO: Pod "pod-ae45bae6-5e88-45e5-b704-3abc8cc0c940": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342761858s
May 12 14:21:05.654: INFO: Pod "pod-ae45bae6-5e88-45e5-b704-3abc8cc0c940": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377410983s
May 12 14:21:07.658: INFO: Pod "pod-ae45bae6-5e88-45e5-b704-3abc8cc0c940": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.381538556s
STEP: Saw pod success
May 12 14:21:07.658: INFO: Pod "pod-ae45bae6-5e88-45e5-b704-3abc8cc0c940" satisfied condition "success or failure"
May 12 14:21:07.661: INFO: Trying to get logs from node iruya-worker pod pod-ae45bae6-5e88-45e5-b704-3abc8cc0c940 container test-container:
STEP: delete the pod
May 12 14:21:07.694: INFO: Waiting for pod pod-ae45bae6-5e88-45e5-b704-3abc8cc0c940 to disappear
May 12 14:21:07.717: INFO: Pod pod-ae45bae6-5e88-45e5-b704-3abc8cc0c940 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:21:07.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2251" for this suite.
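The (non-root,0666,default) case writes a file into the emptydir mount and asserts its permission bits. What "0666" means in the `ls -l` style the test's container prints can be checked locally (an ordinary temp file standing in for the mount; the helper name is illustrative):

```python
import os
import stat
import tempfile

def mode_string(path):
    """Render a file's permission bits ls-style, e.g. 0o666 -> '-rw-rw-rw-'."""
    return stat.filemode(os.stat(path).st_mode)

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)      # chmod sets the bits exactly; the umask only affects creation
print(mode_string(path))   # -> -rw-rw-rw-
os.remove(path)
```

The non-root part of the test name matters because 0666 grants read/write to "other", which is what lets a non-root container user access the file at all.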
May 12 14:21:13.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:21:13.850: INFO: namespace emptydir-2251 deletion completed in 6.109690975s
• [SLOW TEST:13.014 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:21:13.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 12 14:21:13.993: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
May 12 14:21:14.873: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 12 14:21:18.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890074, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890074, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890074, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890074, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 14:21:20.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890074, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890074, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890074, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890074, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 14:21:22.809: INFO: Waited 619.902454ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:21:25.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2712" for this suite. May 12 14:21:31.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:21:31.341: INFO: namespace aggregator-2712 deletion completed in 6.083709307s • [SLOW TEST:17.491 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:21:31.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run 
pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 14:21:31.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9388' May 12 14:21:31.760: INFO: stderr: "" May 12 14:21:31.760: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 May 12 14:21:31.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9388' May 12 14:21:34.952: INFO: stderr: "" May 12 14:21:34.952: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:21:34.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9388" for this suite. 
May 12 14:21:41.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:21:41.127: INFO: namespace kubectl-9388 deletion completed in 6.120055476s
• [SLOW TEST:9.786 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:21:41.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:21:45.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2935" for this suite.
May 12 14:22:35.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:22:35.354: INFO: namespace kubelet-test-2935 deletion completed in 50.114779335s
• [SLOW TEST:54.227 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:22:35.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 12 14:22:35.441: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75c3f3c2-652a-4572-8708-f62f98d17145" in namespace "projected-1297" to be "success or failure"
May 12 14:22:35.449: INFO: Pod "downwardapi-volume-75c3f3c2-652a-4572-8708-f62f98d17145": Phase="Pending", Reason="", readiness=false. Elapsed: 8.193496ms
May 12 14:22:37.453: INFO: Pod "downwardapi-volume-75c3f3c2-652a-4572-8708-f62f98d17145": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011938693s
May 12 14:22:39.456: INFO: Pod "downwardapi-volume-75c3f3c2-652a-4572-8708-f62f98d17145": Phase="Running", Reason="", readiness=true. Elapsed: 4.015124549s
May 12 14:22:41.461: INFO: Pod "downwardapi-volume-75c3f3c2-652a-4572-8708-f62f98d17145": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019911516s
STEP: Saw pod success
May 12 14:22:41.461: INFO: Pod "downwardapi-volume-75c3f3c2-652a-4572-8708-f62f98d17145" satisfied condition "success or failure"
May 12 14:22:41.464: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-75c3f3c2-652a-4572-8708-f62f98d17145 container client-container:
STEP: delete the pod
May 12 14:22:41.488: INFO: Waiting for pod downwardapi-volume-75c3f3c2-652a-4572-8708-f62f98d17145 to disappear
May 12 14:22:41.491: INFO: Pod downwardapi-volume-75c3f3c2-652a-4572-8708-f62f98d17145 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:22:41.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1297" for this suite.
May 12 14:22:47.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:22:48.099: INFO: namespace projected-1297 deletion completed in 6.603156414s
• [SLOW TEST:12.744 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:22:48.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4667
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4667
STEP: Creating statefulset with conflicting port in namespace statefulset-4667
STEP: Waiting until pod test-pod will start running in namespace statefulset-4667
STEP:
Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4667 May 12 14:22:52.647: INFO: Observed stateful pod in namespace: statefulset-4667, name: ss-0, uid: c54ce0f2-a66d-402d-b8e9-5cef16aa948e, status phase: Pending. Waiting for statefulset controller to delete. May 12 14:27:52.647: INFO: Pod ss-0 expected to be re-created at least once [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 12 14:27:52.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-4667' May 12 14:27:55.460: INFO: stderr: "" May 12 14:27:55.460: INFO: stdout: "Name: ss-0\nNamespace: statefulset-4667\nPriority: 0\nNode: iruya-worker/\nLabels: baz=blah\n controller-revision-hash=ss-5867494796\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: \nStatus: Pending\nIP: \nControlled By: StatefulSet/ss\nContainers:\n nginx:\n Image: docker.io/library/nginx:1.14-alpine\n Port: 21017/TCP\n Host Port: 21017/TCP\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-qs6sv (ro)\nVolumes:\n default-token-qs6sv:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-qs6sv\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning PodFitsHostPorts 5m3s kubelet, iruya-worker Predicate PodFitsHostPorts failed\n" May 12 14:27:55.460: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-4667 Priority: 0 Node: iruya-worker/ Labels: baz=blah controller-revision-hash=ss-5867494796 foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: Status: Pending IP: Controlled By: StatefulSet/ss Containers: nginx: 
Image: docker.io/library/nginx:1.14-alpine Port: 21017/TCP Host Port: 21017/TCP Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-qs6sv (ro) Volumes: default-token-qs6sv: Type: Secret (a volume populated by a Secret) SecretName: default-token-qs6sv Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning PodFitsHostPorts 5m3s kubelet, iruya-worker Predicate PodFitsHostPorts failed May 12 14:27:55.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-4667 --tail=100' May 12 14:27:55.562: INFO: rc: 1 May 12 14:27:55.562: INFO: Last 100 log lines of ss-0: May 12 14:27:55.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-4667' May 12 14:27:55.669: INFO: stderr: "" May 12 14:27:55.669: INFO: stdout: "Name: test-pod\nNamespace: statefulset-4667\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Tue, 12 May 2020 14:22:48 +0000\nLabels: \nAnnotations: \nStatus: Running\nIP: 10.244.2.133\nContainers:\n nginx:\n Container ID: containerd://f7ecc5c5c135b773c0c3b90887526554b1c8c9a07146cf7ef9ede3a87d139440\n Image: docker.io/library/nginx:1.14-alpine\n Image ID: docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Tue, 12 May 2020 14:22:51 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-qs6sv (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-qs6sv:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-qs6sv\n Optional: false\nQoS Class: 
BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulled 5m6s kubelet, iruya-worker Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n Normal Created 5m4s kubelet, iruya-worker Created container nginx\n Normal Started 5m4s kubelet, iruya-worker Started container nginx\n" May 12 14:27:55.669: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-4667 Priority: 0 Node: iruya-worker/172.17.0.6 Start Time: Tue, 12 May 2020 14:22:48 +0000 Labels: Annotations: Status: Running IP: 10.244.2.133 Containers: nginx: Container ID: containerd://f7ecc5c5c135b773c0c3b90887526554b1c8c9a07146cf7ef9ede3a87d139440 Image: docker.io/library/nginx:1.14-alpine Image ID: docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Tue, 12 May 2020 14:22:51 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-qs6sv (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-qs6sv: Type: Secret (a volume populated by a Secret) SecretName: default-token-qs6sv Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 5m6s kubelet, iruya-worker Container image "docker.io/library/nginx:1.14-alpine" already present on machine Normal Created 5m4s kubelet, iruya-worker Created container nginx Normal Started 5m4s kubelet, iruya-worker Started container nginx May 12 14:27:55.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod 
--namespace=statefulset-4667 --tail=100' May 12 14:27:55.774: INFO: stderr: "" May 12 14:27:55.774: INFO: stdout: "" May 12 14:27:55.774: INFO: Last 100 log lines of test-pod: May 12 14:27:55.774: INFO: Deleting all statefulset in ns statefulset-4667 May 12 14:27:55.776: INFO: Scaling statefulset ss to 0 May 12 14:28:05.786: INFO: Waiting for statefulset status.replicas updated to 0 May 12 14:28:05.788: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Collecting events from namespace "statefulset-4667". STEP: Found 14 events. May 12 14:28:05.807: INFO: At 2020-05-12 14:22:48 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful May 12 14:28:05.807: INFO: At 2020-05-12 14:22:48 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful May 12 14:28:05.807: INFO: At 2020-05-12 14:22:48 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-4667/ss is recreating failed Pod ss-0 May 12 14:28:05.807: INFO: At 2020-05-12 14:22:48 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed May 12 14:28:05.807: INFO: At 2020-05-12 14:22:48 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed May 12 14:28:05.807: INFO: At 2020-05-12 14:22:48 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed May 12 14:28:05.807: INFO: At 2020-05-12 14:22:49 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again. 
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:49 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed May 12 14:28:05.807: INFO: At 2020-05-12 14:22:49 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed May 12 14:28:05.807: INFO: At 2020-05-12 14:22:49 +0000 UTC - event for test-pod: {kubelet iruya-worker} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine May 12 14:28:05.807: INFO: At 2020-05-12 14:22:51 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed May 12 14:28:05.807: INFO: At 2020-05-12 14:22:51 +0000 UTC - event for test-pod: {kubelet iruya-worker} Created: Created container nginx May 12 14:28:05.807: INFO: At 2020-05-12 14:22:51 +0000 UTC - event for test-pod: {kubelet iruya-worker} Started: Started container nginx May 12 14:28:05.807: INFO: At 2020-05-12 14:22:52 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed May 12 14:28:05.809: INFO: POD NODE PHASE GRACE CONDITIONS May 12 14:28:05.809: INFO: test-pod iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:22:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:22:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:22:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:22:48 +0000 UTC }] May 12 14:28:05.809: INFO: May 12 14:28:05.815: INFO: Logging node info for node iruya-control-plane May 12 14:28:05.817: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-control-plane,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-control-plane,UID:5b69a0f9-55ac-48be-a8d0-5e04b939b798,ResourceVersion:10498503,Generation:0,CreationTimestamp:2020-03-15 18:24:20 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-control-plane,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-12 14:27:09 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-12 14:27:09 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-12 14:27:09 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-12 14:27:09 +0000 UTC 2020-03-15 18:25:00 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.7} {Hostname 
iruya-control-plane}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09f14f6f4d1640fcaab2243401c9f154,SystemUUID:7c6ca533-492e-400c-b058-c282f97a69ec,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[k8s.gcr.io/pause:3.1] 746479}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 12 14:28:05.817: INFO: Logging kubelet events for node iruya-control-plane May 12 14:28:05.819: INFO: Logging pods the kubelet thinks is on node iruya-control-plane May 12 14:28:05.825: INFO: local-path-provisioner-d4947b89c-72frh started at 2020-03-15 18:25:04 +0000 UTC (0+1 container statuses recorded) May 12 14:28:05.825: INFO: Container local-path-provisioner ready: true, restart count 0 May 12 14:28:05.825: INFO: kube-apiserver-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) May 12 14:28:05.825: INFO: Container kube-apiserver ready: true, restart count 0 May 12 14:28:05.825: INFO: kube-controller-manager-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) May 12 14:28:05.825: INFO: Container kube-controller-manager ready: true, restart count 0 May 12 14:28:05.826: INFO: kube-scheduler-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) May 12 14:28:05.826: INFO: Container 
kube-scheduler ready: true, restart count 0 May 12 14:28:05.826: INFO: etcd-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) May 12 14:28:05.826: INFO: Container etcd ready: true, restart count 0 May 12 14:28:05.826: INFO: kindnet-zn8sx started at 2020-03-15 18:24:40 +0000 UTC (0+1 container statuses recorded) May 12 14:28:05.826: INFO: Container kindnet-cni ready: true, restart count 0 May 12 14:28:05.826: INFO: kube-proxy-46nsr started at 2020-03-15 18:24:40 +0000 UTC (0+1 container statuses recorded) May 12 14:28:05.826: INFO: Container kube-proxy ready: true, restart count 0 W0512 14:28:05.828587 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 14:28:05.900: INFO: Latency metrics for node iruya-control-plane May 12 14:28:05.900: INFO: Logging node info for node iruya-worker May 12 14:28:05.903: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-worker,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-worker,UID:94e58020-6063-4274-b0bd-d7c4f772701c,ResourceVersion:10498532,Generation:0,CreationTimestamp:2020-03-15 18:24:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-worker,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-12 14:27:23 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-12 14:27:23 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-12 14:27:23 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-12 14:27:23 +0000 UTC 2020-03-15 18:25:15 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.6} {Hostname iruya-worker}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5332b21b7d0c4f35b2434f4fc8bea1cf,SystemUUID:92e1ff09-3c3c-490b-b499-0de27dc489ae,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} 
{[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 16222606} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 
gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 2258365} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 12 14:28:05.903: INFO: Logging kubelet events for node iruya-worker May 12 14:28:05.906: INFO: Logging pods the kubelet thinks is on node iruya-worker May 12 14:28:05.911: INFO: kube-proxy-pmz4p started at 2020-03-15 18:24:55 +0000 UTC (0+1 container statuses recorded) May 12 
14:28:05.911: INFO: Container kube-proxy ready: true, restart count 0 May 12 14:28:05.911: INFO: kindnet-gwz5g started at 2020-03-15 18:24:55 +0000 UTC (0+1 container statuses recorded) May 12 14:28:05.911: INFO: Container kindnet-cni ready: true, restart count 0 May 12 14:28:05.911: INFO: test-pod started at 2020-05-12 14:22:48 +0000 UTC (0+1 container statuses recorded) May 12 14:28:05.911: INFO: Container nginx ready: true, restart count 0 W0512 14:28:05.914519 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 14:28:05.960: INFO: Latency metrics for node iruya-worker May 12 14:28:05.960: INFO: Logging node info for node iruya-worker2 May 12 14:28:05.963: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-worker2,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-worker2,UID:67dfdf76-d64a-45cb-a2a9-755b73c85644,ResourceVersion:10498528,Generation:0,CreationTimestamp:2020-03-15 18:24:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-worker2,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-12 14:27:21 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-12 14:27:21 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-12 14:27:21 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-12 14:27:21 +0000 UTC 2020-03-15 18:24:52 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.5} {Hostname iruya-worker2}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5fda03f0d02548b7a74f8a4b6cc8795b,SystemUUID:d8b7a3a5-76b4-4c0b-85d7-cdb97f2c8b1a,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} 
{[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 16222606} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 
2258365} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 12 14:28:05.964: INFO: Logging kubelet events for node iruya-worker2 May 12 14:28:05.966: INFO: Logging pods the kubelet thinks is on node iruya-worker2 May 12 14:28:05.971: INFO: coredns-5d4dd4b4db-gm7vr started at 2020-03-15 18:24:52 +0000 UTC (0+1 container statuses recorded) May 12 14:28:05.971: INFO: Container coredns ready: true, restart count 0 May 12 14:28:05.971: INFO: coredns-5d4dd4b4db-6jcgz started at 2020-03-15 18:24:54 +0000 UTC (0+1 container statuses recorded) May 12 14:28:05.971: INFO: 
Container coredns ready: true, restart count 0 May 12 14:28:05.971: INFO: kube-proxy-vwbcj started at 2020-03-15 18:24:42 +0000 UTC (0+1 container statuses recorded) May 12 14:28:05.971: INFO: Container kube-proxy ready: true, restart count 0 May 12 14:28:05.971: INFO: kindnet-mgd8b started at 2020-03-15 18:24:43 +0000 UTC (0+1 container statuses recorded) May 12 14:28:05.971: INFO: Container kindnet-cni ready: true, restart count 0 W0512 14:28:05.974836 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 14:28:06.061: INFO: Latency metrics for node iruya-worker2 May 12 14:28:06.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4667" for this suite. May 12 14:28:28.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:28:28.141: INFO: namespace statefulset-4667 deletion completed in 22.076841526s • Failure [340.042 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 14:27:52.647: Pod ss-0 expected to be re-created at least once /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:28:28.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 14:28:28.226: INFO: Creating deployment "test-recreate-deployment" May 12 14:28:28.244: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 12 14:28:28.255: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 12 14:28:30.264: INFO: Waiting deployment "test-recreate-deployment" to complete May 12 14:28:30.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890508, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890508, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890508, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890508, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 14:28:32.272: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890508, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890508, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890508, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724890508, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 14:28:34.271: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 12 14:28:34.277: INFO: Updating deployment test-recreate-deployment May 12 14:28:34.277: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 12 14:28:34.541: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-9399,SelfLink:/apis/apps/v1/namespaces/deployment-9399/deployments/test-recreate-deployment,UID:2835d071-5832-4f5c-8fcb-59e5c0041a1c,ResourceVersion:10498747,Generation:2,CreationTimestamp:2020-05-12 14:28:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-12 14:28:34 +0000 UTC 2020-05-12 14:28:34 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-12 14:28:34 +0000 UTC 2020-05-12 14:28:28 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 12 14:28:34.555: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-9399,SelfLink:/apis/apps/v1/namespaces/deployment-9399/replicasets/test-recreate-deployment-5c8c9cc69d,UID:da1fa687-5e59-425b-afc5-c1b5b5740a53,ResourceVersion:10498746,Generation:1,CreationTimestamp:2020-05-12 14:28:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 2835d071-5832-4f5c-8fcb-59e5c0041a1c 0xc00343e7c7 0xc00343e7c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 14:28:34.555: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 12 14:28:34.556: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-9399,SelfLink:/apis/apps/v1/namespaces/deployment-9399/replicasets/test-recreate-deployment-6df85df6b9,UID:cf92d71f-d92e-4d27-92e1-49c93607b2e3,ResourceVersion:10498736,Generation:2,CreationTimestamp:2020-05-12 14:28:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 2835d071-5832-4f5c-8fcb-59e5c0041a1c 0xc00343e897 0xc00343e898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 14:28:34.558: INFO: Pod "test-recreate-deployment-5c8c9cc69d-m47gt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-m47gt,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-9399,SelfLink:/api/v1/namespaces/deployment-9399/pods/test-recreate-deployment-5c8c9cc69d-m47gt,UID:3efd6f1a-38be-402f-8a1e-4f96c0993bc8,ResourceVersion:10498748,Generation:0,CreationTimestamp:2020-05-12 14:28:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d da1fa687-5e59-425b-afc5-c1b5b5740a53 0xc002eff447 0xc002eff448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-82868 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-82868,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-82868 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eff4c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eff4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:28:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:28:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:28:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:28:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-12 14:28:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:28:34.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9399" for this suite. 
May 12 14:28:40.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:28:41.079: INFO: namespace deployment-9399 deletion completed in 6.518367612s • [SLOW TEST:12.939 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:28:41.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 12 14:28:42.247: INFO: Pod name wrapped-volume-race-4aae826e-7cc8-46ba-ad65-922438c398a6: Found 0 pods out of 5 May 12 14:28:47.255: INFO: Pod name wrapped-volume-race-4aae826e-7cc8-46ba-ad65-922438c398a6: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4aae826e-7cc8-46ba-ad65-922438c398a6 in namespace emptydir-wrapper-605, will wait for the garbage collector to delete the pods May 12 14:29:03.718: INFO: Deleting ReplicationController 
wrapped-volume-race-4aae826e-7cc8-46ba-ad65-922438c398a6 took: 56.333966ms May 12 14:29:04.019: INFO: Terminating ReplicationController wrapped-volume-race-4aae826e-7cc8-46ba-ad65-922438c398a6 pods took: 300.280161ms STEP: Creating RC which spawns configmap-volume pods May 12 14:29:42.578: INFO: Pod name wrapped-volume-race-a72b97b3-fba3-4893-bf91-1ae731376edd: Found 0 pods out of 5 May 12 14:29:47.616: INFO: Pod name wrapped-volume-race-a72b97b3-fba3-4893-bf91-1ae731376edd: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a72b97b3-fba3-4893-bf91-1ae731376edd in namespace emptydir-wrapper-605, will wait for the garbage collector to delete the pods May 12 14:30:03.697: INFO: Deleting ReplicationController wrapped-volume-race-a72b97b3-fba3-4893-bf91-1ae731376edd took: 7.310831ms May 12 14:30:03.997: INFO: Terminating ReplicationController wrapped-volume-race-a72b97b3-fba3-4893-bf91-1ae731376edd pods took: 300.304575ms STEP: Creating RC which spawns configmap-volume pods May 12 14:30:43.248: INFO: Pod name wrapped-volume-race-2fff96b7-0b74-407b-9fe5-f456be430910: Found 0 pods out of 5 May 12 14:30:48.258: INFO: Pod name wrapped-volume-race-2fff96b7-0b74-407b-9fe5-f456be430910: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2fff96b7-0b74-407b-9fe5-f456be430910 in namespace emptydir-wrapper-605, will wait for the garbage collector to delete the pods May 12 14:31:05.179: INFO: Deleting ReplicationController wrapped-volume-race-2fff96b7-0b74-407b-9fe5-f456be430910 took: 77.159131ms May 12 14:31:05.880: INFO: Terminating ReplicationController wrapped-volume-race-2fff96b7-0b74-407b-9fe5-f456be430910 pods took: 700.224933ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:31:53.450: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-605" for this suite. May 12 14:32:03.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:32:03.528: INFO: namespace emptydir-wrapper-605 deletion completed in 10.073249149s • [SLOW TEST:202.448 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:32:03.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:32:09.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5007" for this suite. 
May 12 14:32:49.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:32:49.777: INFO: namespace kubelet-test-5007 deletion completed in 40.068255306s
• [SLOW TEST:46.249 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:32:49.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-b44c6c46-7494-4dec-a8ef-aa95b007f3ac
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:32:49.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5350" for this suite.
May 12 14:32:56.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:32:56.176: INFO: namespace secrets-5350 deletion completed in 6.164410758s
• [SLOW TEST:6.398 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:32:56.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
May 12 14:32:56.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-266'
May 12 14:32:56.927: INFO: stderr: ""
May 12 14:32:56.927: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 12 14:32:57.931: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:32:57.931: INFO: Found 0 / 1
May 12 14:32:58.932: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:32:58.932: INFO: Found 0 / 1
May 12 14:32:59.932: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:32:59.932: INFO: Found 0 / 1
May 12 14:33:00.931: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:33:00.931: INFO: Found 0 / 1
May 12 14:33:01.930: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:33:01.930: INFO: Found 0 / 1
May 12 14:33:02.930: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:33:02.930: INFO: Found 1 / 1
May 12 14:33:02.930: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
May 12 14:33:02.932: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:33:02.932: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 12 14:33:02.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-tt2ht --namespace=kubectl-266 -p {"metadata":{"annotations":{"x":"y"}}}'
May 12 14:33:03.024: INFO: stderr: ""
May 12 14:33:03.024: INFO: stdout: "pod/redis-master-tt2ht patched\n"
STEP: checking annotations
May 12 14:33:03.064: INFO: Selector matched 1 pods for map[app:redis]
May 12 14:33:03.064: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:33:03.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-266" for this suite.
May 12 14:33:25.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:33:25.191: INFO: namespace kubectl-266 deletion completed in 22.12391586s
• [SLOW TEST:29.014 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:33:25.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
May 12 14:33:29.553: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
May 12 14:33:44.652: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:33:44.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4666" for this suite.
May 12 14:33:50.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:33:50.766: INFO: namespace pods-4666 deletion completed in 6.109350297s
• [SLOW TEST:25.575 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:33:50.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 12 14:33:51.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0aba3a26-ffd8-4f5c-be8e-7d33c3061377" in namespace "projected-8445" to be "success or failure"
May 12 14:33:51.240: INFO: Pod "downwardapi-volume-0aba3a26-ffd8-4f5c-be8e-7d33c3061377": Phase="Pending", Reason="", readiness=false. Elapsed: 27.307374ms
May 12 14:33:53.245: INFO: Pod "downwardapi-volume-0aba3a26-ffd8-4f5c-be8e-7d33c3061377": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031782618s
May 12 14:33:55.248: INFO: Pod "downwardapi-volume-0aba3a26-ffd8-4f5c-be8e-7d33c3061377": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035365943s
May 12 14:33:57.252: INFO: Pod "downwardapi-volume-0aba3a26-ffd8-4f5c-be8e-7d33c3061377": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039096557s
STEP: Saw pod success
May 12 14:33:57.252: INFO: Pod "downwardapi-volume-0aba3a26-ffd8-4f5c-be8e-7d33c3061377" satisfied condition "success or failure"
May 12 14:33:57.254: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-0aba3a26-ffd8-4f5c-be8e-7d33c3061377 container client-container:
STEP: delete the pod
May 12 14:33:57.290: INFO: Waiting for pod downwardapi-volume-0aba3a26-ffd8-4f5c-be8e-7d33c3061377 to disappear
May 12 14:33:57.586: INFO: Pod downwardapi-volume-0aba3a26-ffd8-4f5c-be8e-7d33c3061377 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:33:57.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8445" for this suite.
May 12 14:34:03.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:34:03.763: INFO: namespace projected-8445 deletion completed in 6.169762834s
• [SLOW TEST:12.996 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:34:03.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-77ac90e1-948e-4c36-aa97-29363eac03cb
STEP: Creating a pod to test consume secrets
May 12 14:34:04.411: INFO: Waiting up to 5m0s for pod "pod-secrets-0d142303-ea06-4c60-bcd8-ff101098861e" in namespace "secrets-780" to be "success or failure"
May 12 14:34:04.438: INFO: Pod "pod-secrets-0d142303-ea06-4c60-bcd8-ff101098861e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.47813ms
May 12 14:34:06.503: INFO: Pod "pod-secrets-0d142303-ea06-4c60-bcd8-ff101098861e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092464466s
May 12 14:34:08.551: INFO: Pod "pod-secrets-0d142303-ea06-4c60-bcd8-ff101098861e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139957049s
May 12 14:34:10.554: INFO: Pod "pod-secrets-0d142303-ea06-4c60-bcd8-ff101098861e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.142921408s
STEP: Saw pod success
May 12 14:34:10.554: INFO: Pod "pod-secrets-0d142303-ea06-4c60-bcd8-ff101098861e" satisfied condition "success or failure"
May 12 14:34:10.556: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-0d142303-ea06-4c60-bcd8-ff101098861e container secret-volume-test:
STEP: delete the pod
May 12 14:34:10.609: INFO: Waiting for pod pod-secrets-0d142303-ea06-4c60-bcd8-ff101098861e to disappear
May 12 14:34:10.636: INFO: Pod pod-secrets-0d142303-ea06-4c60-bcd8-ff101098861e no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:34:10.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-780" for this suite.
May 12 14:34:16.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:34:16.747: INFO: namespace secrets-780 deletion completed in 6.108720895s
STEP: Destroying namespace "secret-namespace-4416" for this suite.
May 12 14:34:22.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:34:22.847: INFO: namespace secret-namespace-4416 deletion completed in 6.100071696s
• [SLOW TEST:19.084 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:34:22.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-4ww5
STEP: Creating a pod to test atomic-volume-subpath
May 12 14:34:23.014: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4ww5" in namespace "subpath-6685" to be "success or failure"
May 12 14:34:23.031: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.511023ms
May 12 14:34:25.072: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058001743s
May 12 14:34:27.075: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Running", Reason="", readiness=true. Elapsed: 4.060930237s
May 12 14:34:29.079: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Running", Reason="", readiness=true. Elapsed: 6.064723873s
May 12 14:34:31.083: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Running", Reason="", readiness=true. Elapsed: 8.069189483s
May 12 14:34:33.086: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Running", Reason="", readiness=true. Elapsed: 10.072022527s
May 12 14:34:35.090: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Running", Reason="", readiness=true. Elapsed: 12.076106504s
May 12 14:34:37.094: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Running", Reason="", readiness=true. Elapsed: 14.07991579s
May 12 14:34:39.098: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Running", Reason="", readiness=true. Elapsed: 16.083808373s
May 12 14:34:41.102: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Running", Reason="", readiness=true. Elapsed: 18.087768725s
May 12 14:34:43.106: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Running", Reason="", readiness=true. Elapsed: 20.091674035s
May 12 14:34:45.109: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Running", Reason="", readiness=true. Elapsed: 22.0954503s
May 12 14:34:47.113: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Running", Reason="", readiness=true. Elapsed: 24.098901176s
May 12 14:34:49.116: INFO: Pod "pod-subpath-test-secret-4ww5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.102147649s
STEP: Saw pod success
May 12 14:34:49.116: INFO: Pod "pod-subpath-test-secret-4ww5" satisfied condition "success or failure"
May 12 14:34:49.119: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-4ww5 container test-container-subpath-secret-4ww5:
STEP: delete the pod
May 12 14:34:49.152: INFO: Waiting for pod pod-subpath-test-secret-4ww5 to disappear
May 12 14:34:49.191: INFO: Pod pod-subpath-test-secret-4ww5 no longer exists
STEP: Deleting pod pod-subpath-test-secret-4ww5
May 12 14:34:49.191: INFO: Deleting pod "pod-subpath-test-secret-4ww5" in namespace "subpath-6685"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:34:49.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6685" for this suite.
May 12 14:34:55.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:34:55.261: INFO: namespace subpath-6685 deletion completed in 6.064548324s
• [SLOW TEST:32.413 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:34:55.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
May 12 14:34:55.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1610 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
May 12 14:34:59.227: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0512 14:34:59.177636 3349 log.go:172] (0xc000a8a210) (0xc0006c2820) Create stream\nI0512 14:34:59.177689 3349 log.go:172] (0xc000a8a210) (0xc0006c2820) Stream added, broadcasting: 1\nI0512 14:34:59.179506 3349 log.go:172] (0xc000a8a210) Reply frame received for 1\nI0512 14:34:59.179547 3349 log.go:172] (0xc000a8a210) (0xc000354000) Create stream\nI0512 14:34:59.179563 3349 log.go:172] (0xc000a8a210) (0xc000354000) Stream added, broadcasting: 3\nI0512 14:34:59.180518 3349 log.go:172] (0xc000a8a210) Reply frame received for 3\nI0512 14:34:59.180563 3349 log.go:172] (0xc000a8a210) (0xc00035c000) Create stream\nI0512 14:34:59.180579 3349 log.go:172] (0xc000a8a210) (0xc00035c000) Stream added, broadcasting: 5\nI0512 14:34:59.181586 3349 log.go:172] (0xc000a8a210) Reply frame received for 5\nI0512 14:34:59.181626 3349 log.go:172] (0xc000a8a210) (0xc00035c0a0) Create stream\nI0512 14:34:59.181638 3349 log.go:172] (0xc000a8a210) (0xc00035c0a0) Stream added, broadcasting: 7\nI0512 14:34:59.182577 3349 log.go:172] (0xc000a8a210) Reply frame received for 7\nI0512 14:34:59.182685 3349 log.go:172] (0xc000354000) (3) Writing data frame\nI0512 14:34:59.182745 3349 log.go:172] (0xc000354000) (3) Writing data frame\nI0512 14:34:59.183694 3349 log.go:172] (0xc000a8a210) Data frame received for 5\nI0512 14:34:59.183729 3349 log.go:172] (0xc00035c000) (5) Data frame handling\nI0512 14:34:59.183764 3349 log.go:172] (0xc00035c000) (5) Data frame sent\nI0512 14:34:59.184093 3349 log.go:172] (0xc000a8a210) Data frame received for 5\nI0512 14:34:59.184104 3349 log.go:172] (0xc00035c000) (5) Data frame handling\nI0512 14:34:59.184110 3349 log.go:172] (0xc00035c000) (5) Data frame sent\nI0512 14:34:59.214631 3349 log.go:172] (0xc000a8a210) Data frame received for 7\nI0512 14:34:59.214661 3349 log.go:172] (0xc00035c0a0) (7) Data frame handling\nI0512 14:34:59.214694 3349 log.go:172] (0xc000a8a210) Data frame received for 5\nI0512 14:34:59.214711 3349 log.go:172] (0xc00035c000) (5) Data frame handling\nI0512 14:34:59.214799 3349 log.go:172] (0xc000a8a210) Data frame received for 1\nI0512 14:34:59.214814 3349 log.go:172] (0xc0006c2820) (1) Data frame handling\nI0512 14:34:59.214832 3349 log.go:172] (0xc0006c2820) (1) Data frame sent\nI0512 14:34:59.214848 3349 log.go:172] (0xc000a8a210) (0xc0006c2820) Stream removed, broadcasting: 1\nI0512 14:34:59.214921 3349 log.go:172] (0xc000a8a210) (0xc0006c2820) Stream removed, broadcasting: 1\nI0512 14:34:59.214936 3349 log.go:172] (0xc000a8a210) (0xc000354000) Stream removed, broadcasting: 3\nI0512 14:34:59.214953 3349 log.go:172] (0xc000a8a210) (0xc00035c000) Stream removed, broadcasting: 5\nI0512 14:34:59.215028 3349 log.go:172] (0xc000a8a210) Go away received\nI0512 14:34:59.215066 3349 log.go:172] (0xc000a8a210) (0xc00035c0a0) Stream removed, broadcasting: 7\n"
May 12 14:34:59.227: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:35:01.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1610" for this suite.
May 12 14:35:13.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:35:13.323: INFO: namespace kubectl-1610 deletion completed in 12.089838392s
• [SLOW TEST:18.062 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:35:13.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 12 14:35:13.466: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ea74bab-18f1-47a6-84d2-1c529506ea70" in namespace "downward-api-2272" to be "success or failure"
May 12 14:35:13.563: INFO: Pod "downwardapi-volume-9ea74bab-18f1-47a6-84d2-1c529506ea70": Phase="Pending", Reason="", readiness=false. Elapsed: 97.479993ms
May 12 14:35:15.567: INFO: Pod "downwardapi-volume-9ea74bab-18f1-47a6-84d2-1c529506ea70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101644281s
May 12 14:35:17.571: INFO: Pod "downwardapi-volume-9ea74bab-18f1-47a6-84d2-1c529506ea70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105628074s
STEP: Saw pod success
May 12 14:35:17.571: INFO: Pod "downwardapi-volume-9ea74bab-18f1-47a6-84d2-1c529506ea70" satisfied condition "success or failure"
May 12 14:35:17.574: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9ea74bab-18f1-47a6-84d2-1c529506ea70 container client-container:
STEP: delete the pod
May 12 14:35:17.597: INFO: Waiting for pod downwardapi-volume-9ea74bab-18f1-47a6-84d2-1c529506ea70 to disappear
May 12 14:35:17.616: INFO: Pod downwardapi-volume-9ea74bab-18f1-47a6-84d2-1c529506ea70 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:35:17.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2272" for this suite.
May 12 14:35:23.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:35:23.713: INFO: namespace downward-api-2272 deletion completed in 6.094277445s
• [SLOW TEST:10.390 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:35:23.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3218.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3218.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3218.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3218.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3218.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3218.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 14:35:31.912: INFO: DNS probes using dns-3218/dns-test-aa28c3a2-5944-4a75-a994-f6e4865c977d succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:35:31.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3218" for this suite.
May 12 14:35:37.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:35:38.074: INFO: namespace dns-3218 deletion completed in 6.124941272s
• [SLOW TEST:14.361 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:35:38.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2974.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2974.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2974.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2974.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 14:35:46.200: INFO: DNS probes using dns-test-7d5f4637-7726-4fd4-a2cb-fd36ad115ae7 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2974.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2974.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2974.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2974.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 14:35:54.376: INFO: File wheezy_udp@dns-test-service-3.dns-2974.svc.cluster.local from pod dns-2974/dns-test-86cc6c18-7a30-46a5-8acb-d638172afd2d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 14:35:54.379: INFO: File jessie_udp@dns-test-service-3.dns-2974.svc.cluster.local from pod dns-2974/dns-test-86cc6c18-7a30-46a5-8acb-d638172afd2d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 14:35:54.379: INFO: Lookups using dns-2974/dns-test-86cc6c18-7a30-46a5-8acb-d638172afd2d failed for: [wheezy_udp@dns-test-service-3.dns-2974.svc.cluster.local jessie_udp@dns-test-service-3.dns-2974.svc.cluster.local]
May 12 14:35:59.383: INFO: File wheezy_udp@dns-test-service-3.dns-2974.svc.cluster.local from pod dns-2974/dns-test-86cc6c18-7a30-46a5-8acb-d638172afd2d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 14:35:59.385: INFO: File jessie_udp@dns-test-service-3.dns-2974.svc.cluster.local from pod dns-2974/dns-test-86cc6c18-7a30-46a5-8acb-d638172afd2d contains '' instead of 'bar.example.com.'
May 12 14:35:59.385: INFO: Lookups using dns-2974/dns-test-86cc6c18-7a30-46a5-8acb-d638172afd2d failed for: [wheezy_udp@dns-test-service-3.dns-2974.svc.cluster.local jessie_udp@dns-test-service-3.dns-2974.svc.cluster.local]
May 12 14:36:04.403: INFO: File wheezy_udp@dns-test-service-3.dns-2974.svc.cluster.local from pod dns-2974/dns-test-86cc6c18-7a30-46a5-8acb-d638172afd2d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 14:36:04.406: INFO: File jessie_udp@dns-test-service-3.dns-2974.svc.cluster.local from pod dns-2974/dns-test-86cc6c18-7a30-46a5-8acb-d638172afd2d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 14:36:04.406: INFO: Lookups using dns-2974/dns-test-86cc6c18-7a30-46a5-8acb-d638172afd2d failed for: [wheezy_udp@dns-test-service-3.dns-2974.svc.cluster.local jessie_udp@dns-test-service-3.dns-2974.svc.cluster.local]
May 12 14:36:09.382: INFO: File wheezy_udp@dns-test-service-3.dns-2974.svc.cluster.local from pod dns-2974/dns-test-86cc6c18-7a30-46a5-8acb-d638172afd2d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 14:36:09.386: INFO: File jessie_udp@dns-test-service-3.dns-2974.svc.cluster.local from pod dns-2974/dns-test-86cc6c18-7a30-46a5-8acb-d638172afd2d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 12 14:36:09.386: INFO: Lookups using dns-2974/dns-test-86cc6c18-7a30-46a5-8acb-d638172afd2d failed for: [wheezy_udp@dns-test-service-3.dns-2974.svc.cluster.local jessie_udp@dns-test-service-3.dns-2974.svc.cluster.local]
May 12 14:36:14.388: INFO: DNS probes using dns-test-86cc6c18-7a30-46a5-8acb-d638172afd2d succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2974.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2974.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2974.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2974.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 14:36:23.275: INFO: DNS probes using dns-test-4af8e17e-2883-4074-a0e1-013c4e3cd6ad succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:36:23.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2974" for this suite.
May 12 14:36:29.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:36:29.448: INFO: namespace dns-2974 deletion completed in 6.091419381s
• [SLOW TEST:51.374 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:36:29.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
May 12 14:36:29.590: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:36:29.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4720" for this suite.
May 12 14:36:35.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:36:35.788: INFO: namespace kubectl-4720 deletion completed in 6.097085554s
• [SLOW TEST:6.339 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:36:35.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3180, will wait for the garbage collector to delete the pods
May 12 14:36:40.182: INFO: Deleting Job.batch foo took: 7.795635ms
May 12 14:36:40.482: INFO: Terminating Job.batch foo pods took: 300.232571ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:37:22.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3180" for this suite.
May 12 14:37:28.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:37:28.564: INFO: namespace job-3180 deletion completed in 6.251527843s
• [SLOW TEST:52.776 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:37:28.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 14:37:28.861: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"94297255-4ea3-4cea-bb89-9d9ae41b495e", Controller:(*bool)(0xc0018e17f2), BlockOwnerDeletion:(*bool)(0xc0018e17f3)}}
May 12 14:37:28.924: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a3bd5436-57fe-4415-9b33-426dedf4b2ae", Controller:(*bool)(0xc001c162b2), BlockOwnerDeletion:(*bool)(0xc001c162b3)}}
May 12 14:37:28.970: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"78ddb572-140f-47a7-8e44-c5044a813502", Controller:(*bool)(0xc002c7774a), BlockOwnerDeletion:(*bool)(0xc002c7774b)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:37:34.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5516" for this suite.
May 12 14:37:40.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:37:40.173: INFO: namespace gc-5516 deletion completed in 6.101439295s
• [SLOW TEST:11.609 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:37:40.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
May 12 14:37:40.702: INFO: Waiting up to 5m0s for pod "var-expansion-8826aee6-489f-480f-8e28-a58d07cd7689" in namespace "var-expansion-4249" to be "success or failure"
May 12 14:37:40.712: INFO: Pod "var-expansion-8826aee6-489f-480f-8e28-a58d07cd7689": Phase="Pending", Reason="", readiness=false. Elapsed: 10.153241ms
May 12 14:37:42.716: INFO: Pod "var-expansion-8826aee6-489f-480f-8e28-a58d07cd7689": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0145335s
May 12 14:37:44.733: INFO: Pod "var-expansion-8826aee6-489f-480f-8e28-a58d07cd7689": Phase="Running", Reason="", readiness=true. Elapsed: 4.031558162s
May 12 14:37:46.737: INFO: Pod "var-expansion-8826aee6-489f-480f-8e28-a58d07cd7689": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03503229s
STEP: Saw pod success
May 12 14:37:46.737: INFO: Pod "var-expansion-8826aee6-489f-480f-8e28-a58d07cd7689" satisfied condition "success or failure"
May 12 14:37:46.739: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-8826aee6-489f-480f-8e28-a58d07cd7689 container dapi-container:
STEP: delete the pod
May 12 14:37:46.754: INFO: Waiting for pod var-expansion-8826aee6-489f-480f-8e28-a58d07cd7689 to disappear
May 12 14:37:46.840: INFO: Pod var-expansion-8826aee6-489f-480f-8e28-a58d07cd7689 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:37:46.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4249" for this suite.
May 12 14:37:52.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:37:52.905: INFO: namespace var-expansion-4249 deletion completed in 6.06176206s
• [SLOW TEST:12.731 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:37:52.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 14:37:53.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:37:57.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-254" for this suite.
May 12 14:38:47.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:38:47.379: INFO: namespace pods-254 deletion completed in 50.087818909s
• [SLOW TEST:54.474 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:38:47.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:39:13.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6251" for this suite.
May 12 14:39:19.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:39:19.933: INFO: namespace namespaces-6251 deletion completed in 6.252113204s
STEP: Destroying namespace "nsdeletetest-1520" for this suite.
May 12 14:39:19.935: INFO: Namespace nsdeletetest-1520 was already deleted
STEP: Destroying namespace "nsdeletetest-5160" for this suite.
May 12 14:39:26.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:39:26.526: INFO: namespace nsdeletetest-5160 deletion completed in 6.591173082s
• [SLOW TEST:39.147 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:39:26.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
May 12 14:39:27.139: INFO: Waiting up to 5m0s for pod "var-expansion-614420ee-b180-4cc6-915b-c75ad7e09711" in namespace "var-expansion-7438" to be "success or failure"
May 12 14:39:27.180: INFO: Pod "var-expansion-614420ee-b180-4cc6-915b-c75ad7e09711": Phase="Pending", Reason="", readiness=false. Elapsed: 41.219366ms
May 12 14:39:29.183: INFO: Pod "var-expansion-614420ee-b180-4cc6-915b-c75ad7e09711": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043700631s
May 12 14:39:31.188: INFO: Pod "var-expansion-614420ee-b180-4cc6-915b-c75ad7e09711": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048777274s
STEP: Saw pod success
May 12 14:39:31.188: INFO: Pod "var-expansion-614420ee-b180-4cc6-915b-c75ad7e09711" satisfied condition "success or failure"
May 12 14:39:31.190: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-614420ee-b180-4cc6-915b-c75ad7e09711 container dapi-container:
STEP: delete the pod
May 12 14:39:31.408: INFO: Waiting for pod var-expansion-614420ee-b180-4cc6-915b-c75ad7e09711 to disappear
May 12 14:39:31.449: INFO: Pod var-expansion-614420ee-b180-4cc6-915b-c75ad7e09711 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:39:31.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7438" for this suite.
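The recurring `Waiting up to 5m0s for pod ... to be "success or failure"` records throughout this run come from the framework polling the pod phase until it reaches a terminal state or the timeout budget runs out. A rough, hedged sketch of that pattern (function name, interval, and the `get_phase` hook are all invented for illustration; the real framework does this in Go against the API server):

```shell
# Illustrative poll loop behind the "Waiting up to ..." / Elapsed records:
# re-check the phase every `interval` seconds until a terminal phase appears
# or `timeout` seconds have elapsed. `get_phase` is any command printing a phase.
wait_for_terminal_phase() {
  get_phase="$1"; timeout="$2"; interval="${3:-2}"; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    phase="$($get_phase)"
    echo "Phase=\"$phase\" Elapsed: ${elapsed}s" >&2   # mirrors the log lines
    case "$phase" in
      Succeeded|Failed) echo "$phase"; return 0 ;;
    esac
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "timeout"
  return 1
}
```

In the log above the same shape shows through directly: Pending at 41ms and 2.04s, then Succeeded at 4.05s, at which point the "Saw pod success" step fires.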
May 12 14:39:37.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:39:37.542: INFO: namespace var-expansion-7438 deletion completed in 6.090820642s
• [SLOW TEST:11.016 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:39:37.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-5503
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5503 to expose endpoints map[]
May 12 14:39:37.666: INFO: Get endpoints failed (13.377394ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May 12 14:39:38.669: INFO: successfully validated that service endpoint-test2 in namespace services-5503 exposes endpoints map[] (1.015953767s elapsed)
STEP: Creating pod pod1 in namespace services-5503
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5503 to expose endpoints map[pod1:[80]]
May 12 14:39:41.890: INFO: successfully validated that service endpoint-test2 in namespace services-5503 exposes endpoints map[pod1:[80]] (3.217535246s elapsed)
STEP: Creating pod pod2 in namespace services-5503
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5503 to expose endpoints map[pod1:[80] pod2:[80]]
May 12 14:39:44.960: INFO: successfully validated that service endpoint-test2 in namespace services-5503 exposes endpoints map[pod1:[80] pod2:[80]] (3.066531117s elapsed)
STEP: Deleting pod pod1 in namespace services-5503
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5503 to expose endpoints map[pod2:[80]]
May 12 14:39:46.019: INFO: successfully validated that service endpoint-test2 in namespace services-5503 exposes endpoints map[pod2:[80]] (1.054276997s elapsed)
STEP: Deleting pod pod2 in namespace services-5503
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5503 to expose endpoints map[]
May 12 14:39:47.032: INFO: successfully validated that service endpoint-test2 in namespace services-5503 exposes endpoints map[] (1.005617345s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:39:47.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5503" for this suite.
May 12 14:39:53.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:39:53.269: INFO: namespace services-5503 deletion completed in 6.104504092s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:15.725 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:39:53.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 12 14:39:58.201: INFO: Successfully updated pod "pod-update-484d5711-00f3-4962-8478-54fb27ce79f7"
STEP: verifying the updated pod is in kubernetes
May 12 14:39:58.219: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:39:58.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4267" for this suite.
May 12 14:40:20.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:40:20.296: INFO: namespace pods-4267 deletion completed in 22.073783205s
• [SLOW TEST:27.027 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:40:20.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-30e884f8-53e7-48b8-8724-3f927505a35a
STEP: Creating a pod to test consume configMaps
May 12 14:40:20.446: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-662c14ff-2e8a-4416-92e5-90a39a6b7600" in namespace "projected-9835" to be "success or failure"
May 12 14:40:20.480: INFO: Pod "pod-projected-configmaps-662c14ff-2e8a-4416-92e5-90a39a6b7600": Phase="Pending", Reason="", readiness=false. Elapsed: 34.281775ms
May 12 14:40:22.484: INFO: Pod "pod-projected-configmaps-662c14ff-2e8a-4416-92e5-90a39a6b7600": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037869093s
May 12 14:40:24.488: INFO: Pod "pod-projected-configmaps-662c14ff-2e8a-4416-92e5-90a39a6b7600": Phase="Running", Reason="", readiness=true. Elapsed: 4.041423682s
May 12 14:40:26.507: INFO: Pod "pod-projected-configmaps-662c14ff-2e8a-4416-92e5-90a39a6b7600": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061248455s
STEP: Saw pod success
May 12 14:40:26.508: INFO: Pod "pod-projected-configmaps-662c14ff-2e8a-4416-92e5-90a39a6b7600" satisfied condition "success or failure"
May 12 14:40:26.510: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-662c14ff-2e8a-4416-92e5-90a39a6b7600 container projected-configmap-volume-test:
STEP: delete the pod
May 12 14:40:26.553: INFO: Waiting for pod pod-projected-configmaps-662c14ff-2e8a-4416-92e5-90a39a6b7600 to disappear
May 12 14:40:26.595: INFO: Pod pod-projected-configmaps-662c14ff-2e8a-4416-92e5-90a39a6b7600 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:40:26.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9835" for this suite.
May 12 14:40:32.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:40:32.707: INFO: namespace projected-9835 deletion completed in 6.108179583s
• [SLOW TEST:12.411 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:40:32.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-7nt5
STEP: Creating a pod to test atomic-volume-subpath
May 12 14:40:32.783: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7nt5" in namespace "subpath-8338" to be "success or failure"
May 12 14:40:32.788: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.269426ms
May 12 14:40:34.791: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0082092s
May 12 14:40:36.794: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Running", Reason="", readiness=true. Elapsed: 4.011234663s
May 12 14:40:38.798: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Running", Reason="", readiness=true. Elapsed: 6.015209252s
May 12 14:40:40.802: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Running", Reason="", readiness=true. Elapsed: 8.018638998s
May 12 14:40:42.805: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Running", Reason="", readiness=true. Elapsed: 10.021599357s
May 12 14:40:44.809: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Running", Reason="", readiness=true. Elapsed: 12.025615061s
May 12 14:40:46.813: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Running", Reason="", readiness=true. Elapsed: 14.029314389s
May 12 14:40:48.816: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Running", Reason="", readiness=true. Elapsed: 16.033014575s
May 12 14:40:50.821: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Running", Reason="", readiness=true. Elapsed: 18.037756282s
May 12 14:40:52.825: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Running", Reason="", readiness=true. Elapsed: 20.041797499s
May 12 14:40:54.828: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Running", Reason="", readiness=true. Elapsed: 22.04473978s
May 12 14:40:56.832: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Running", Reason="", readiness=true. Elapsed: 24.048834652s
May 12 14:40:58.836: INFO: Pod "pod-subpath-test-configmap-7nt5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.052721716s
STEP: Saw pod success
May 12 14:40:58.836: INFO: Pod "pod-subpath-test-configmap-7nt5" satisfied condition "success or failure"
May 12 14:40:58.838: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-7nt5 container test-container-subpath-configmap-7nt5:
STEP: delete the pod
May 12 14:40:58.859: INFO: Waiting for pod pod-subpath-test-configmap-7nt5 to disappear
May 12 14:40:58.875: INFO: Pod pod-subpath-test-configmap-7nt5 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7nt5
May 12 14:40:58.875: INFO: Deleting pod "pod-subpath-test-configmap-7nt5" in namespace "subpath-8338"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:40:58.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8338" for this suite.
May 12 14:41:04.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:41:04.951: INFO: namespace subpath-8338 deletion completed in 6.070358626s
• [SLOW TEST:32.244 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:41:04.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 14:41:05.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
May 12 14:41:05.141: INFO: stderr: ""
May 12 14:41:05.141: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:41:05.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7029" for this suite.
May 12 14:41:11.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:41:11.232: INFO: namespace kubectl-7029 deletion completed in 6.087645755s • [SLOW TEST:6.280 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:41:11.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-1502 I0512 14:41:11.503644 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1502, replica count: 1 I0512 14:41:12.554157 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 14:41:13.554338 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 14:41:14.554560 6 
runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 14:41:15.554746 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 14:41:15.724: INFO: Created: latency-svc-psqqj May 12 14:41:15.728: INFO: Got endpoints: latency-svc-psqqj [73.289673ms] May 12 14:41:15.785: INFO: Created: latency-svc-bhs9l May 12 14:41:15.801: INFO: Got endpoints: latency-svc-bhs9l [73.091355ms] May 12 14:41:15.874: INFO: Created: latency-svc-7wf5q May 12 14:41:15.889: INFO: Got endpoints: latency-svc-7wf5q [161.510624ms] May 12 14:41:15.944: INFO: Created: latency-svc-pg44j May 12 14:41:16.053: INFO: Got endpoints: latency-svc-pg44j [325.296073ms] May 12 14:41:16.076: INFO: Created: latency-svc-ffpjh May 12 14:41:16.102: INFO: Got endpoints: latency-svc-ffpjh [373.567151ms] May 12 14:41:16.135: INFO: Created: latency-svc-xm2kz May 12 14:41:16.203: INFO: Got endpoints: latency-svc-xm2kz [474.503222ms] May 12 14:41:16.205: INFO: Created: latency-svc-gd4kq May 12 14:41:16.216: INFO: Got endpoints: latency-svc-gd4kq [487.71169ms] May 12 14:41:16.247: INFO: Created: latency-svc-xrzfj May 12 14:41:16.264: INFO: Got endpoints: latency-svc-xrzfj [536.272032ms] May 12 14:41:16.338: INFO: Created: latency-svc-5k6km May 12 14:41:16.338: INFO: Got endpoints: latency-svc-5k6km [609.368258ms] May 12 14:41:16.399: INFO: Created: latency-svc-db66v May 12 14:41:16.417: INFO: Got endpoints: latency-svc-db66v [688.467564ms] May 12 14:41:16.478: INFO: Created: latency-svc-r4lp4 May 12 14:41:16.481: INFO: Got endpoints: latency-svc-r4lp4 [752.941122ms] May 12 14:41:16.511: INFO: Created: latency-svc-smgxb May 12 14:41:16.525: INFO: Got endpoints: latency-svc-smgxb [796.583646ms] May 12 14:41:16.549: INFO: Created: latency-svc-qzcm6 May 12 14:41:16.562: INFO: Got endpoints: latency-svc-qzcm6 [833.963606ms] May 
12 14:41:16.616: INFO: Created: latency-svc-5xqk5 May 12 14:41:16.619: INFO: Got endpoints: latency-svc-5xqk5 [890.660556ms] May 12 14:41:16.640: INFO: Created: latency-svc-g2sjh May 12 14:41:16.652: INFO: Got endpoints: latency-svc-g2sjh [923.16894ms] May 12 14:41:16.674: INFO: Created: latency-svc-9j275 May 12 14:41:16.697: INFO: Got endpoints: latency-svc-9j275 [968.654379ms] May 12 14:41:16.820: INFO: Created: latency-svc-tx6vl May 12 14:41:16.824: INFO: Got endpoints: latency-svc-tx6vl [1.022569616s] May 12 14:41:16.884: INFO: Created: latency-svc-n6q44 May 12 14:41:17.006: INFO: Got endpoints: latency-svc-n6q44 [1.116400334s] May 12 14:41:17.009: INFO: Created: latency-svc-f9vxx May 12 14:41:17.059: INFO: Got endpoints: latency-svc-f9vxx [1.005387026s] May 12 14:41:17.090: INFO: Created: latency-svc-ttvg7 May 12 14:41:17.173: INFO: Got endpoints: latency-svc-ttvg7 [1.071597608s] May 12 14:41:17.175: INFO: Created: latency-svc-55jqt May 12 14:41:17.197: INFO: Got endpoints: latency-svc-55jqt [993.985984ms] May 12 14:41:17.226: INFO: Created: latency-svc-g2z8l May 12 14:41:17.266: INFO: Got endpoints: latency-svc-g2z8l [1.049673835s] May 12 14:41:17.342: INFO: Created: latency-svc-wptcf May 12 14:41:17.365: INFO: Got endpoints: latency-svc-wptcf [1.100888834s] May 12 14:41:17.420: INFO: Created: latency-svc-qmw8g May 12 14:41:17.484: INFO: Got endpoints: latency-svc-qmw8g [1.146482414s] May 12 14:41:17.508: INFO: Created: latency-svc-fk4c2 May 12 14:41:17.534: INFO: Got endpoints: latency-svc-fk4c2 [1.116860444s] May 12 14:41:17.568: INFO: Created: latency-svc-l4fjx May 12 14:41:17.677: INFO: Got endpoints: latency-svc-l4fjx [1.196057917s] May 12 14:41:17.707: INFO: Created: latency-svc-ssksw May 12 14:41:17.726: INFO: Got endpoints: latency-svc-ssksw [1.200983124s] May 12 14:41:17.887: INFO: Created: latency-svc-2grvt May 12 14:41:17.890: INFO: Got endpoints: latency-svc-2grvt [1.327829919s] May 12 14:41:17.938: INFO: Created: latency-svc-chhrb May 12 
14:41:17.984: INFO: Got endpoints: latency-svc-chhrb [1.364851937s] May 12 14:41:18.043: INFO: Created: latency-svc-x697w May 12 14:41:18.046: INFO: Got endpoints: latency-svc-x697w [1.394265057s] May 12 14:41:18.078: INFO: Created: latency-svc-84fm4 May 12 14:41:18.101: INFO: Got endpoints: latency-svc-84fm4 [1.403674474s] May 12 14:41:18.174: INFO: Created: latency-svc-lczrx May 12 14:41:18.176: INFO: Got endpoints: latency-svc-lczrx [1.352789656s] May 12 14:41:18.200: INFO: Created: latency-svc-8rhhk May 12 14:41:18.215: INFO: Got endpoints: latency-svc-8rhhk [1.20962104s] May 12 14:41:18.247: INFO: Created: latency-svc-6vtb5 May 12 14:41:18.271: INFO: Got endpoints: latency-svc-6vtb5 [1.211995144s] May 12 14:41:18.371: INFO: Created: latency-svc-ldxjm May 12 14:41:18.372: INFO: Got endpoints: latency-svc-ldxjm [1.198601293s] May 12 14:41:18.416: INFO: Created: latency-svc-tdq6h May 12 14:41:18.446: INFO: Got endpoints: latency-svc-tdq6h [1.249507288s] May 12 14:41:18.509: INFO: Created: latency-svc-ztvhd May 12 14:41:18.512: INFO: Got endpoints: latency-svc-ztvhd [1.245814611s] May 12 14:41:18.576: INFO: Created: latency-svc-dp9pw May 12 14:41:18.588: INFO: Got endpoints: latency-svc-dp9pw [1.222768801s] May 12 14:41:18.647: INFO: Created: latency-svc-rcbxw May 12 14:41:18.650: INFO: Got endpoints: latency-svc-rcbxw [1.165658112s] May 12 14:41:18.674: INFO: Created: latency-svc-tb67d May 12 14:41:18.691: INFO: Got endpoints: latency-svc-tb67d [1.156818364s] May 12 14:41:18.710: INFO: Created: latency-svc-b5t22 May 12 14:41:18.727: INFO: Got endpoints: latency-svc-b5t22 [1.049845951s] May 12 14:41:18.822: INFO: Created: latency-svc-zpqcn May 12 14:41:18.835: INFO: Got endpoints: latency-svc-zpqcn [1.109058776s] May 12 14:41:18.864: INFO: Created: latency-svc-hqtmt May 12 14:41:18.881: INFO: Got endpoints: latency-svc-hqtmt [991.046943ms] May 12 14:41:18.958: INFO: Created: latency-svc-nw567 May 12 14:41:18.967: INFO: Got endpoints: latency-svc-nw567 
[983.417213ms] May 12 14:41:18.998: INFO: Created: latency-svc-t8jdx May 12 14:41:19.008: INFO: Got endpoints: latency-svc-t8jdx [961.474742ms] May 12 14:41:19.032: INFO: Created: latency-svc-q5rwv May 12 14:41:19.050: INFO: Got endpoints: latency-svc-q5rwv [948.884285ms] May 12 14:41:19.114: INFO: Created: latency-svc-nn54b May 12 14:41:19.134: INFO: Got endpoints: latency-svc-nn54b [957.343048ms] May 12 14:41:19.160: INFO: Created: latency-svc-r88lk May 12 14:41:19.170: INFO: Got endpoints: latency-svc-r88lk [954.133705ms] May 12 14:41:19.197: INFO: Created: latency-svc-7v88s May 12 14:41:19.200: INFO: Got endpoints: latency-svc-7v88s [928.777844ms] May 12 14:41:19.263: INFO: Created: latency-svc-6j79v May 12 14:41:19.267: INFO: Got endpoints: latency-svc-6j79v [894.538922ms] May 12 14:41:19.290: INFO: Created: latency-svc-fxxsd May 12 14:41:19.308: INFO: Got endpoints: latency-svc-fxxsd [861.726442ms] May 12 14:41:19.327: INFO: Created: latency-svc-hqnj5 May 12 14:41:19.345: INFO: Got endpoints: latency-svc-hqnj5 [833.199694ms] May 12 14:41:19.431: INFO: Created: latency-svc-kkrd4 May 12 14:41:19.433: INFO: Got endpoints: latency-svc-kkrd4 [845.260189ms] May 12 14:41:19.476: INFO: Created: latency-svc-gm2jn May 12 14:41:19.495: INFO: Got endpoints: latency-svc-gm2jn [844.966811ms] May 12 14:41:19.519: INFO: Created: latency-svc-r24b2 May 12 14:41:19.528: INFO: Got endpoints: latency-svc-r24b2 [837.268417ms] May 12 14:41:19.578: INFO: Created: latency-svc-k774w May 12 14:41:19.633: INFO: Got endpoints: latency-svc-k774w [905.785166ms] May 12 14:41:19.638: INFO: Created: latency-svc-4bp6k May 12 14:41:19.706: INFO: Got endpoints: latency-svc-4bp6k [871.357341ms] May 12 14:41:19.722: INFO: Created: latency-svc-tpgnq May 12 14:41:19.740: INFO: Got endpoints: latency-svc-tpgnq [858.477363ms] May 12 14:41:19.770: INFO: Created: latency-svc-6c2mm May 12 14:41:19.781: INFO: Got endpoints: latency-svc-6c2mm [814.278076ms] May 12 14:41:19.856: INFO: Created: 
latency-svc-pd2pr May 12 14:41:19.859: INFO: Got endpoints: latency-svc-pd2pr [851.487254ms] May 12 14:41:19.886: INFO: Created: latency-svc-v4xk7 May 12 14:41:19.902: INFO: Got endpoints: latency-svc-v4xk7 [851.662498ms] May 12 14:41:19.925: INFO: Created: latency-svc-cjbdx May 12 14:41:19.950: INFO: Got endpoints: latency-svc-cjbdx [816.067956ms] May 12 14:41:20.060: INFO: Created: latency-svc-46mrj May 12 14:41:20.138: INFO: Got endpoints: latency-svc-46mrj [968.071003ms] May 12 14:41:20.155: INFO: Created: latency-svc-wbr5v May 12 14:41:20.226: INFO: Got endpoints: latency-svc-wbr5v [1.026760876s] May 12 14:41:20.364: INFO: Created: latency-svc-vst7c May 12 14:41:20.382: INFO: Got endpoints: latency-svc-vst7c [1.115790243s] May 12 14:41:20.551: INFO: Created: latency-svc-kstj9 May 12 14:41:20.587: INFO: Got endpoints: latency-svc-kstj9 [1.278467502s] May 12 14:41:20.612: INFO: Created: latency-svc-v9tjt May 12 14:41:20.629: INFO: Got endpoints: latency-svc-v9tjt [1.283974385s] May 12 14:41:20.719: INFO: Created: latency-svc-xqdd7 May 12 14:41:20.727: INFO: Got endpoints: latency-svc-xqdd7 [1.293683543s] May 12 14:41:20.754: INFO: Created: latency-svc-ndpx6 May 12 14:41:20.770: INFO: Got endpoints: latency-svc-ndpx6 [1.27468859s] May 12 14:41:20.793: INFO: Created: latency-svc-dkplh May 12 14:41:20.806: INFO: Got endpoints: latency-svc-dkplh [1.278031145s] May 12 14:41:20.856: INFO: Created: latency-svc-hg2m4 May 12 14:41:20.860: INFO: Got endpoints: latency-svc-hg2m4 [1.226827242s] May 12 14:41:20.892: INFO: Created: latency-svc-7l8v4 May 12 14:41:20.922: INFO: Got endpoints: latency-svc-7l8v4 [1.215714046s] May 12 14:41:21.002: INFO: Created: latency-svc-pxs7r May 12 14:41:21.026: INFO: Got endpoints: latency-svc-pxs7r [1.286381341s] May 12 14:41:21.044: INFO: Created: latency-svc-fpb7w May 12 14:41:21.058: INFO: Got endpoints: latency-svc-fpb7w [1.276869693s] May 12 14:41:21.096: INFO: Created: latency-svc-sqtpx May 12 14:41:21.191: INFO: Got endpoints: 
latency-svc-sqtpx [1.33223298s] May 12 14:41:21.193: INFO: Created: latency-svc-j7b74 May 12 14:41:21.198: INFO: Got endpoints: latency-svc-j7b74 [1.296350205s] May 12 14:41:21.248: INFO: Created: latency-svc-h8rcf May 12 14:41:21.276: INFO: Got endpoints: latency-svc-h8rcf [1.325846387s] May 12 14:41:21.352: INFO: Created: latency-svc-wftgw May 12 14:41:21.355: INFO: Got endpoints: latency-svc-wftgw [1.216989374s] May 12 14:41:21.390: INFO: Created: latency-svc-6h27s May 12 14:41:21.426: INFO: Got endpoints: latency-svc-6h27s [1.199828633s] May 12 14:41:21.491: INFO: Created: latency-svc-xjdn8 May 12 14:41:21.518: INFO: Got endpoints: latency-svc-xjdn8 [1.13571108s] May 12 14:41:21.519: INFO: Created: latency-svc-mqqzr May 12 14:41:21.532: INFO: Got endpoints: latency-svc-mqqzr [944.985384ms] May 12 14:41:21.560: INFO: Created: latency-svc-cdm8k May 12 14:41:21.587: INFO: Got endpoints: latency-svc-cdm8k [958.089587ms] May 12 14:41:21.641: INFO: Created: latency-svc-j9brq May 12 14:41:21.644: INFO: Got endpoints: latency-svc-j9brq [916.29049ms] May 12 14:41:21.722: INFO: Created: latency-svc-86wxj May 12 14:41:21.736: INFO: Got endpoints: latency-svc-86wxj [966.698588ms] May 12 14:41:21.797: INFO: Created: latency-svc-k5g5n May 12 14:41:21.800: INFO: Got endpoints: latency-svc-k5g5n [993.272053ms] May 12 14:41:21.825: INFO: Created: latency-svc-zqhlb May 12 14:41:21.839: INFO: Got endpoints: latency-svc-zqhlb [979.258775ms] May 12 14:41:21.870: INFO: Created: latency-svc-jz8l6 May 12 14:41:21.887: INFO: Got endpoints: latency-svc-jz8l6 [964.890279ms] May 12 14:41:21.941: INFO: Created: latency-svc-wsrjk May 12 14:41:21.953: INFO: Got endpoints: latency-svc-wsrjk [927.099273ms] May 12 14:41:21.974: INFO: Created: latency-svc-j8tvh May 12 14:41:21.990: INFO: Got endpoints: latency-svc-j8tvh [931.456455ms] May 12 14:41:22.010: INFO: Created: latency-svc-l8p7c May 12 14:41:22.029: INFO: Got endpoints: latency-svc-l8p7c [837.739095ms] May 12 14:41:22.078: INFO: 
Created: latency-svc-2rncc May 12 14:41:22.080: INFO: Got endpoints: latency-svc-2rncc [881.912158ms] May 12 14:41:22.136: INFO: Created: latency-svc-6zkwq May 12 14:41:22.251: INFO: Got endpoints: latency-svc-6zkwq [975.293026ms] May 12 14:41:22.255: INFO: Created: latency-svc-95v98 May 12 14:41:22.260: INFO: Got endpoints: latency-svc-95v98 [905.441649ms] May 12 14:41:22.395: INFO: Created: latency-svc-pbjzv May 12 14:41:22.400: INFO: Got endpoints: latency-svc-pbjzv [973.979198ms] May 12 14:41:22.440: INFO: Created: latency-svc-mzwhb May 12 14:41:22.479: INFO: Got endpoints: latency-svc-mzwhb [961.012134ms] May 12 14:41:22.533: INFO: Created: latency-svc-lhjb2 May 12 14:41:22.535: INFO: Got endpoints: latency-svc-lhjb2 [1.003544317s] May 12 14:41:22.580: INFO: Created: latency-svc-24cmk May 12 14:41:22.598: INFO: Got endpoints: latency-svc-24cmk [1.010771987s] May 12 14:41:22.677: INFO: Created: latency-svc-xxgnl May 12 14:41:22.679: INFO: Got endpoints: latency-svc-xxgnl [1.035790859s] May 12 14:41:22.710: INFO: Created: latency-svc-np7fc May 12 14:41:22.724: INFO: Got endpoints: latency-svc-np7fc [987.116889ms] May 12 14:41:22.746: INFO: Created: latency-svc-rfc7n May 12 14:41:22.754: INFO: Got endpoints: latency-svc-rfc7n [954.521352ms] May 12 14:41:22.814: INFO: Created: latency-svc-25gn4 May 12 14:41:22.837: INFO: Got endpoints: latency-svc-25gn4 [997.849685ms] May 12 14:41:22.838: INFO: Created: latency-svc-ng29b May 12 14:41:22.851: INFO: Got endpoints: latency-svc-ng29b [963.517071ms] May 12 14:41:22.874: INFO: Created: latency-svc-7zsf4 May 12 14:41:22.887: INFO: Got endpoints: latency-svc-7zsf4 [933.521074ms] May 12 14:41:22.910: INFO: Created: latency-svc-wtqll May 12 14:41:22.963: INFO: Got endpoints: latency-svc-wtqll [973.517026ms] May 12 14:41:22.973: INFO: Created: latency-svc-c8z4v May 12 14:41:22.984: INFO: Got endpoints: latency-svc-c8z4v [954.731439ms] May 12 14:41:23.016: INFO: Created: latency-svc-7ql8w May 12 14:41:23.026: INFO: Got 
endpoints: latency-svc-7ql8w [946.012037ms] May 12 14:41:23.054: INFO: Created: latency-svc-mcqmz May 12 14:41:23.107: INFO: Got endpoints: latency-svc-mcqmz [855.887689ms] May 12 14:41:23.109: INFO: Created: latency-svc-sjn2w May 12 14:41:23.129: INFO: Got endpoints: latency-svc-sjn2w [868.808374ms] May 12 14:41:23.151: INFO: Created: latency-svc-4t4n7 May 12 14:41:23.177: INFO: Got endpoints: latency-svc-4t4n7 [776.131187ms] May 12 14:41:23.202: INFO: Created: latency-svc-ftb49 May 12 14:41:23.233: INFO: Got endpoints: latency-svc-ftb49 [753.989863ms] May 12 14:41:23.251: INFO: Created: latency-svc-kwdws May 12 14:41:23.267: INFO: Got endpoints: latency-svc-kwdws [731.69607ms] May 12 14:41:23.287: INFO: Created: latency-svc-j5cgj May 12 14:41:23.304: INFO: Got endpoints: latency-svc-j5cgj [705.825153ms] May 12 14:41:23.324: INFO: Created: latency-svc-hn4fq May 12 14:41:23.375: INFO: Got endpoints: latency-svc-hn4fq [695.919818ms] May 12 14:41:23.407: INFO: Created: latency-svc-jsf5z May 12 14:41:23.436: INFO: Got endpoints: latency-svc-jsf5z [712.360376ms] May 12 14:41:23.581: INFO: Created: latency-svc-lfgzf May 12 14:41:23.584: INFO: Got endpoints: latency-svc-lfgzf [829.923535ms] May 12 14:41:23.748: INFO: Created: latency-svc-4vsmw May 12 14:41:23.754: INFO: Got endpoints: latency-svc-4vsmw [916.579539ms] May 12 14:41:23.810: INFO: Created: latency-svc-pn2lr May 12 14:41:23.827: INFO: Got endpoints: latency-svc-pn2lr [975.869339ms] May 12 14:41:23.988: INFO: Created: latency-svc-mczbr May 12 14:41:23.990: INFO: Got endpoints: latency-svc-mczbr [1.103040387s] May 12 14:41:24.060: INFO: Created: latency-svc-m4m6p May 12 14:41:24.125: INFO: Got endpoints: latency-svc-m4m6p [1.161769541s] May 12 14:41:24.151: INFO: Created: latency-svc-9kxx9 May 12 14:41:24.205: INFO: Got endpoints: latency-svc-9kxx9 [1.220841711s] May 12 14:41:24.335: INFO: Created: latency-svc-p5z99 May 12 14:41:24.339: INFO: Got endpoints: latency-svc-p5z99 [1.313026708s] May 12 14:41:24.391: 
INFO: Created: latency-svc-54v6z May 12 14:41:24.428: INFO: Got endpoints: latency-svc-54v6z [1.321327542s] May 12 14:41:24.557: INFO: Created: latency-svc-d9q2h May 12 14:41:24.561: INFO: Got endpoints: latency-svc-d9q2h [1.431994287s] May 12 14:41:24.584: INFO: Created: latency-svc-7cjlq May 12 14:41:24.597: INFO: Got endpoints: latency-svc-7cjlq [1.420286884s] May 12 14:41:24.614: INFO: Created: latency-svc-szkgq May 12 14:41:24.639: INFO: Got endpoints: latency-svc-szkgq [1.405702214s] May 12 14:41:24.695: INFO: Created: latency-svc-fcdw2 May 12 14:41:24.705: INFO: Got endpoints: latency-svc-fcdw2 [1.438254303s] May 12 14:41:24.726: INFO: Created: latency-svc-2d9w5 May 12 14:41:24.742: INFO: Got endpoints: latency-svc-2d9w5 [1.437843476s] May 12 14:41:24.782: INFO: Created: latency-svc-fzzxc May 12 14:41:24.850: INFO: Got endpoints: latency-svc-fzzxc [1.474874878s] May 12 14:41:24.873: INFO: Created: latency-svc-zwmcf May 12 14:41:24.886: INFO: Got endpoints: latency-svc-zwmcf [1.449892807s] May 12 14:41:24.913: INFO: Created: latency-svc-kgrc6 May 12 14:41:24.928: INFO: Got endpoints: latency-svc-kgrc6 [1.344260779s] May 12 14:41:24.976: INFO: Created: latency-svc-bmr7b May 12 14:41:24.983: INFO: Got endpoints: latency-svc-bmr7b [1.228880804s] May 12 14:41:25.002: INFO: Created: latency-svc-v64gr May 12 14:41:25.040: INFO: Got endpoints: latency-svc-v64gr [1.21311568s] May 12 14:41:25.123: INFO: Created: latency-svc-64pnv May 12 14:41:25.123: INFO: Got endpoints: latency-svc-64pnv [1.133621418s] May 12 14:41:25.177: INFO: Created: latency-svc-jcmhn May 12 14:41:25.199: INFO: Got endpoints: latency-svc-jcmhn [1.074073243s] May 12 14:41:25.218: INFO: Created: latency-svc-z4tpt May 12 14:41:25.257: INFO: Got endpoints: latency-svc-z4tpt [1.052412277s] May 12 14:41:25.274: INFO: Created: latency-svc-9tft7 May 12 14:41:25.290: INFO: Got endpoints: latency-svc-9tft7 [950.645183ms] May 12 14:41:25.310: INFO: Created: latency-svc-7z67g May 12 14:41:25.320: INFO: Got 
endpoints: latency-svc-7z67g [891.577976ms] May 12 14:41:25.346: INFO: Created: latency-svc-6fdq9 May 12 14:41:25.352: INFO: Got endpoints: latency-svc-6fdq9 [790.347994ms] May 12 14:41:25.401: INFO: Created: latency-svc-zrldp May 12 14:41:25.404: INFO: Got endpoints: latency-svc-zrldp [807.221196ms] May 12 14:41:25.434: INFO: Created: latency-svc-55ks8 May 12 14:41:25.447: INFO: Got endpoints: latency-svc-55ks8 [807.872311ms] May 12 14:41:25.476: INFO: Created: latency-svc-wkzgb May 12 14:41:25.526: INFO: Got endpoints: latency-svc-wkzgb [820.96503ms] May 12 14:41:25.550: INFO: Created: latency-svc-l2xfc May 12 14:41:25.561: INFO: Got endpoints: latency-svc-l2xfc [819.533726ms] May 12 14:41:25.580: INFO: Created: latency-svc-2fzzl May 12 14:41:25.592: INFO: Got endpoints: latency-svc-2fzzl [741.498116ms] May 12 14:41:25.694: INFO: Created: latency-svc-gblft May 12 14:41:25.698: INFO: Got endpoints: latency-svc-gblft [812.15444ms] May 12 14:41:25.742: INFO: Created: latency-svc-dk9jp May 12 14:41:25.754: INFO: Got endpoints: latency-svc-dk9jp [825.658265ms] May 12 14:41:25.772: INFO: Created: latency-svc-d4d9h May 12 14:41:25.880: INFO: Got endpoints: latency-svc-d4d9h [897.181686ms] May 12 14:41:25.882: INFO: Created: latency-svc-v22w2 May 12 14:41:25.899: INFO: Got endpoints: latency-svc-v22w2 [858.714007ms] May 12 14:41:25.950: INFO: Created: latency-svc-6snkw May 12 14:41:25.964: INFO: Got endpoints: latency-svc-6snkw [840.894183ms] May 12 14:41:26.032: INFO: Created: latency-svc-czvfg May 12 14:41:26.037: INFO: Got endpoints: latency-svc-czvfg [837.478793ms] May 12 14:41:26.060: INFO: Created: latency-svc-6mx9m May 12 14:41:26.076: INFO: Got endpoints: latency-svc-6mx9m [818.827518ms] May 12 14:41:26.099: INFO: Created: latency-svc-bffgp May 12 14:41:26.116: INFO: Got endpoints: latency-svc-bffgp [825.773262ms] May 12 14:41:26.174: INFO: Created: latency-svc-dnngg May 12 14:41:26.177: INFO: Got endpoints: latency-svc-dnngg [857.237699ms] May 12 14:41:26.208: 
INFO: Created: latency-svc-mh2bv May 12 14:41:26.224: INFO: Got endpoints: latency-svc-mh2bv [872.510355ms] May 12 14:41:26.252: INFO: Created: latency-svc-zmbgc May 12 14:41:26.260: INFO: Got endpoints: latency-svc-zmbgc [855.678826ms] May 12 14:41:26.354: INFO: Created: latency-svc-xtxhw May 12 14:41:26.388: INFO: Got endpoints: latency-svc-xtxhw [940.724647ms] May 12 14:41:26.432: INFO: Created: latency-svc-xgr2z May 12 14:41:26.447: INFO: Got endpoints: latency-svc-xgr2z [920.019522ms] May 12 14:41:26.522: INFO: Created: latency-svc-czmj4 May 12 14:41:26.549: INFO: Got endpoints: latency-svc-czmj4 [987.370732ms] May 12 14:41:26.594: INFO: Created: latency-svc-hlnxq May 12 14:41:26.706: INFO: Got endpoints: latency-svc-hlnxq [1.11416766s] May 12 14:41:26.738: INFO: Created: latency-svc-j5v64 May 12 14:41:26.753: INFO: Got endpoints: latency-svc-j5v64 [1.054822598s] May 12 14:41:26.923: INFO: Created: latency-svc-mw8xt May 12 14:41:26.925: INFO: Got endpoints: latency-svc-mw8xt [1.17126851s] May 12 14:41:27.001: INFO: Created: latency-svc-m5nzd May 12 14:41:27.017: INFO: Got endpoints: latency-svc-m5nzd [1.137461355s] May 12 14:41:27.084: INFO: Created: latency-svc-fb798 May 12 14:41:27.104: INFO: Got endpoints: latency-svc-fb798 [1.204931672s] May 12 14:41:27.139: INFO: Created: latency-svc-d2nf8 May 12 14:41:27.156: INFO: Got endpoints: latency-svc-d2nf8 [1.191157686s] May 12 14:41:27.174: INFO: Created: latency-svc-9mx7m May 12 14:41:27.252: INFO: Got endpoints: latency-svc-9mx7m [1.21470716s] May 12 14:41:27.255: INFO: Created: latency-svc-z92tk May 12 14:41:27.270: INFO: Got endpoints: latency-svc-z92tk [1.193774707s] May 12 14:41:27.332: INFO: Created: latency-svc-2nvsq May 12 14:41:27.389: INFO: Got endpoints: latency-svc-2nvsq [1.273712565s] May 12 14:41:27.408: INFO: Created: latency-svc-5zg8q May 12 14:41:27.440: INFO: Got endpoints: latency-svc-5zg8q [1.263015294s] May 12 14:41:27.444: INFO: Created: latency-svc-fvzvf May 12 14:41:27.463: INFO: Got 
endpoints: latency-svc-fvzvf [1.23838424s] May 12 14:41:27.483: INFO: Created: latency-svc-k8vzf May 12 14:41:27.514: INFO: Got endpoints: latency-svc-k8vzf [1.254205311s] May 12 14:41:27.528: INFO: Created: latency-svc-dxddb May 12 14:41:27.542: INFO: Got endpoints: latency-svc-dxddb [1.15419139s] May 12 14:41:27.565: INFO: Created: latency-svc-z4vcl May 12 14:41:27.595: INFO: Got endpoints: latency-svc-z4vcl [1.14800185s] May 12 14:41:27.652: INFO: Created: latency-svc-dpkzb May 12 14:41:27.657: INFO: Got endpoints: latency-svc-dpkzb [1.108398622s] May 12 14:41:27.724: INFO: Created: latency-svc-kfvsr May 12 14:41:27.740: INFO: Got endpoints: latency-svc-kfvsr [1.033825862s] May 12 14:41:27.802: INFO: Created: latency-svc-zrgb6 May 12 14:41:27.805: INFO: Got endpoints: latency-svc-zrgb6 [1.052167945s] May 12 14:41:27.834: INFO: Created: latency-svc-m4r9x May 12 14:41:27.848: INFO: Got endpoints: latency-svc-m4r9x [922.817948ms] May 12 14:41:27.870: INFO: Created: latency-svc-4m9kg May 12 14:41:27.952: INFO: Got endpoints: latency-svc-4m9kg [934.873057ms] May 12 14:41:27.962: INFO: Created: latency-svc-c5rw9 May 12 14:41:27.981: INFO: Got endpoints: latency-svc-c5rw9 [877.243965ms] May 12 14:41:28.008: INFO: Created: latency-svc-mnshq May 12 14:41:28.017: INFO: Got endpoints: latency-svc-mnshq [861.560204ms] May 12 14:41:28.038: INFO: Created: latency-svc-qcfxm May 12 14:41:28.047: INFO: Got endpoints: latency-svc-qcfxm [795.793503ms] May 12 14:41:28.096: INFO: Created: latency-svc-4s4rz May 12 14:41:28.102: INFO: Got endpoints: latency-svc-4s4rz [831.699203ms] May 12 14:41:28.124: INFO: Created: latency-svc-j6rdm May 12 14:41:28.138: INFO: Got endpoints: latency-svc-j6rdm [748.428306ms] May 12 14:41:28.162: INFO: Created: latency-svc-wr4gq May 12 14:41:28.174: INFO: Got endpoints: latency-svc-wr4gq [733.587044ms] May 12 14:41:28.248: INFO: Created: latency-svc-cj8kd May 12 14:41:28.248: INFO: Got endpoints: latency-svc-cj8kd [785.621431ms] May 12 14:41:28.291: 
INFO: Created: latency-svc-xwfxd May 12 14:41:28.307: INFO: Got endpoints: latency-svc-xwfxd [792.614956ms] May 12 14:41:28.326: INFO: Created: latency-svc-9gmhk May 12 14:41:28.407: INFO: Got endpoints: latency-svc-9gmhk [864.930729ms] May 12 14:41:28.434: INFO: Created: latency-svc-chltc May 12 14:41:28.451: INFO: Got endpoints: latency-svc-chltc [856.508014ms] May 12 14:41:28.494: INFO: Created: latency-svc-2bl6g May 12 14:41:28.551: INFO: Got endpoints: latency-svc-2bl6g [893.676619ms] May 12 14:41:28.592: INFO: Created: latency-svc-xjdm9 May 12 14:41:28.637: INFO: Got endpoints: latency-svc-xjdm9 [897.45785ms] May 12 14:41:28.694: INFO: Created: latency-svc-xl2bf May 12 14:41:28.698: INFO: Got endpoints: latency-svc-xl2bf [892.141387ms] May 12 14:41:28.716: INFO: Created: latency-svc-jqffj May 12 14:41:28.735: INFO: Got endpoints: latency-svc-jqffj [887.22874ms] May 12 14:41:28.832: INFO: Created: latency-svc-sshx8 May 12 14:41:28.854: INFO: Created: latency-svc-xsx2m May 12 14:41:28.854: INFO: Got endpoints: latency-svc-sshx8 [901.91007ms] May 12 14:41:28.872: INFO: Got endpoints: latency-svc-xsx2m [891.053107ms] May 12 14:41:28.896: INFO: Created: latency-svc-vkxfc May 12 14:41:28.914: INFO: Got endpoints: latency-svc-vkxfc [897.09451ms] May 12 14:41:28.932: INFO: Created: latency-svc-s6f9r May 12 14:41:29.001: INFO: Created: latency-svc-jpw6h May 12 14:41:29.004: INFO: Got endpoints: latency-svc-s6f9r [956.873714ms] May 12 14:41:29.005: INFO: Got endpoints: latency-svc-jpw6h [903.430405ms] May 12 14:41:29.030: INFO: Created: latency-svc-5h9vl May 12 14:41:29.064: INFO: Got endpoints: latency-svc-5h9vl [925.858636ms] May 12 14:41:29.094: INFO: Created: latency-svc-lkqvk May 12 14:41:29.162: INFO: Got endpoints: latency-svc-lkqvk [987.862711ms] May 12 14:41:29.162: INFO: Created: latency-svc-c5tgf May 12 14:41:29.174: INFO: Got endpoints: latency-svc-c5tgf [925.472834ms] May 12 14:41:29.198: INFO: Created: latency-svc-2v8c5 May 12 14:41:29.211: INFO: Got 
endpoints: latency-svc-2v8c5 [903.514775ms] May 12 14:41:29.228: INFO: Created: latency-svc-hzn2z May 12 14:41:29.241: INFO: Got endpoints: latency-svc-hzn2z [833.707075ms] May 12 14:41:29.258: INFO: Created: latency-svc-gbb77 May 12 14:41:29.311: INFO: Got endpoints: latency-svc-gbb77 [859.51882ms] May 12 14:41:29.311: INFO: Latencies: [73.091355ms 161.510624ms 325.296073ms 373.567151ms 474.503222ms 487.71169ms 536.272032ms 609.368258ms 688.467564ms 695.919818ms 705.825153ms 712.360376ms 731.69607ms 733.587044ms 741.498116ms 748.428306ms 752.941122ms 753.989863ms 776.131187ms 785.621431ms 790.347994ms 792.614956ms 795.793503ms 796.583646ms 807.221196ms 807.872311ms 812.15444ms 814.278076ms 816.067956ms 818.827518ms 819.533726ms 820.96503ms 825.658265ms 825.773262ms 829.923535ms 831.699203ms 833.199694ms 833.707075ms 833.963606ms 837.268417ms 837.478793ms 837.739095ms 840.894183ms 844.966811ms 845.260189ms 851.487254ms 851.662498ms 855.678826ms 855.887689ms 856.508014ms 857.237699ms 858.477363ms 858.714007ms 859.51882ms 861.560204ms 861.726442ms 864.930729ms 868.808374ms 871.357341ms 872.510355ms 877.243965ms 881.912158ms 887.22874ms 890.660556ms 891.053107ms 891.577976ms 892.141387ms 893.676619ms 894.538922ms 897.09451ms 897.181686ms 897.45785ms 901.91007ms 903.430405ms 903.514775ms 905.441649ms 905.785166ms 916.29049ms 916.579539ms 920.019522ms 922.817948ms 923.16894ms 925.472834ms 925.858636ms 927.099273ms 928.777844ms 931.456455ms 933.521074ms 934.873057ms 940.724647ms 944.985384ms 946.012037ms 948.884285ms 950.645183ms 954.133705ms 954.521352ms 954.731439ms 956.873714ms 957.343048ms 958.089587ms 961.012134ms 961.474742ms 963.517071ms 964.890279ms 966.698588ms 968.071003ms 968.654379ms 973.517026ms 973.979198ms 975.293026ms 975.869339ms 979.258775ms 983.417213ms 987.116889ms 987.370732ms 987.862711ms 991.046943ms 993.272053ms 993.985984ms 997.849685ms 1.003544317s 1.005387026s 1.010771987s 1.022569616s 1.026760876s 1.033825862s 1.035790859s 1.049673835s 
1.049845951s 1.052167945s 1.052412277s 1.054822598s 1.071597608s 1.074073243s 1.100888834s 1.103040387s 1.108398622s 1.109058776s 1.11416766s 1.115790243s 1.116400334s 1.116860444s 1.133621418s 1.13571108s 1.137461355s 1.146482414s 1.14800185s 1.15419139s 1.156818364s 1.161769541s 1.165658112s 1.17126851s 1.191157686s 1.193774707s 1.196057917s 1.198601293s 1.199828633s 1.200983124s 1.204931672s 1.20962104s 1.211995144s 1.21311568s 1.21470716s 1.215714046s 1.216989374s 1.220841711s 1.222768801s 1.226827242s 1.228880804s 1.23838424s 1.245814611s 1.249507288s 1.254205311s 1.263015294s 1.273712565s 1.27468859s 1.276869693s 1.278031145s 1.278467502s 1.283974385s 1.286381341s 1.293683543s 1.296350205s 1.313026708s 1.321327542s 1.325846387s 1.327829919s 1.33223298s 1.344260779s 1.352789656s 1.364851937s 1.394265057s 1.403674474s 1.405702214s 1.420286884s 1.431994287s 1.437843476s 1.438254303s 1.449892807s 1.474874878s] May 12 14:41:29.311: INFO: 50 %ile: 961.012134ms May 12 14:41:29.311: INFO: 90 %ile: 1.286381341s May 12 14:41:29.311: INFO: 99 %ile: 1.449892807s May 12 14:41:29.311: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:41:29.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1502" for this suite. 
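The summary above sorts the 200 endpoint-creation latencies and reports the 50th, 90th, and 99th percentiles (961ms, 1.29s, 1.45s). A minimal sketch of the nearest-rank percentile this summary implies — the exact indexing used by the e2e framework may differ slightly:

```python
import math

def percentile(sorted_samples, p):
    """Nearest-rank percentile: the smallest sample with at least p% of
    all samples at or below it (1-based rank, rounded up)."""
    assert sorted_samples, "need at least one sample"
    rank = math.ceil(p / 100 * len(sorted_samples))
    return sorted_samples[rank - 1]

# Toy data standing in for the 200 recorded latencies (in milliseconds).
samples = sorted(range(1, 201))
print(percentile(samples, 50))  # -> 100
print(percentile(samples, 90))  # -> 180
print(percentile(samples, 99))  # -> 198
```

With the real sorted latency list in place of the toy data, `percentile(latencies, 99)` would pick out a value near the 1.449892807s reported above.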
May 12 14:42:05.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:42:05.388: INFO: namespace svc-latency-1502 deletion completed in 36.072673402s • [SLOW TEST:54.155 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:42:05.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-131 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 14:42:05.545: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 14:42:31.732: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.163:8080/dial?request=hostName&protocol=http&host=10.244.2.162&port=8080&tries=1'] Namespace:pod-network-test-131 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 
14:42:31.732: INFO: >>> kubeConfig: /root/.kube/config I0512 14:42:31.762372 6 log.go:172] (0xc000c0f810) (0xc001f23220) Create stream I0512 14:42:31.762410 6 log.go:172] (0xc000c0f810) (0xc001f23220) Stream added, broadcasting: 1 I0512 14:42:31.764884 6 log.go:172] (0xc000c0f810) Reply frame received for 1 I0512 14:42:31.764922 6 log.go:172] (0xc000c0f810) (0xc001f232c0) Create stream I0512 14:42:31.764934 6 log.go:172] (0xc000c0f810) (0xc001f232c0) Stream added, broadcasting: 3 I0512 14:42:31.766185 6 log.go:172] (0xc000c0f810) Reply frame received for 3 I0512 14:42:31.766235 6 log.go:172] (0xc000c0f810) (0xc002cc6dc0) Create stream I0512 14:42:31.766251 6 log.go:172] (0xc000c0f810) (0xc002cc6dc0) Stream added, broadcasting: 5 I0512 14:42:31.767389 6 log.go:172] (0xc000c0f810) Reply frame received for 5 I0512 14:42:31.828309 6 log.go:172] (0xc000c0f810) Data frame received for 3 I0512 14:42:31.828337 6 log.go:172] (0xc001f232c0) (3) Data frame handling I0512 14:42:31.828356 6 log.go:172] (0xc001f232c0) (3) Data frame sent I0512 14:42:31.828971 6 log.go:172] (0xc000c0f810) Data frame received for 5 I0512 14:42:31.828998 6 log.go:172] (0xc002cc6dc0) (5) Data frame handling I0512 14:42:31.829313 6 log.go:172] (0xc000c0f810) Data frame received for 3 I0512 14:42:31.829357 6 log.go:172] (0xc001f232c0) (3) Data frame handling I0512 14:42:31.831149 6 log.go:172] (0xc000c0f810) Data frame received for 1 I0512 14:42:31.831166 6 log.go:172] (0xc001f23220) (1) Data frame handling I0512 14:42:31.831172 6 log.go:172] (0xc001f23220) (1) Data frame sent I0512 14:42:31.831183 6 log.go:172] (0xc000c0f810) (0xc001f23220) Stream removed, broadcasting: 1 I0512 14:42:31.831250 6 log.go:172] (0xc000c0f810) (0xc001f23220) Stream removed, broadcasting: 1 I0512 14:42:31.831267 6 log.go:172] (0xc000c0f810) (0xc001f232c0) Stream removed, broadcasting: 3 I0512 14:42:31.831277 6 log.go:172] (0xc000c0f810) (0xc002cc6dc0) Stream removed, broadcasting: 5 May 12 14:42:31.831: INFO: Waiting for 
endpoints: map[] I0512 14:42:31.831629 6 log.go:172] (0xc000c0f810) Go away received May 12 14:42:31.834: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.163:8080/dial?request=hostName&protocol=http&host=10.244.1.144&port=8080&tries=1'] Namespace:pod-network-test-131 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 14:42:31.834: INFO: >>> kubeConfig: /root/.kube/config I0512 14:42:31.869415 6 log.go:172] (0xc00132a420) (0xc001f23680) Create stream I0512 14:42:31.869448 6 log.go:172] (0xc00132a420) (0xc001f23680) Stream added, broadcasting: 1 I0512 14:42:31.872375 6 log.go:172] (0xc00132a420) Reply frame received for 1 I0512 14:42:31.872415 6 log.go:172] (0xc00132a420) (0xc003297e00) Create stream I0512 14:42:31.872439 6 log.go:172] (0xc00132a420) (0xc003297e00) Stream added, broadcasting: 3 I0512 14:42:31.873634 6 log.go:172] (0xc00132a420) Reply frame received for 3 I0512 14:42:31.873663 6 log.go:172] (0xc00132a420) (0xc002b04d20) Create stream I0512 14:42:31.873675 6 log.go:172] (0xc00132a420) (0xc002b04d20) Stream added, broadcasting: 5 I0512 14:42:31.874663 6 log.go:172] (0xc00132a420) Reply frame received for 5 I0512 14:42:31.943054 6 log.go:172] (0xc00132a420) Data frame received for 3 I0512 14:42:31.943074 6 log.go:172] (0xc003297e00) (3) Data frame handling I0512 14:42:31.943090 6 log.go:172] (0xc003297e00) (3) Data frame sent I0512 14:42:31.943314 6 log.go:172] (0xc00132a420) Data frame received for 3 I0512 14:42:31.943326 6 log.go:172] (0xc003297e00) (3) Data frame handling I0512 14:42:31.943493 6 log.go:172] (0xc00132a420) Data frame received for 5 I0512 14:42:31.943508 6 log.go:172] (0xc002b04d20) (5) Data frame handling I0512 14:42:31.944638 6 log.go:172] (0xc00132a420) Data frame received for 1 I0512 14:42:31.944652 6 log.go:172] (0xc001f23680) (1) Data frame handling I0512 14:42:31.944664 6 log.go:172] (0xc001f23680) (1) Data frame sent I0512 
14:42:31.944674 6 log.go:172] (0xc00132a420) (0xc001f23680) Stream removed, broadcasting: 1 I0512 14:42:31.944740 6 log.go:172] (0xc00132a420) (0xc001f23680) Stream removed, broadcasting: 1 I0512 14:42:31.944754 6 log.go:172] (0xc00132a420) (0xc003297e00) Stream removed, broadcasting: 3 I0512 14:42:31.944788 6 log.go:172] (0xc00132a420) Go away received I0512 14:42:31.944875 6 log.go:172] (0xc00132a420) (0xc002b04d20) Stream removed, broadcasting: 5 May 12 14:42:31.944: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:42:31.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-131" for this suite. May 12 14:42:55.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:42:56.021: INFO: namespace pod-network-test-131 deletion completed in 24.073429551s • [SLOW TEST:50.633 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client May 12 14:42:56.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 14:42:56.090: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:42:57.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2760" for this suite. May 12 14:43:03.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:43:03.471: INFO: namespace custom-resource-definition-2760 deletion completed in 6.24204022s • [SLOW TEST:7.449 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:43:03.471: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-9b2a1dc5-2c29-429e-99ba-43649ac6669d STEP: Creating a pod to test consume secrets May 12 14:43:03.576: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b888a909-cce6-4da6-82fa-4ce9772d0a34" in namespace "projected-2643" to be "success or failure" May 12 14:43:03.593: INFO: Pod "pod-projected-secrets-b888a909-cce6-4da6-82fa-4ce9772d0a34": Phase="Pending", Reason="", readiness=false. Elapsed: 17.0653ms May 12 14:43:05.858: INFO: Pod "pod-projected-secrets-b888a909-cce6-4da6-82fa-4ce9772d0a34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282247351s May 12 14:43:07.862: INFO: Pod "pod-projected-secrets-b888a909-cce6-4da6-82fa-4ce9772d0a34": Phase="Running", Reason="", readiness=true. Elapsed: 4.286210157s May 12 14:43:09.866: INFO: Pod "pod-projected-secrets-b888a909-cce6-4da6-82fa-4ce9772d0a34": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.289764866s STEP: Saw pod success May 12 14:43:09.866: INFO: Pod "pod-projected-secrets-b888a909-cce6-4da6-82fa-4ce9772d0a34" satisfied condition "success or failure" May 12 14:43:09.868: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-b888a909-cce6-4da6-82fa-4ce9772d0a34 container secret-volume-test: STEP: delete the pod May 12 14:43:09.899: INFO: Waiting for pod pod-projected-secrets-b888a909-cce6-4da6-82fa-4ce9772d0a34 to disappear May 12 14:43:09.911: INFO: Pod pod-projected-secrets-b888a909-cce6-4da6-82fa-4ce9772d0a34 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:43:09.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2643" for this suite. May 12 14:43:15.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:43:16.018: INFO: namespace projected-2643 deletion completed in 6.103845569s • [SLOW TEST:12.547 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:43:16.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 12 14:43:21.131: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:43:22.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1384" for this suite. May 12 14:43:44.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:43:44.319: INFO: namespace replicaset-1384 deletion completed in 22.143064276s • [SLOW TEST:28.300 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:43:44.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 14:43:44.375: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 12 14:43:44.451: INFO: Pod name sample-pod: Found 0 pods out of 1 May 12 14:43:49.455: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 14:43:49.455: INFO: Creating deployment "test-rolling-update-deployment" May 12 14:43:49.459: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 12 14:43:49.470: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 12 14:43:51.478: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 12 14:43:51.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724891429, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724891429, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724891429, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724891429, loc:(*time.Location)(0x7ead8c0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 14:43:53.525: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 12 14:43:53.533: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-497,SelfLink:/apis/apps/v1/namespaces/deployment-497/deployments/test-rolling-update-deployment,UID:775cadd4-ae90-4516-9502-9d3aa08af4d2,ResourceVersion:10503591,Generation:1,CreationTimestamp:2020-05-12 14:43:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-12 14:43:49 +0000 UTC 2020-05-12 14:43:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-12 14:43:52 +0000 UTC 2020-05-12 14:43:49 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 12 14:43:53.535: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-497,SelfLink:/apis/apps/v1/namespaces/deployment-497/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:ea0cca64-86dc-4db6-a928-4ab9feb15485,ResourceVersion:10503580,Generation:1,CreationTimestamp:2020-05-12 14:43:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 775cadd4-ae90-4516-9502-9d3aa08af4d2 0xc001518df7 0xc001518df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 12 14:43:53.535: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 12 14:43:53.536: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-497,SelfLink:/apis/apps/v1/namespaces/deployment-497/replicasets/test-rolling-update-controller,UID:ecefba0e-f5bd-4a6a-90d5-389aa21cc134,ResourceVersion:10503590,Generation:2,CreationTimestamp:2020-05-12 14:43:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 775cadd4-ae90-4516-9502-9d3aa08af4d2 0xc001518b97 0xc001518b98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 14:43:53.538: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-t2xtg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-t2xtg,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-497,SelfLink:/api/v1/namespaces/deployment-497/pods/test-rolling-update-deployment-79f6b9d75c-t2xtg,UID:b3026e7b-ebe2-40c4-867d-3c0dbc415f0a,ResourceVersion:10503579,Generation:0,CreationTimestamp:2020-05-12 14:43:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c ea0cca64-86dc-4db6-a928-4ab9feb15485 0xc000b518a7 0xc000b518a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lz946 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lz946,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-lz946 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b51970} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b51990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:43:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:43:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:43:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:43:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.166,StartTime:2020-05-12 14:43:49 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-12 14:43:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://ecbcfad69bafcea6f0a20943a18e09460c54d70683dd34b7d9fbde2cb48c4e98}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:43:53.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-497" for this suite. May 12 14:43:59.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:43:59.635: INFO: namespace deployment-497 deletion completed in 6.094067259s • [SLOW TEST:15.316 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:43:59.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0512 14:44:40.684891 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
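The rolling-update deployment dumped above uses the default MaxUnavailable and MaxSurge of 25%, and its mid-rollout status shows Replicas:2, UpdatedReplicas:1 for a 1-replica deployment. That is consistent with the documented Deployment rounding rule — surge rounds up, unavailable rounds down — sketched here (helper name is mine, not from the test framework):

```python
import math

def rollout_bounds(replicas, max_surge_pct=25, max_unavailable_pct=25):
    """Pod-count bounds during a Deployment rolling update.

    Kubernetes rounds maxSurge up and maxUnavailable down, so even a
    1-replica deployment can briefly run 2 pods while never dropping
    below 1 ready pod."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return replicas + surge, replicas - unavailable

# The 1-replica test-rolling-update-deployment from the log above:
max_total, min_ready = rollout_bounds(1)
print(max_total, min_ready)  # -> 2 1
```

The surge pod (the new `79f6b9d75c` ReplicaSet's pod) comes up first; only then is the adopted `test-rolling-update-controller` ReplicaSet scaled to 0, matching the two-ReplicaSet state dumped above.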
May 12 14:44:40.684: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:44:40.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1032" for this suite. 
May 12 14:44:52.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:44:52.772: INFO: namespace gc-1032 deletion completed in 12.083981513s • [SLOW TEST:53.137 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:44:52.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments May 12 14:44:52.890: INFO: Waiting up to 5m0s for pod "client-containers-68fa4d69-1e5a-4495-a8c8-c66e7f68b2b0" in namespace "containers-3754" to be "success or failure" May 12 14:44:53.033: INFO: Pod "client-containers-68fa4d69-1e5a-4495-a8c8-c66e7f68b2b0": Phase="Pending", Reason="", readiness=false. Elapsed: 142.701318ms May 12 14:44:55.036: INFO: Pod "client-containers-68fa4d69-1e5a-4495-a8c8-c66e7f68b2b0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.145958247s May 12 14:44:57.041: INFO: Pod "client-containers-68fa4d69-1e5a-4495-a8c8-c66e7f68b2b0": Phase="Running", Reason="", readiness=true. Elapsed: 4.150146005s May 12 14:44:59.044: INFO: Pod "client-containers-68fa4d69-1e5a-4495-a8c8-c66e7f68b2b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.153356291s STEP: Saw pod success May 12 14:44:59.044: INFO: Pod "client-containers-68fa4d69-1e5a-4495-a8c8-c66e7f68b2b0" satisfied condition "success or failure" May 12 14:44:59.046: INFO: Trying to get logs from node iruya-worker2 pod client-containers-68fa4d69-1e5a-4495-a8c8-c66e7f68b2b0 container test-container: STEP: delete the pod May 12 14:44:59.089: INFO: Waiting for pod client-containers-68fa4d69-1e5a-4495-a8c8-c66e7f68b2b0 to disappear May 12 14:44:59.141: INFO: Pod client-containers-68fa4d69-1e5a-4495-a8c8-c66e7f68b2b0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:44:59.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3754" for this suite. 
May 12 14:45:05.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:45:05.359: INFO: namespace containers-3754 deletion completed in 6.21449946s • [SLOW TEST:12.586 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:45:05.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-2467d192-8a55-4901-96d3-39e693ec6580 STEP: Creating a pod to test consume configMaps May 12 14:45:05.632: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cf0b94db-11cc-4f20-98fe-680a5fd2f96d" in namespace "projected-2996" to be "success or failure" May 12 14:45:05.686: INFO: Pod "pod-projected-configmaps-cf0b94db-11cc-4f20-98fe-680a5fd2f96d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.712772ms May 12 14:45:07.690: INFO: Pod "pod-projected-configmaps-cf0b94db-11cc-4f20-98fe-680a5fd2f96d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057661533s May 12 14:45:09.692: INFO: Pod "pod-projected-configmaps-cf0b94db-11cc-4f20-98fe-680a5fd2f96d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060332507s STEP: Saw pod success May 12 14:45:09.692: INFO: Pod "pod-projected-configmaps-cf0b94db-11cc-4f20-98fe-680a5fd2f96d" satisfied condition "success or failure" May 12 14:45:09.695: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-cf0b94db-11cc-4f20-98fe-680a5fd2f96d container projected-configmap-volume-test: STEP: delete the pod May 12 14:45:09.715: INFO: Waiting for pod pod-projected-configmaps-cf0b94db-11cc-4f20-98fe-680a5fd2f96d to disappear May 12 14:45:09.756: INFO: Pod pod-projected-configmaps-cf0b94db-11cc-4f20-98fe-680a5fd2f96d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:45:09.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2996" for this suite. 
May 12 14:45:15.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:45:15.839: INFO: namespace projected-2996 deletion completed in 6.080502029s • [SLOW TEST:10.480 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:45:15.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-788a9451-0cfd-492c-b849-040062094fbd in namespace container-probe-2955 May 12 14:45:19.909: INFO: Started pod busybox-788a9451-0cfd-492c-b849-040062094fbd in namespace container-probe-2955 STEP: checking the pod's current state and verifying that restartCount is present May 12 14:45:19.911: INFO: Initial restart count of pod busybox-788a9451-0cfd-492c-b849-040062094fbd is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing 
container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:49:21.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2955" for this suite. May 12 14:49:27.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:49:27.184: INFO: namespace container-probe-2955 deletion completed in 6.097789479s • [SLOW TEST:251.345 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:49:27.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller May 12 14:49:27.239: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3105' May 12 14:49:30.429: INFO: stderr: "" May 12 14:49:30.429: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 14:49:30.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3105' May 12 14:49:30.629: INFO: stderr: "" May 12 14:49:30.629: INFO: stdout: "update-demo-nautilus-5flsk update-demo-nautilus-j4zxq " May 12 14:49:30.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5flsk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3105' May 12 14:49:30.781: INFO: stderr: "" May 12 14:49:30.781: INFO: stdout: "" May 12 14:49:30.781: INFO: update-demo-nautilus-5flsk is created but not running May 12 14:49:35.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3105' May 12 14:49:35.877: INFO: stderr: "" May 12 14:49:35.877: INFO: stdout: "update-demo-nautilus-5flsk update-demo-nautilus-j4zxq " May 12 14:49:35.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5flsk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3105' May 12 14:49:35.970: INFO: stderr: "" May 12 14:49:35.970: INFO: stdout: "true" May 12 14:49:35.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5flsk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3105' May 12 14:49:36.072: INFO: stderr: "" May 12 14:49:36.072: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 14:49:36.072: INFO: validating pod update-demo-nautilus-5flsk May 12 14:49:36.076: INFO: got data: { "image": "nautilus.jpg" } May 12 14:49:36.076: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 14:49:36.076: INFO: update-demo-nautilus-5flsk is verified up and running May 12 14:49:36.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j4zxq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3105' May 12 14:49:36.173: INFO: stderr: "" May 12 14:49:36.173: INFO: stdout: "true" May 12 14:49:36.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j4zxq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3105' May 12 14:49:36.272: INFO: stderr: "" May 12 14:49:36.272: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 14:49:36.272: INFO: validating pod update-demo-nautilus-j4zxq May 12 14:49:36.275: INFO: got data: { "image": "nautilus.jpg" } May 12 14:49:36.275: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 12 14:49:36.275: INFO: update-demo-nautilus-j4zxq is verified up and running STEP: rolling-update to new replication controller May 12 14:49:36.277: INFO: scanned /root for discovery docs: May 12 14:49:36.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3105' May 12 14:49:58.814: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 12 14:49:58.814: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 14:49:58.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3105' May 12 14:49:58.905: INFO: stderr: "" May 12 14:49:58.905: INFO: stdout: "update-demo-kitten-c5hcl update-demo-kitten-d8mzf " May 12 14:49:58.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-c5hcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3105' May 12 14:49:59.000: INFO: stderr: "" May 12 14:49:59.000: INFO: stdout: "true" May 12 14:49:59.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-c5hcl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3105' May 12 14:49:59.101: INFO: stderr: "" May 12 14:49:59.101: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 12 14:49:59.101: INFO: validating pod update-demo-kitten-c5hcl May 12 14:49:59.105: INFO: got data: { "image": "kitten.jpg" } May 12 14:49:59.105: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 12 14:49:59.105: INFO: update-demo-kitten-c5hcl is verified up and running May 12 14:49:59.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-d8mzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3105' May 12 14:49:59.223: INFO: stderr: "" May 12 14:49:59.223: INFO: stdout: "true" May 12 14:49:59.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-d8mzf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3105' May 12 14:49:59.317: INFO: stderr: "" May 12 14:49:59.317: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 12 14:49:59.317: INFO: validating pod update-demo-kitten-d8mzf May 12 14:49:59.320: INFO: got data: { "image": "kitten.jpg" } May 12 14:49:59.320: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
May 12 14:49:59.320: INFO: update-demo-kitten-d8mzf is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:49:59.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3105" for this suite. May 12 14:50:21.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:50:21.502: INFO: namespace kubectl-3105 deletion completed in 22.179097887s • [SLOW TEST:54.317 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:50:21.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: 
Creating a pod to test downward API volume plugin May 12 14:50:21.654: INFO: Waiting up to 5m0s for pod "downwardapi-volume-edde0357-40ed-4f46-8c86-9faa2b9e7996" in namespace "projected-4734" to be "success or failure" May 12 14:50:21.663: INFO: Pod "downwardapi-volume-edde0357-40ed-4f46-8c86-9faa2b9e7996": Phase="Pending", Reason="", readiness=false. Elapsed: 9.502142ms May 12 14:50:23.668: INFO: Pod "downwardapi-volume-edde0357-40ed-4f46-8c86-9faa2b9e7996": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013789534s May 12 14:50:25.672: INFO: Pod "downwardapi-volume-edde0357-40ed-4f46-8c86-9faa2b9e7996": Phase="Running", Reason="", readiness=true. Elapsed: 4.017780688s May 12 14:50:27.675: INFO: Pod "downwardapi-volume-edde0357-40ed-4f46-8c86-9faa2b9e7996": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021292632s STEP: Saw pod success May 12 14:50:27.675: INFO: Pod "downwardapi-volume-edde0357-40ed-4f46-8c86-9faa2b9e7996" satisfied condition "success or failure" May 12 14:50:27.677: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-edde0357-40ed-4f46-8c86-9faa2b9e7996 container client-container: STEP: delete the pod May 12 14:50:27.695: INFO: Waiting for pod downwardapi-volume-edde0357-40ed-4f46-8c86-9faa2b9e7996 to disappear May 12 14:50:27.700: INFO: Pod downwardapi-volume-edde0357-40ed-4f46-8c86-9faa2b9e7996 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:50:27.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4734" for this suite. 
May 12 14:50:33.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:50:33.768: INFO: namespace projected-4734 deletion completed in 6.065637116s • [SLOW TEST:12.265 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:50:33.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 12 14:50:40.409: INFO: Successfully updated pod "annotationupdatee95ffd37-3fac-4812-b226-8193bae45311" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:50:42.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2516" for this suite. 
May 12 14:51:04.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:51:04.640: INFO: namespace downward-api-2516 deletion completed in 22.131789836s • [SLOW TEST:30.873 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:51:04.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 12 14:51:04.829: INFO: Waiting up to 5m0s for pod "pod-e9d6dea2-e2ed-4c14-8bbe-c58277f3b9f1" in namespace "emptydir-9199" to be "success or failure" May 12 14:51:04.832: INFO: Pod "pod-e9d6dea2-e2ed-4c14-8bbe-c58277f3b9f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.579266ms May 12 14:51:06.837: INFO: Pod "pod-e9d6dea2-e2ed-4c14-8bbe-c58277f3b9f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007221134s May 12 14:51:08.840: INFO: Pod "pod-e9d6dea2-e2ed-4c14-8bbe-c58277f3b9f1": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.010347106s May 12 14:51:10.843: INFO: Pod "pod-e9d6dea2-e2ed-4c14-8bbe-c58277f3b9f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013997651s STEP: Saw pod success May 12 14:51:10.843: INFO: Pod "pod-e9d6dea2-e2ed-4c14-8bbe-c58277f3b9f1" satisfied condition "success or failure" May 12 14:51:10.846: INFO: Trying to get logs from node iruya-worker pod pod-e9d6dea2-e2ed-4c14-8bbe-c58277f3b9f1 container test-container: STEP: delete the pod May 12 14:51:10.891: INFO: Waiting for pod pod-e9d6dea2-e2ed-4c14-8bbe-c58277f3b9f1 to disappear May 12 14:51:10.915: INFO: Pod pod-e9d6dea2-e2ed-4c14-8bbe-c58277f3b9f1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:51:10.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9199" for this suite. May 12 14:51:16.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:51:17.022: INFO: namespace emptydir-9199 deletion completed in 6.102716396s • [SLOW TEST:12.381 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:51:17.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 12 14:51:21.598: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a504d86d-997d-473a-870f-ec776cebe80c" May 12 14:51:21.598: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a504d86d-997d-473a-870f-ec776cebe80c" in namespace "pods-9862" to be "terminated due to deadline exceeded" May 12 14:51:21.626: INFO: Pod "pod-update-activedeadlineseconds-a504d86d-997d-473a-870f-ec776cebe80c": Phase="Running", Reason="", readiness=true. Elapsed: 27.950142ms May 12 14:51:23.630: INFO: Pod "pod-update-activedeadlineseconds-a504d86d-997d-473a-870f-ec776cebe80c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.032515155s May 12 14:51:23.630: INFO: Pod "pod-update-activedeadlineseconds-a504d86d-997d-473a-870f-ec776cebe80c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:51:23.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9862" for this suite. 
May 12 14:51:29.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 14:51:29.765: INFO: namespace pods-9862 deletion completed in 6.130492467s • [SLOW TEST:12.743 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 14:51:29.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 14:51:29.874: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88cc7a1b-4f6c-44ab-96e1-a399483d64f0" in namespace "projected-4866" to be "success or failure" May 12 14:51:29.877: INFO: Pod "downwardapi-volume-88cc7a1b-4f6c-44ab-96e1-a399483d64f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.83999ms May 12 14:51:32.075: INFO: Pod "downwardapi-volume-88cc7a1b-4f6c-44ab-96e1-a399483d64f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200777753s May 12 14:51:34.079: INFO: Pod "downwardapi-volume-88cc7a1b-4f6c-44ab-96e1-a399483d64f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.204121476s STEP: Saw pod success May 12 14:51:34.079: INFO: Pod "downwardapi-volume-88cc7a1b-4f6c-44ab-96e1-a399483d64f0" satisfied condition "success or failure" May 12 14:51:34.081: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-88cc7a1b-4f6c-44ab-96e1-a399483d64f0 container client-container: STEP: delete the pod May 12 14:51:34.463: INFO: Waiting for pod downwardapi-volume-88cc7a1b-4f6c-44ab-96e1-a399483d64f0 to disappear May 12 14:51:34.467: INFO: Pod downwardapi-volume-88cc7a1b-4f6c-44ab-96e1-a399483d64f0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 14:51:34.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4866" for this suite. 
May 12 14:51:40.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:51:40.592: INFO: namespace projected-4866 deletion completed in 6.121995735s
• [SLOW TEST:10.827 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:51:40.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 12 14:51:48.571: INFO: 10 pods remaining
May 12 14:51:48.571: INFO: 0 pods has nil DeletionTimestamp
May 12 14:51:48.571: INFO:
May 12 14:51:49.737: INFO: 0 pods remaining
May 12 14:51:49.737: INFO: 0 pods has nil DeletionTimestamp
May 12 14:51:49.737: INFO:
May 12 14:51:50.218: INFO: 0 pods remaining
May 12 14:51:50.218: INFO: 0 pods has nil DeletionTimestamp
May 12 14:51:50.218: INFO:
STEP: Gathering metrics
W0512 14:51:52.194950       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 14:51:52.194: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:51:52.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7188" for this suite.
May 12 14:52:00.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:52:00.899: INFO: namespace gc-7188 deletion completed in 8.651795634s
• [SLOW TEST:20.306 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:52:00.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-5ae89fc3-1970-48fd-91a5-ff8696e4761c in namespace container-probe-3386
May 12 14:52:04.974: INFO: Started pod test-webserver-5ae89fc3-1970-48fd-91a5-ff8696e4761c in namespace container-probe-3386
STEP: checking the pod's current state and verifying that restartCount is present
May 12 14:52:04.975: INFO: Initial restart count of pod test-webserver-5ae89fc3-1970-48fd-91a5-ff8696e4761c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:56:06.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3386" for this suite.
May 12 14:56:12.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:56:12.359: INFO: namespace container-probe-3386 deletion completed in 6.119754226s
• [SLOW TEST:251.460 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:56:12.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-24s8
STEP: Creating a pod to test atomic-volume-subpath
May 12 14:56:12.427: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-24s8" in namespace "subpath-2734" to be "success or failure"
May 12 14:56:12.481: INFO: Pod "pod-subpath-test-configmap-24s8": Phase="Pending", Reason="", readiness=false. Elapsed: 53.37565ms
May 12 14:56:14.484: INFO: Pod "pod-subpath-test-configmap-24s8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056986657s
May 12 14:56:16.589: INFO: Pod "pod-subpath-test-configmap-24s8": Phase="Running", Reason="", readiness=true. Elapsed: 4.161714731s
May 12 14:56:18.592: INFO: Pod "pod-subpath-test-configmap-24s8": Phase="Running", Reason="", readiness=true. Elapsed: 6.164938316s
May 12 14:56:20.769: INFO: Pod "pod-subpath-test-configmap-24s8": Phase="Running", Reason="", readiness=true. Elapsed: 8.341909417s
May 12 14:56:22.773: INFO: Pod "pod-subpath-test-configmap-24s8": Phase="Running", Reason="", readiness=true. Elapsed: 10.345858888s
May 12 14:56:24.843: INFO: Pod "pod-subpath-test-configmap-24s8": Phase="Running", Reason="", readiness=true. Elapsed: 12.415039539s
May 12 14:56:26.846: INFO: Pod "pod-subpath-test-configmap-24s8": Phase="Running", Reason="", readiness=true. Elapsed: 14.418766099s
May 12 14:56:28.850: INFO: Pod "pod-subpath-test-configmap-24s8": Phase="Running", Reason="", readiness=true. Elapsed: 16.422934807s
May 12 14:56:30.854: INFO: Pod "pod-subpath-test-configmap-24s8": Phase="Running", Reason="", readiness=true. Elapsed: 18.426181924s
May 12 14:56:32.856: INFO: Pod "pod-subpath-test-configmap-24s8": Phase="Running", Reason="", readiness=true. Elapsed: 20.428862387s
May 12 14:56:34.860: INFO: Pod "pod-subpath-test-configmap-24s8": Phase="Running", Reason="", readiness=true. Elapsed: 22.43250776s
May 12 14:56:36.863: INFO: Pod "pod-subpath-test-configmap-24s8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.435698742s
STEP: Saw pod success
May 12 14:56:36.863: INFO: Pod "pod-subpath-test-configmap-24s8" satisfied condition "success or failure"
May 12 14:56:36.865: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-24s8 container test-container-subpath-configmap-24s8:
STEP: delete the pod
May 12 14:56:37.237: INFO: Waiting for pod pod-subpath-test-configmap-24s8 to disappear
May 12 14:56:37.472: INFO: Pod pod-subpath-test-configmap-24s8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-24s8
May 12 14:56:37.472: INFO: Deleting pod "pod-subpath-test-configmap-24s8" in namespace "subpath-2734"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 14:56:37.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2734" for this suite.
May 12 14:56:43.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:56:43.644: INFO: namespace subpath-2734 deletion completed in 6.111031852s
• [SLOW TEST:31.285 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
May 12 14:56:43.645: INFO: Running AfterSuite actions on all nodes
May 12 14:56:43.645: INFO: Running AfterSuite actions on node 1
May 12 14:56:43.645: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 7247.010 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (7247.24s)
FAIL
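The closing "Ran … Specs" line and the "Passed | Failed | Pending | Skipped" tally are machine-readable. A small Python helper for extracting them when triaging saved run logs; `parse_summary` and its return shape are hypothetical conveniences, not part of the e2e tooling:

```python
import re

# Hypothetical helper: parse the final Ginkgo summary from a saved
# conformance-run log, e.g.
#   Ran 215 of 4412 Specs in 7247.010 seconds
#   FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
def parse_summary(log_text):
    ran = re.search(r"Ran (\d+) of (\d+) Specs in ([\d.]+) seconds", log_text)
    tally = re.search(
        r"(\d+) Passed \| (\d+) Failed \| (\d+) Pending \| (\d+) Skipped",
        log_text,
    )
    if not ran or not tally:
        return None  # no summary present (e.g. the run was cut short)
    return {
        "ran": int(ran.group(1)),
        "total": int(ran.group(2)),
        "seconds": float(ran.group(3)),
        "passed": int(tally.group(1)),
        "failed": int(tally.group(2)),
        "pending": int(tally.group(3)),
        "skipped": int(tally.group(4)),
    }

# The tail of the run above, as it would appear in a saved log file.
log = (
    "Ran 215 of 4412 Specs in 7247.010 seconds\n"
    "FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped\n"
)
summary = parse_summary(log)
```

Fed the tail of this run, the helper reports 214 passed and 1 failed out of 215 specs executed, matching the FAIL verdict printed by the suite.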