I0325 12:55:43.897612 6 e2e.go:243] Starting e2e run "42411b76-a88b-4a2b-9d8e-e1e6be74f281" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585140943 - Will randomize all specs
Will run 215 of 4412 specs

Mar 25 12:55:44.074: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:55:44.079: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 25 12:55:44.102: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 25 12:55:44.137: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 25 12:55:44.137: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 25 12:55:44.137: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 25 12:55:44.145: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 25 12:55:44.145: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 25 12:55:44.145: INFO: e2e test version: v1.15.10
Mar 25 12:55:44.147: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 12:55:44.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
Mar 25 12:55:44.209: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 25 12:55:47.241: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 12:55:47.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2145" for this suite.
Mar 25 12:55:53.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 12:55:53.474: INFO: namespace container-runtime-2145 deletion completed in 6.094073875s

• [SLOW TEST:9.328 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
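For reference, a minimal pod exercising TerminationMessagePolicy FallbackToLogsOnError might look like the sketch below; the name, image, and command are illustrative assumptions, not taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "exit 0"]  # succeeds, writes nothing to the message path
    terminationMessagePath: /dev/termination-log
    # FallbackToLogsOnError only falls back to container logs when the
    # container fails; on success the message stays empty, which is what
    # the spec above asserts.
    terminationMessagePolicy: FallbackToLogsOnError
```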
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 12:55:53.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 25 12:55:53.598: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 12:55:53.611: INFO: Number of nodes with available pods: 0
Mar 25 12:55:53.611: INFO: Node iruya-worker is running more than one daemon pod
Mar 25 12:55:54.616: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 12:55:54.619: INFO: Number of nodes with available pods: 0
Mar 25 12:55:54.619: INFO: Node iruya-worker is running more than one daemon pod
Mar 25 12:55:55.616: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 12:55:55.619: INFO: Number of nodes with available pods: 0
Mar 25 12:55:55.619: INFO: Node iruya-worker is running more than one daemon pod
Mar 25 12:55:56.616: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 12:55:56.620: INFO: Number of nodes with available pods: 1
Mar 25 12:55:56.620: INFO: Node iruya-worker is running more than one daemon pod
Mar 25 12:55:57.616: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 12:55:57.620: INFO: Number of nodes with available pods: 2
Mar 25 12:55:57.620: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar 25 12:55:57.649: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 12:55:57.672: INFO: Number of nodes with available pods: 2
Mar 25 12:55:57.672: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2015, will wait for the garbage collector to delete the pods
Mar 25 12:55:58.744: INFO: Deleting DaemonSet.extensions daemon-set took: 5.925988ms
Mar 25 12:55:59.045: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.306571ms
Mar 25 12:56:11.948: INFO: Number of nodes with available pods: 0
Mar 25 12:56:11.948: INFO: Number of running nodes: 0, number of available pods: 0
Mar 25 12:56:11.953: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2015/daemonsets","resourceVersion":"1771217"},"items":null}
Mar 25 12:56:11.956: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2015/pods","resourceVersion":"1771217"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 12:56:11.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2015" for this suite.
Mar 25 12:56:17.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 12:56:18.073: INFO: namespace daemonsets-2015 deletion completed in 6.103470917s

• [SLOW TEST:24.598 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
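A DaemonSet of the shape this test creates might look like the following sketch; the name "daemon-set" matches the log, everything else is an illustrative assumption:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                 # name from the log; spec details are illustrative
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
      # No toleration for node-role.kubernetes.io/master is declared, which
      # is why the log shows the tainted iruya-control-plane node being
      # skipped while the two worker nodes each get a daemon pod.
```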
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 12:56:18.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 25 12:56:18.146: INFO: Waiting up to 5m0s for pod "pod-efc947f6-aace-410d-ba01-feb646ddbeb6" in namespace "emptydir-7313" to be "success or failure"
Mar 25 12:56:18.150: INFO: Pod "pod-efc947f6-aace-410d-ba01-feb646ddbeb6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.962811ms
Mar 25 12:56:20.154: INFO: Pod "pod-efc947f6-aace-410d-ba01-feb646ddbeb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00746358s
Mar 25 12:56:22.158: INFO: Pod "pod-efc947f6-aace-410d-ba01-feb646ddbeb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011530654s
STEP: Saw pod success
Mar 25 12:56:22.158: INFO: Pod "pod-efc947f6-aace-410d-ba01-feb646ddbeb6" satisfied condition "success or failure"
Mar 25 12:56:22.160: INFO: Trying to get logs from node iruya-worker pod pod-efc947f6-aace-410d-ba01-feb646ddbeb6 container test-container:
STEP: delete the pod
Mar 25 12:56:22.181: INFO: Waiting for pod pod-efc947f6-aace-410d-ba01-feb646ddbeb6 to disappear
Mar 25 12:56:22.192: INFO: Pod pod-efc947f6-aace-410d-ba01-feb646ddbeb6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 12:56:22.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7313" for this suite.
Mar 25 12:56:28.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 12:56:28.291: INFO: namespace emptydir-7313 deletion completed in 6.095972328s

• [SLOW TEST:10.218 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
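"tmpfs" in the test name means a memory-backed emptyDir, and "non-root" means the pod runs with a non-root UID. A minimal sketch of such a pod, with all names and values assumed rather than taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # illustrative
spec:
  securityContext:
    runAsUser: 1001                # non-root, per the (non-root,0777,tmpfs) variant
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "ls -ld /test-volume && grep test-volume /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir
  restartPolicy: Never
```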
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 12:56:28.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 25 12:56:28.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f201bcbc-64aa-489e-9876-d14f6ba434ca" in namespace "projected-4590" to be "success or failure"
Mar 25 12:56:28.375: INFO: Pod "downwardapi-volume-f201bcbc-64aa-489e-9876-d14f6ba434ca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.63306ms
Mar 25 12:56:30.379: INFO: Pod "downwardapi-volume-f201bcbc-64aa-489e-9876-d14f6ba434ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016259026s
Mar 25 12:56:32.383: INFO: Pod "downwardapi-volume-f201bcbc-64aa-489e-9876-d14f6ba434ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020436497s
STEP: Saw pod success
Mar 25 12:56:32.383: INFO: Pod "downwardapi-volume-f201bcbc-64aa-489e-9876-d14f6ba434ca" satisfied condition "success or failure"
Mar 25 12:56:32.386: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-f201bcbc-64aa-489e-9876-d14f6ba434ca container client-container:
STEP: delete the pod
Mar 25 12:56:32.411: INFO: Waiting for pod downwardapi-volume-f201bcbc-64aa-489e-9876-d14f6ba434ca to disappear
Mar 25 12:56:32.435: INFO: Pod downwardapi-volume-f201bcbc-64aa-489e-9876-d14f6ba434ca no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 12:56:32.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4590" for this suite.
Mar 25 12:56:38.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 12:56:38.567: INFO: namespace projected-4590 deletion completed in 6.128188129s

• [SLOW TEST:10.275 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
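Surfacing a container's CPU limit through a projected downward API volume, as this spec does, might be declared like the sketch below; names, image, and resource values are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo   # illustrative
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                  # the value surfaced through the volume
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m            # file contains the limit expressed in millicores
  restartPolicy: Never
```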
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 12:56:38.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Mar 25 12:56:38.608: INFO: Waiting up to 5m0s for pod "downward-api-40c2031d-cc53-4e86-942e-4c5f85e00dab" in namespace "downward-api-4918" to be "success or failure"
Mar 25 12:56:38.623: INFO: Pod "downward-api-40c2031d-cc53-4e86-942e-4c5f85e00dab": Phase="Pending", Reason="", readiness=false. Elapsed: 14.571619ms
Mar 25 12:56:40.627: INFO: Pod "downward-api-40c2031d-cc53-4e86-942e-4c5f85e00dab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0187702s
Mar 25 12:56:42.631: INFO: Pod "downward-api-40c2031d-cc53-4e86-942e-4c5f85e00dab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022873345s
STEP: Saw pod success
Mar 25 12:56:42.631: INFO: Pod "downward-api-40c2031d-cc53-4e86-942e-4c5f85e00dab" satisfied condition "success or failure"
Mar 25 12:56:42.634: INFO: Trying to get logs from node iruya-worker pod downward-api-40c2031d-cc53-4e86-942e-4c5f85e00dab container dapi-container:
STEP: delete the pod
Mar 25 12:56:42.683: INFO: Waiting for pod downward-api-40c2031d-cc53-4e86-942e-4c5f85e00dab to disappear
Mar 25 12:56:42.702: INFO: Pod downward-api-40c2031d-cc53-4e86-942e-4c5f85e00dab no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 12:56:42.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4918" for this suite.
Mar 25 12:56:48.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 12:56:48.823: INFO: namespace downward-api-4918 deletion completed in 6.117288526s

• [SLOW TEST:10.255 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
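Exposing the node's IP to a container via the downward API, as this spec verifies, takes a fieldRef on status.hostIP; a minimal sketch with assumed names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-demo   # illustrative
spec:
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "printenv HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the node IP the test asserts on
  restartPolicy: Never
```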
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 12:56:48.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 25 12:56:56.936: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:56:56.941: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 25 12:56:58.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:56:58.946: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 25 12:57:00.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:57:00.946: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 25 12:57:02.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:57:02.946: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 25 12:57:04.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:57:04.946: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 25 12:57:06.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:57:06.945: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 25 12:57:08.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:57:08.946: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 25 12:57:10.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:57:10.945: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 25 12:57:12.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:57:12.945: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 25 12:57:14.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:57:14.946: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 25 12:57:16.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:57:16.946: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 25 12:57:18.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:57:18.945: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 25 12:57:20.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:57:20.946: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 25 12:57:22.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 25 12:57:22.946: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 12:57:22.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6267" for this suite.
Mar 25 12:57:44.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 12:57:45.042: INFO: namespace container-lifecycle-hook-6267 deletion completed in 22.091530257s

• [SLOW TEST:56.219 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
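A postStart exec hook of the kind this spec exercises is declared under lifecycle in the container spec. The pod name below matches the log; the image, command, and hook body are illustrative (the real test's hook calls back to the HTTPGet handler pod created in BeforeEach):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # name from the log; rest is illustrative
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container immediately after it starts; the pod
          # does not become Ready until the hook completes.
          command: ["/bin/sh", "-c", "echo started > /tmp/poststart"]
```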
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 12:57:45.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 12:58:15.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-704" for this suite.
Mar 25 12:58:21.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 12:58:21.614: INFO: namespace container-runtime-704 deletion completed in 6.088471423s

• [SLOW TEST:36.573 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
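The suffixes in the container names encode the restart policy under test: rpa, rpof, and rpn for Always, OnFailure, and Never. A sketch of the OnFailure case, under the assumption (not from the log) that state is kept on an emptyDir so the retried container can succeed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpof-demo    # illustrative, modeled on the rpof case
spec:
  restartPolicy: OnFailure
  volumes:
  - name: state
    emptyDir: {}                   # survives container restarts within the pod
  containers:
  - name: terminate-cmd-rpof
    image: busybox:1.29
    volumeMounts:
    - name: state
      mountPath: /state
    # Fails on the first run, succeeds on the restart: RestartCount ends at 1
    # and the pod phase ends Succeeded, the sort of expectation the
    # 'RestartCount'/'Phase'/'Ready'/'State' steps above verify.
    command: ["/bin/sh", "-c", "if [ ! -f /state/ran ]; then touch /state/ran; exit 1; fi; exit 0"]
```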
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 12:58:21.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-09f9e330-d2bb-4a8e-9ea1-6bf6dd979536
STEP: Creating a pod to test consume configMaps
Mar 25 12:58:21.671: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d819a0cb-1ca8-4a61-a7e5-ef4bb0b71941" in namespace "projected-692" to be "success or failure"
Mar 25 12:58:21.685: INFO: Pod "pod-projected-configmaps-d819a0cb-1ca8-4a61-a7e5-ef4bb0b71941": Phase="Pending", Reason="", readiness=false. Elapsed: 13.67598ms
Mar 25 12:58:23.698: INFO: Pod "pod-projected-configmaps-d819a0cb-1ca8-4a61-a7e5-ef4bb0b71941": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02714362s
Mar 25 12:58:25.702: INFO: Pod "pod-projected-configmaps-d819a0cb-1ca8-4a61-a7e5-ef4bb0b71941": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031172708s
STEP: Saw pod success
Mar 25 12:58:25.702: INFO: Pod "pod-projected-configmaps-d819a0cb-1ca8-4a61-a7e5-ef4bb0b71941" satisfied condition "success or failure"
Mar 25 12:58:25.706: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-d819a0cb-1ca8-4a61-a7e5-ef4bb0b71941 container projected-configmap-volume-test:
STEP: delete the pod
Mar 25 12:58:25.737: INFO: Waiting for pod pod-projected-configmaps-d819a0cb-1ca8-4a61-a7e5-ef4bb0b71941 to disappear
Mar 25 12:58:25.750: INFO: Pod pod-projected-configmaps-d819a0cb-1ca8-4a61-a7e5-ef4bb0b71941 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 12:58:25.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-692" for this suite.
Mar 25 12:58:31.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 12:58:31.849: INFO: namespace projected-692 deletion completed in 6.095306301s

• [SLOW TEST:10.235 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 12:58:31.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Mar 25 12:58:31.906: INFO: PodSpec: initContainers in spec.initContainers
Mar 25 12:59:24.800: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-5e9fb52d-ad12-433d-aa5b-72e32583ba5a", GenerateName:"", Namespace:"init-container-3883", SelfLink:"/api/v1/namespaces/init-container-3883/pods/pod-init-5e9fb52d-ad12-433d-aa5b-72e32583ba5a", UID:"914b51d1-dd0e-417f-aa77-f33e9b814619", ResourceVersion:"1771859", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720737911, loc:(*time.Location)(0x7ea78c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"906712747"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-g8ctd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002329980), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-g8ctd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-g8ctd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-g8ctd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0026255f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002b98ae0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002625680)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0026256a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0026256a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0026256ac), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720737912, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720737912, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720737912, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720737911, loc:(*time.Location)(0x7ea78c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.176", StartTime:(*v1.Time)(0xc00203de80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001f91260)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001f912d0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://37b4ffc8c9100b3ecbe38c56b429f5e52baca7464c25f47f2372c2fcdcfce49a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00203dec0), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00203dea0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 12:59:24.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3883" for this suite.
Mar 25 12:59:46.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 12:59:46.892: INFO: namespace init-container-3883 deletion completed in 22.085925983s

• [SLOW TEST:75.042 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
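The dumped PodSpec above boils down to the manifest below; names, images, commands, and resource values are taken directly from the logged fields, and the YAML rendering itself is a reconstruction:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-5e9fb52d-ad12-433d-aa5b-72e32583ba5a
  namespace: init-container-3883
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]          # always fails, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                       # limits == requests, hence QOSClass Guaranteed
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"
```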
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 12:59:46.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7140.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7140.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7140.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7140.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 25 12:59:53.002: INFO: DNS probes using dns-test-293c2f2b-c2cf-4d02-9629-b2e2d6ae3811 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7140.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7140.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7140.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7140.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 25 12:59:59.090: INFO: File wheezy_udp@dns-test-service-3.dns-7140.svc.cluster.local from pod dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b contains 'foo.example.com. ' instead of 'bar.example.com.'
Mar 25 12:59:59.093: INFO: File jessie_udp@dns-test-service-3.dns-7140.svc.cluster.local from pod dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b contains 'foo.example.com. ' instead of 'bar.example.com.'
Mar 25 12:59:59.093: INFO: Lookups using dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b failed for: [wheezy_udp@dns-test-service-3.dns-7140.svc.cluster.local jessie_udp@dns-test-service-3.dns-7140.svc.cluster.local]
Mar 25 13:00:04.098: INFO: File wheezy_udp@dns-test-service-3.dns-7140.svc.cluster.local from pod dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b contains 'foo.example.com. ' instead of 'bar.example.com.'
Mar 25 13:00:04.102: INFO: File jessie_udp@dns-test-service-3.dns-7140.svc.cluster.local from pod dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b contains 'foo.example.com. ' instead of 'bar.example.com.'
Mar 25 13:00:04.102: INFO: Lookups using dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b failed for: [wheezy_udp@dns-test-service-3.dns-7140.svc.cluster.local jessie_udp@dns-test-service-3.dns-7140.svc.cluster.local]
Mar 25 13:00:09.098: INFO: File wheezy_udp@dns-test-service-3.dns-7140.svc.cluster.local from pod dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b contains 'foo.example.com. ' instead of 'bar.example.com.'
Mar 25 13:00:09.102: INFO: File jessie_udp@dns-test-service-3.dns-7140.svc.cluster.local from pod dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b contains 'foo.example.com. ' instead of 'bar.example.com.'
Mar 25 13:00:09.102: INFO: Lookups using dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b failed for: [wheezy_udp@dns-test-service-3.dns-7140.svc.cluster.local jessie_udp@dns-test-service-3.dns-7140.svc.cluster.local]
Mar 25 13:00:14.098: INFO: File wheezy_udp@dns-test-service-3.dns-7140.svc.cluster.local from pod dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b contains 'foo.example.com. ' instead of 'bar.example.com.'
Mar 25 13:00:14.102: INFO: File jessie_udp@dns-test-service-3.dns-7140.svc.cluster.local from pod dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b contains 'foo.example.com. ' instead of 'bar.example.com.'
Mar 25 13:00:14.102: INFO: Lookups using dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b failed for: [wheezy_udp@dns-test-service-3.dns-7140.svc.cluster.local jessie_udp@dns-test-service-3.dns-7140.svc.cluster.local]
Mar 25 13:00:19.097: INFO: File wheezy_udp@dns-test-service-3.dns-7140.svc.cluster.local from pod dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b contains 'foo.example.com. ' instead of 'bar.example.com.'
Mar 25 13:00:19.100: INFO: File jessie_udp@dns-test-service-3.dns-7140.svc.cluster.local from pod dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b contains 'foo.example.com. ' instead of 'bar.example.com.'
Mar 25 13:00:19.100: INFO: Lookups using dns-7140/dns-test-1a63763f-2188-4455-8e28-d26fcab4074b failed for: [wheezy_udp@dns-test-service-3.dns-7140.svc.cluster.local jessie_udp@dns-test-service-3.dns-7140.svc.cluster.local]
Mar 25 13:00:24.102: INFO: DNS probes using dns-test-1a63763f-2188-4455-8e28-d26fcab4074b succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7140.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7140.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7140.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7140.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 25 13:00:30.552: INFO: DNS probes using dns-test-e72f4738-49e8-4bd9-86dc-2164de20ff70 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:00:30.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7140" for this suite.
Mar 25 13:00:36.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:00:36.762: INFO: namespace dns-7140 deletion completed in 6.13966275s

• [SLOW TEST:49.870 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
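The ExternalName service resolved by the dig loops above maps a cluster-local name to a CNAME. Reconstructed from the names in the log (the initial externalName is inferred from the 'foo.example.com.' probe results):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3         # the name the probe pods resolve
  namespace: dns-7140
spec:
  type: ExternalName
  externalName: foo.example.com    # later changed to bar.example.com; the log shows
                                   # the probes retrying until the CNAME catches up
```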
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:00:36.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Mar 25 13:00:36.841: INFO: Waiting up to 5m0s for pod "var-expansion-5e3a18f2-fc60-4628-b1f7-ba59d752ce95" in namespace "var-expansion-3428" to be "success or failure"
Mar 25 13:00:36.851: INFO: Pod "var-expansion-5e3a18f2-fc60-4628-b1f7-ba59d752ce95": Phase="Pending", Reason="", readiness=false. Elapsed: 9.471935ms
Mar 25 13:00:38.855: INFO: Pod "var-expansion-5e3a18f2-fc60-4628-b1f7-ba59d752ce95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013495073s
Mar 25 13:00:40.859: INFO: Pod "var-expansion-5e3a18f2-fc60-4628-b1f7-ba59d752ce95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01724421s
STEP: Saw pod success
Mar 25 13:00:40.859: INFO: Pod "var-expansion-5e3a18f2-fc60-4628-b1f7-ba59d752ce95" satisfied condition "success or failure"
Mar 25 13:00:40.861: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-5e3a18f2-fc60-4628-b1f7-ba59d752ce95 container dapi-container:
STEP: delete the pod
Mar 25 13:00:40.882: INFO: Waiting for pod var-expansion-5e3a18f2-fc60-4628-b1f7-ba59d752ce95 to disappear
Mar 25 13:00:40.908: INFO: Pod var-expansion-5e3a18f2-fc60-4628-b1f7-ba59d752ce95 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:00:40.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3428" for this suite.
Mar 25 13:00:46.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:00:47.006: INFO: namespace var-expansion-3428 deletion completed in 6.09497171s

• [SLOW TEST:10.243 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
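The substitution this spec verifies uses the $(VAR_NAME) syntax, which the kubelet expands from the container's own env before handing args to the runtime. A minimal sketch with assumed names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # illustrative
spec:
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["/bin/sh", "-c"]
    # $(MY_VAR) is replaced by the kubelet, so the shell receives
    # "echo expanded-value" rather than performing the expansion itself.
    args: ["echo $(MY_VAR)"]
    env:
    - name: MY_VAR
      value: "expanded-value"
  restartPolicy: Never
```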
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:00:47.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 25 13:00:47.089: INFO: Waiting up to 5m0s for pod "pod-6e23325b-4f20-4afd-9504-7bfca77a7693" in namespace "emptydir-9706" to be "success or failure"
Mar 25 13:00:47.097: INFO: Pod "pod-6e23325b-4f20-4afd-9504-7bfca77a7693": Phase="Pending", Reason="", readiness=false. Elapsed: 7.800158ms
Mar 25 13:00:49.102: INFO: Pod "pod-6e23325b-4f20-4afd-9504-7bfca77a7693": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012369381s
Mar 25 13:00:51.106: INFO: Pod "pod-6e23325b-4f20-4afd-9504-7bfca77a7693": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016946718s
STEP: Saw pod success
Mar 25 13:00:51.106: INFO: Pod "pod-6e23325b-4f20-4afd-9504-7bfca77a7693" satisfied condition "success or failure"
Mar 25 13:00:51.109: INFO: Trying to get logs from node iruya-worker pod pod-6e23325b-4f20-4afd-9504-7bfca77a7693 container test-container:
STEP: delete the pod
Mar 25 13:00:51.144: INFO: Waiting for pod pod-6e23325b-4f20-4afd-9504-7bfca77a7693 to disappear
Mar 25 13:00:51.150: INFO: Pod pod-6e23325b-4f20-4afd-9504-7bfca77a7693 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:00:51.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9706" for this suite.
Mar 25 13:00:57.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:00:57.275: INFO: namespace emptydir-9706 deletion completed in 6.121223907s

• [SLOW TEST:10.268 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:00:57.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 25 13:00:57.349: INFO: Waiting up to 5m0s for pod "downwardapi-volume-858b7946-96b5-4eca-89b2-fd3bdefe76e4" in namespace "projected-8328" to be "success or failure"
Mar 25 13:00:57.351: INFO: Pod "downwardapi-volume-858b7946-96b5-4eca-89b2-fd3bdefe76e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.507663ms
Mar 25 13:00:59.355: INFO: Pod "downwardapi-volume-858b7946-96b5-4eca-89b2-fd3bdefe76e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006367319s
Mar 25 13:01:01.358: INFO: Pod "downwardapi-volume-858b7946-96b5-4eca-89b2-fd3bdefe76e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009344369s
STEP: Saw pod success
Mar 25 13:01:01.358: INFO: Pod "downwardapi-volume-858b7946-96b5-4eca-89b2-fd3bdefe76e4" satisfied condition "success or failure"
Mar 25 13:01:01.361: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-858b7946-96b5-4eca-89b2-fd3bdefe76e4 container client-container:
STEP: delete the pod
Mar 25 13:01:01.377: INFO: Waiting for pod downwardapi-volume-858b7946-96b5-4eca-89b2-fd3bdefe76e4 to disappear
Mar 25 13:01:01.394: INFO: Pod downwardapi-volume-858b7946-96b5-4eca-89b2-fd3bdefe76e4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:01:01.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8328" for this suite.
Mar 25 13:01:07.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:01:07.487: INFO: namespace projected-8328 deletion completed in 6.089378832s

• [SLOW TEST:10.212 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
Elapsed: 4.011221976s STEP: Saw pod success Mar 25 13:01:11.576: INFO: Pod "client-containers-60a7f540-1864-4772-959b-02ae4e2c68d9" satisfied condition "success or failure" Mar 25 13:01:11.579: INFO: Trying to get logs from node iruya-worker pod client-containers-60a7f540-1864-4772-959b-02ae4e2c68d9 container test-container: STEP: delete the pod Mar 25 13:01:11.599: INFO: Waiting for pod client-containers-60a7f540-1864-4772-959b-02ae4e2c68d9 to disappear Mar 25 13:01:11.603: INFO: Pod client-containers-60a7f540-1864-4772-959b-02ae4e2c68d9 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:01:11.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8474" for this suite. Mar 25 13:01:17.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:01:17.727: INFO: namespace containers-8474 deletion completed in 6.120026632s • [SLOW TEST:10.239 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:01:17.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-7051/configmap-test-a325b215-980d-4e88-b5ae-3bfeea4bb945 STEP: Creating a pod to test consume configMaps Mar 25 13:01:17.814: INFO: Waiting up to 5m0s for pod "pod-configmaps-f08fbe2a-8cf6-4ff8-b7ac-77b3e2b9f21b" in namespace "configmap-7051" to be "success or failure" Mar 25 13:01:17.817: INFO: Pod "pod-configmaps-f08fbe2a-8cf6-4ff8-b7ac-77b3e2b9f21b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213754ms Mar 25 13:01:19.821: INFO: Pod "pod-configmaps-f08fbe2a-8cf6-4ff8-b7ac-77b3e2b9f21b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00625276s Mar 25 13:01:21.825: INFO: Pod "pod-configmaps-f08fbe2a-8cf6-4ff8-b7ac-77b3e2b9f21b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010442299s STEP: Saw pod success Mar 25 13:01:21.825: INFO: Pod "pod-configmaps-f08fbe2a-8cf6-4ff8-b7ac-77b3e2b9f21b" satisfied condition "success or failure" Mar 25 13:01:21.827: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-f08fbe2a-8cf6-4ff8-b7ac-77b3e2b9f21b container env-test: STEP: delete the pod Mar 25 13:01:21.877: INFO: Waiting for pod pod-configmaps-f08fbe2a-8cf6-4ff8-b7ac-77b3e2b9f21b to disappear Mar 25 13:01:21.900: INFO: Pod pod-configmaps-f08fbe2a-8cf6-4ff8-b7ac-77b3e2b9f21b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:01:21.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7051" for this suite. Mar 25 13:01:27.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:01:28.040: INFO: namespace configmap-7051 deletion completed in 6.106783722s • [SLOW TEST:10.313 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:01:28.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 13:01:28.147: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3e9cb7b-2b2f-469f-b122-16aeeeac40ec" in namespace "downward-api-6547" to be "success or failure" Mar 25 13:01:28.164: INFO: Pod "downwardapi-volume-c3e9cb7b-2b2f-469f-b122-16aeeeac40ec": Phase="Pending", Reason="", readiness=false. Elapsed: 17.207558ms Mar 25 13:01:30.168: INFO: Pod "downwardapi-volume-c3e9cb7b-2b2f-469f-b122-16aeeeac40ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021199655s Mar 25 13:01:32.172: INFO: Pod "downwardapi-volume-c3e9cb7b-2b2f-469f-b122-16aeeeac40ec": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025583731s STEP: Saw pod success Mar 25 13:01:32.172: INFO: Pod "downwardapi-volume-c3e9cb7b-2b2f-469f-b122-16aeeeac40ec" satisfied condition "success or failure" Mar 25 13:01:32.176: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c3e9cb7b-2b2f-469f-b122-16aeeeac40ec container client-container: STEP: delete the pod Mar 25 13:01:32.212: INFO: Waiting for pod downwardapi-volume-c3e9cb7b-2b2f-469f-b122-16aeeeac40ec to disappear Mar 25 13:01:32.226: INFO: Pod downwardapi-volume-c3e9cb7b-2b2f-469f-b122-16aeeeac40ec no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:01:32.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6547" for this suite. Mar 25 13:01:38.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:01:38.318: INFO: namespace downward-api-6547 deletion completed in 6.088411237s • [SLOW TEST:10.277 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:01:38.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-55758564-149d-4ca6-867c-b1c923e7aa44 STEP: Creating a pod to test consume configMaps Mar 25 13:01:38.420: INFO: Waiting up to 5m0s for pod "pod-configmaps-b3db710d-8f83-4500-beb6-9a067fc8bef3" in namespace "configmap-5209" to be "success or failure" Mar 25 13:01:38.439: INFO: Pod "pod-configmaps-b3db710d-8f83-4500-beb6-9a067fc8bef3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.92998ms Mar 25 13:01:40.444: INFO: Pod "pod-configmaps-b3db710d-8f83-4500-beb6-9a067fc8bef3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0235s Mar 25 13:01:42.448: INFO: Pod "pod-configmaps-b3db710d-8f83-4500-beb6-9a067fc8bef3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02828553s STEP: Saw pod success Mar 25 13:01:42.448: INFO: Pod "pod-configmaps-b3db710d-8f83-4500-beb6-9a067fc8bef3" satisfied condition "success or failure" Mar 25 13:01:42.452: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b3db710d-8f83-4500-beb6-9a067fc8bef3 container configmap-volume-test: STEP: delete the pod Mar 25 13:01:42.468: INFO: Waiting for pod pod-configmaps-b3db710d-8f83-4500-beb6-9a067fc8bef3 to disappear Mar 25 13:01:42.520: INFO: Pod pod-configmaps-b3db710d-8f83-4500-beb6-9a067fc8bef3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:01:42.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5209" for this suite. Mar 25 13:01:48.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:01:48.638: INFO: namespace configmap-5209 deletion completed in 6.113885897s • [SLOW TEST:10.319 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:01:48.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-6e747062-8cda-4685-8757-84706d95f036 STEP: Creating secret with name secret-projected-all-test-volume-5a6da0f0-996b-4700-9a02-2772964861eb STEP: Creating a pod to test Check all projections for projected volume plugin Mar 25 13:01:48.758: INFO: Waiting up to 5m0s for pod "projected-volume-5bcf7c20-960e-4db5-96dd-43fc702e646f" in namespace "projected-4120" to be "success or failure" Mar 25 13:01:48.772: INFO: Pod "projected-volume-5bcf7c20-960e-4db5-96dd-43fc702e646f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.489735ms Mar 25 13:01:50.776: INFO: Pod "projected-volume-5bcf7c20-960e-4db5-96dd-43fc702e646f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018276257s Mar 25 13:01:52.780: INFO: Pod "projected-volume-5bcf7c20-960e-4db5-96dd-43fc702e646f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022830683s STEP: Saw pod success Mar 25 13:01:52.780: INFO: Pod "projected-volume-5bcf7c20-960e-4db5-96dd-43fc702e646f" satisfied condition "success or failure" Mar 25 13:01:52.784: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-5bcf7c20-960e-4db5-96dd-43fc702e646f container projected-all-volume-test: STEP: delete the pod Mar 25 13:01:52.812: INFO: Waiting for pod projected-volume-5bcf7c20-960e-4db5-96dd-43fc702e646f to disappear Mar 25 13:01:52.823: INFO: Pod projected-volume-5bcf7c20-960e-4db5-96dd-43fc702e646f no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:01:52.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4120" for this suite. Mar 25 13:01:58.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:01:58.940: INFO: namespace projected-4120 deletion completed in 6.114648907s • [SLOW TEST:10.302 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:01:58.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 13:01:59.015: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
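(For reference: the DaemonSet here is constructed in Go by the e2e framework, not from a manifest. A hand-written equivalent might look like the sketch below; the name and image are taken from the log, while the labels, the container name, and the explicit RollingUpdate strategy are assumptions, RollingUpdate being the apps/v1 default anyway.)

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # assumed label; the real test picks its own
  updateStrategy:
    type: RollingUpdate            # the strategy this spec exercises
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                  # assumed container name
        image: docker.io/library/nginx:1.14-alpine
EOF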
Mar 25 13:01:59.026: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:01:59.041: INFO: Number of nodes with available pods: 0 Mar 25 13:01:59.041: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:02:00.090: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:00.094: INFO: Number of nodes with available pods: 0 Mar 25 13:02:00.094: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:02:01.046: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:01.050: INFO: Number of nodes with available pods: 0 Mar 25 13:02:01.050: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:02:02.046: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:02.051: INFO: Number of nodes with available pods: 0 Mar 25 13:02:02.051: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:02:03.046: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:03.055: INFO: Number of nodes with available pods: 2 Mar 25 13:02:03.055: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 25 13:02:03.111: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:03.111: INFO: Wrong image for pod: daemon-set-rwkpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:03.128: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:04.131: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:04.132: INFO: Wrong image for pod: daemon-set-rwkpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:04.135: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:05.133: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:05.133: INFO: Wrong image for pod: daemon-set-rwkpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:05.137: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:06.132: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
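(The "Update daemon pods image" step above can be reproduced from the CLI. A sketch, assuming the DaemonSet from the previous note; the container name app is that sketch's assumption, not the test's:)

# Roll every daemon pod to the image the test now expects, then watch it converge.
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set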
Mar 25 13:02:06.132: INFO: Wrong image for pod: daemon-set-rwkpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:06.132: INFO: Pod daemon-set-rwkpn is not available Mar 25 13:02:06.136: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:07.132: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:07.132: INFO: Pod daemon-set-wf67c is not available Mar 25 13:02:07.136: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:08.150: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:08.150: INFO: Pod daemon-set-wf67c is not available Mar 25 13:02:08.154: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:09.133: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:09.133: INFO: Pod daemon-set-wf67c is not available Mar 25 13:02:09.137: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:10.132: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:10.138: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:11.132: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:11.132: INFO: Pod daemon-set-2tml6 is not available Mar 25 13:02:11.136: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:12.132: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:12.132: INFO: Pod daemon-set-2tml6 is not available Mar 25 13:02:12.136: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:13.134: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:13.134: INFO: Pod daemon-set-2tml6 is not available Mar 25 13:02:13.138: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:14.137: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 25 13:02:14.137: INFO: Pod daemon-set-2tml6 is not available Mar 25 13:02:14.140: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:15.132: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:15.132: INFO: Pod daemon-set-2tml6 is not available Mar 25 13:02:15.136: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:16.132: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:16.132: INFO: Pod daemon-set-2tml6 is not available Mar 25 13:02:16.136: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:17.133: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:17.133: INFO: Pod daemon-set-2tml6 is not available Mar 25 13:02:17.137: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:18.193: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:18.194: INFO: Pod daemon-set-2tml6 is not available Mar 25 13:02:18.197: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:19.133: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:19.133: INFO: Pod daemon-set-2tml6 is not available Mar 25 13:02:19.138: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:20.132: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:20.132: INFO: Pod daemon-set-2tml6 is not available Mar 25 13:02:20.136: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:21.133: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 25 13:02:21.133: INFO: Pod daemon-set-2tml6 is not available Mar 25 13:02:21.138: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:22.138: INFO: Wrong image for pod: daemon-set-2tml6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
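(Every poll above skips iruya-control-plane because the daemon pods carry no toleration for its node-role.kubernetes.io/master:NoSchedule taint. That is expected here; if daemon pods were wanted on such a node, the usual fix is a toleration in the pod template. A hedged sketch against the DaemonSet from the earlier note:)

# Add a toleration so daemon pods may also schedule onto master-tainted nodes.
kubectl patch daemonset daemon-set --type=json -p '[{"op":"add","path":"/spec/template/spec/tolerations","value":[{"key":"node-role.kubernetes.io/master","operator":"Exists","effect":"NoSchedule"}]}]'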
Mar 25 13:02:22.138: INFO: Pod daemon-set-2tml6 is not available Mar 25 13:02:22.142: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:23.133: INFO: Pod daemon-set-s45w4 is not available Mar 25 13:02:23.136: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 25 13:02:23.139: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:23.142: INFO: Number of nodes with available pods: 1 Mar 25 13:02:23.142: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:02:24.171: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:24.174: INFO: Number of nodes with available pods: 1 Mar 25 13:02:24.174: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:02:25.147: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:25.151: INFO: Number of nodes with available pods: 1 Mar 25 13:02:25.151: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:02:26.151: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:02:26.161: INFO: Number of nodes with available pods: 2 Mar 25 13:02:26.161: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3866, will wait for the garbage collector to delete the pods Mar 25 13:02:26.231: INFO: Deleting DaemonSet.extensions daemon-set took: 6.106048ms Mar 25 13:02:26.532: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.240534ms Mar 25 13:02:32.235: INFO: Number of nodes with available pods: 0 Mar 25 13:02:32.235: INFO: Number of running nodes: 0, number of available pods: 0 Mar 25 13:02:32.238: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3866/daemonsets","resourceVersion":"1772627"},"items":null} Mar 25 13:02:32.241: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3866/pods","resourceVersion":"1772627"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:02:32.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3866" for this suite. 
Mar 25 13:02:38.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:02:38.390: INFO: namespace daemonsets-3866 deletion completed in 6.119566269s • [SLOW TEST:39.449 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:02:38.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 25 13:02:38.473: INFO: Waiting up to 5m0s for pod "pod-a2f46d16-dab8-4b6e-811d-365f6f319383" in namespace "emptydir-5585" to be "success or failure" Mar 25 13:02:38.493: INFO: Pod "pod-a2f46d16-dab8-4b6e-811d-365f6f319383": Phase="Pending", Reason="", readiness=false. Elapsed: 20.337732ms Mar 25 13:02:40.522: INFO: Pod "pod-a2f46d16-dab8-4b6e-811d-365f6f319383": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048649542s Mar 25 13:02:42.525: INFO: Pod "pod-a2f46d16-dab8-4b6e-811d-365f6f319383": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052450378s STEP: Saw pod success Mar 25 13:02:42.525: INFO: Pod "pod-a2f46d16-dab8-4b6e-811d-365f6f319383" satisfied condition "success or failure" Mar 25 13:02:42.529: INFO: Trying to get logs from node iruya-worker2 pod pod-a2f46d16-dab8-4b6e-811d-365f6f319383 container test-container: STEP: delete the pod Mar 25 13:02:42.544: INFO: Waiting for pod pod-a2f46d16-dab8-4b6e-811d-365f6f319383 to disappear Mar 25 13:02:42.548: INFO: Pod pod-a2f46d16-dab8-4b6e-811d-365f6f319383 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:02:42.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5585" for this suite. 
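(The emptydir spec that just ran writes a file with mode 0777 into an emptyDir on the node's default medium and checks the result. The suite does this with its mounttest image and dedicated flags; the busybox pod below is only an illustrative stand-in:)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Create a file, force mode 0777, and print it for verification.
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium, i.e. node-local disk
EOF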
Mar 25 13:02:48.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:02:48.641: INFO: namespace emptydir-5585 deletion completed in 6.089213711s • [SLOW TEST:10.251 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:02:48.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-4632 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-4632 STEP: Deleting pre-stop pod Mar 25 13:03:01.732: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:03:01.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4632" for this suite. 
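(The PreStop flow works because deleting the tester pod runs its preStop hook before termination, and the hook reports to the server pod; that is the "prestop": 1 counter in the JSON above. A minimal sketch of a pod wired that way; the names and URL are assumptions, not the test's actual wiring:)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tester-demo
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          # Run by the kubelet before the container is stopped.
          command: ["wget", "-qO-", "http://server:8080/prestop"]
EOF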
Mar 25 13:03:39.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:03:39.868: INFO: namespace prestop-4632 deletion completed in 38.110748137s • [SLOW TEST:51.226 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:03:39.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 25 13:03:39.941: INFO: Waiting up to 5m0s for pod "pod-a240533b-37d7-4c16-a10d-b426bc4c774b" in namespace "emptydir-5521" to be "success or failure" Mar 25 13:03:39.944: INFO: Pod "pod-a240533b-37d7-4c16-a10d-b426bc4c774b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.166234ms Mar 25 13:03:41.948: INFO: Pod "pod-a240533b-37d7-4c16-a10d-b426bc4c774b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006930429s Mar 25 13:03:43.952: INFO: Pod "pod-a240533b-37d7-4c16-a10d-b426bc4c774b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010554812s STEP: Saw pod success Mar 25 13:03:43.952: INFO: Pod "pod-a240533b-37d7-4c16-a10d-b426bc4c774b" satisfied condition "success or failure" Mar 25 13:03:43.955: INFO: Trying to get logs from node iruya-worker pod pod-a240533b-37d7-4c16-a10d-b426bc4c774b container test-container: STEP: delete the pod Mar 25 13:03:43.988: INFO: Waiting for pod pod-a240533b-37d7-4c16-a10d-b426bc4c774b to disappear Mar 25 13:03:44.001: INFO: Pod pod-a240533b-37d7-4c16-a10d-b426bc4c774b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:03:44.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5521" for this suite. 
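(This emptydir variant is the same write-and-check pattern, but performed as a non-root user with mode 0666. The difference that matters is the securityContext; the sketch below is illustrative, with an arbitrary non-zero UID:)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # any non-root UID; the exact value is assumed
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF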
Mar 25 13:03:50.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:03:50.160: INFO: namespace emptydir-5521 deletion completed in 6.156049797s • [SLOW TEST:10.292 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:03:50.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 25 13:03:50.217: INFO: Waiting up to 5m0s for pod "downward-api-4bdda80c-6e9b-4192-b698-2ba05ec61e3d" in namespace "downward-api-7062" to be "success or failure" Mar 25 13:03:50.229: INFO: Pod "downward-api-4bdda80c-6e9b-4192-b698-2ba05ec61e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.810438ms Mar 25 13:03:52.255: INFO: Pod "downward-api-4bdda80c-6e9b-4192-b698-2ba05ec61e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037755269s Mar 25 13:03:54.259: INFO: Pod "downward-api-4bdda80c-6e9b-4192-b698-2ba05ec61e3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041887775s STEP: Saw pod success Mar 25 13:03:54.259: INFO: Pod "downward-api-4bdda80c-6e9b-4192-b698-2ba05ec61e3d" satisfied condition "success or failure" Mar 25 13:03:54.262: INFO: Trying to get logs from node iruya-worker2 pod downward-api-4bdda80c-6e9b-4192-b698-2ba05ec61e3d container dapi-container: STEP: delete the pod Mar 25 13:03:54.279: INFO: Waiting for pod downward-api-4bdda80c-6e9b-4192-b698-2ba05ec61e3d to disappear Mar 25 13:03:54.298: INFO: Pod downward-api-4bdda80c-6e9b-4192-b698-2ba05ec61e3d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:03:54.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7062" for this suite. 
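(The Downward API injection verified here goes through env valueFrom.resourceFieldRef, which resolves against the container's own resources. A sketch with assumed request and limit values:)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}
      limits:   {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF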
Mar 25 13:04:00.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:04:00.396: INFO: namespace downward-api-7062 deletion completed in 6.094347639s • [SLOW TEST:10.235 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:04:00.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5900.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5900.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5900.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5900.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5900.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5900.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 13:04:06.548: INFO: DNS probes using dns-5900/dns-test-f2282d64-7fcf-4e89-8a66-1bae589fd681 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:04:06.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5900" for this suite. 
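(The two probe scripts above boil down to checking the hosts entries the kubelet writes for a pod and resolving the pod's own A record. A quick manual spot-check along the same lines; <pod> is a placeholder, and getent availability depends on the image:)

# Inspect the kubelet-managed /etc/hosts inside a running pod.
kubectl exec <pod> -- cat /etc/hosts
# Resolve the pod's own hostname, roughly what the probes assert.
kubectl exec <pod> -- sh -c 'getent hosts "$(hostname)"'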
Mar 25 13:04:12.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:04:12.828: INFO: namespace dns-5900 deletion completed in 6.240665183s • [SLOW TEST:12.432 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:04:12.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 25 13:04:12.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-2796' Mar 25 13:04:15.165: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 25 13:04:15.165: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Mar 25 13:04:17.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2796' Mar 25 13:04:17.364: INFO: stderr: "" Mar 25 13:04:17.364: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:04:17.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2796" for this suite. 
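(Worth noting in the stderr above: the deployment generator for kubectl run was already deprecated in this release. The replacements the warning itself points to:)

# Deprecated form exercised by the spec:
#   kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1
# Non-deprecated equivalents:
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine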
Mar 25 13:05:45.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:05:45.466: INFO: namespace kubectl-2796 deletion completed in 1m28.099434836s • [SLOW TEST:92.637 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:05:45.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 13:05:45.506: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:05:49.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7853" for this suite. 
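(This spec drives the pods/exec subresource of the API server over a websocket. kubectl exec hits the same subresource from the CLI, though over a different streaming transport, so treat this only as an analogy for what the test does; <pod> is a placeholder:)

kubectl exec <pod> -- echo remote command execution works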
Mar 25 13:06:35.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:06:35.748: INFO: namespace pods-7853 deletion completed in 46.096499304s • [SLOW TEST:50.282 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:06:35.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-8844fd11-d32e-48d0-8958-26b2ae82e1d1 STEP: Creating a pod to test consume configMaps Mar 25 13:06:35.807: INFO: Waiting up to 5m0s for pod "pod-configmaps-4732427d-7f8d-4549-91da-ecb1780eb441" in namespace "configmap-1235" to be "success or failure" Mar 25 13:06:35.810: INFO: Pod "pod-configmaps-4732427d-7f8d-4549-91da-ecb1780eb441": Phase="Pending", Reason="", readiness=false. Elapsed: 3.45125ms Mar 25 13:06:37.815: INFO: Pod "pod-configmaps-4732427d-7f8d-4549-91da-ecb1780eb441": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007907261s Mar 25 13:06:39.819: INFO: Pod "pod-configmaps-4732427d-7f8d-4549-91da-ecb1780eb441": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012070623s STEP: Saw pod success Mar 25 13:06:39.819: INFO: Pod "pod-configmaps-4732427d-7f8d-4549-91da-ecb1780eb441" satisfied condition "success or failure" Mar 25 13:06:39.823: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4732427d-7f8d-4549-91da-ecb1780eb441 container configmap-volume-test: STEP: delete the pod Mar 25 13:06:39.844: INFO: Waiting for pod pod-configmaps-4732427d-7f8d-4549-91da-ecb1780eb441 to disappear Mar 25 13:06:39.848: INFO: Pod pod-configmaps-4732427d-7f8d-4549-91da-ecb1780eb441 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:06:39.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1235" for this suite. 
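(Consuming a ConfigMap as a volume while running non-root, as this spec does, needs nothing special beyond runAsUser, since ConfigMap volume files default to world-readable mode 0644. An illustrative sketch with assumed names and data:)

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF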
Mar 25 13:06:45.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:06:45.943: INFO: namespace configmap-1235 deletion completed in 6.091554665s • [SLOW TEST:10.194 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:06:45.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-c9ffc4eb-545e-4e58-b954-04e8efa22b29 STEP: Creating a pod to test consume secrets Mar 25 13:06:46.030: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-138786fb-28f9-448b-93bd-239449daf549" in namespace "projected-5865" to be "success or failure" Mar 25 13:06:46.035: INFO: Pod "pod-projected-secrets-138786fb-28f9-448b-93bd-239449daf549": Phase="Pending", Reason="", readiness=false. Elapsed: 4.588655ms Mar 25 13:06:48.039: INFO: Pod "pod-projected-secrets-138786fb-28f9-448b-93bd-239449daf549": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009054451s Mar 25 13:06:50.043: INFO: Pod "pod-projected-secrets-138786fb-28f9-448b-93bd-239449daf549": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013371928s STEP: Saw pod success Mar 25 13:06:50.043: INFO: Pod "pod-projected-secrets-138786fb-28f9-448b-93bd-239449daf549" satisfied condition "success or failure" Mar 25 13:06:50.047: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-138786fb-28f9-448b-93bd-239449daf549 container projected-secret-volume-test: STEP: delete the pod Mar 25 13:06:50.064: INFO: Waiting for pod pod-projected-secrets-138786fb-28f9-448b-93bd-239449daf549 to disappear Mar 25 13:06:50.069: INFO: Pod pod-projected-secrets-138786fb-28f9-448b-93bd-239449daf549 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:06:50.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5865" for this suite. 
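(The defaultMode under test sits on the projected volume and applies to every projected file. A sketch with an assumed secret and mode; 0400 is only an example value:)

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # Show the mode, then read the file (runs as root, so 0400 still reads).
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: sec
      mountPath: /etc/projected
  volumes:
  - name: sec
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: demo-secret
EOF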
Mar 25 13:06:56.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:06:56.170: INFO: namespace projected-5865 deletion completed in 6.098718023s • [SLOW TEST:10.227 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:06:56.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 13:06:56.228: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c80c966d-37e9-46ee-8e33-7dcc9384fdd1" in namespace "projected-1414" to be "success or failure" Mar 25 13:06:56.231: INFO: Pod "downwardapi-volume-c80c966d-37e9-46ee-8e33-7dcc9384fdd1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.11835ms Mar 25 13:06:58.235: INFO: Pod "downwardapi-volume-c80c966d-37e9-46ee-8e33-7dcc9384fdd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006974831s Mar 25 13:07:00.239: INFO: Pod "downwardapi-volume-c80c966d-37e9-46ee-8e33-7dcc9384fdd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011227017s STEP: Saw pod success Mar 25 13:07:00.239: INFO: Pod "downwardapi-volume-c80c966d-37e9-46ee-8e33-7dcc9384fdd1" satisfied condition "success or failure" Mar 25 13:07:00.243: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c80c966d-37e9-46ee-8e33-7dcc9384fdd1 container client-container: STEP: delete the pod Mar 25 13:07:00.323: INFO: Waiting for pod downwardapi-volume-c80c966d-37e9-46ee-8e33-7dcc9384fdd1 to disappear Mar 25 13:07:00.332: INFO: Pod downwardapi-volume-c80c966d-37e9-46ee-8e33-7dcc9384fdd1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:07:00.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1414" for this suite. 
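(The point this spec asserts: when a container declares no cpu limit, a downward API resourceFieldRef for limits.cpu falls back to the node's allocatable cpu. In a downward API volume the resourceFieldRef must also name its container. An illustrative sketch:)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # No resources.limits here, so cpu_limit reports node allocatable cpu.
    command: ["cat", "/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF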
Mar 25 13:07:06.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:07:06.428: INFO: namespace projected-1414 deletion completed in 6.093522506s • [SLOW TEST:10.257 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:07:06.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1496 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 25 13:07:06.552: INFO: Found 0 stateful pods, waiting for 3 Mar 25 13:07:16.557: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 25 13:07:16.557: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 25 13:07:16.557: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Mar 25 13:07:26.557: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 25 13:07:26.557: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 25 13:07:26.557: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 25 13:07:26.584: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 25 13:07:36.678: INFO: Updating stateful set ss2 Mar 25 13:07:36.705: INFO: Waiting for Pod statefulset-1496/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 25 13:07:46.855: INFO: Found 2 stateful pods, waiting for 3 Mar 25 13:07:56.860: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 25 13:07:56.860: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 25 13:07:56.860: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - 
Ready=true STEP: Performing a phased rolling update Mar 25 13:07:56.885: INFO: Updating stateful set ss2 Mar 25 13:07:56.897: INFO: Waiting for Pod statefulset-1496/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 25 13:08:06.924: INFO: Updating stateful set ss2 Mar 25 13:08:06.930: INFO: Waiting for StatefulSet statefulset-1496/ss2 to complete update Mar 25 13:08:06.930: INFO: Waiting for Pod statefulset-1496/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 25 13:08:16.938: INFO: Deleting all statefulset in ns statefulset-1496 Mar 25 13:08:16.940: INFO: Scaling statefulset ss2 to 0 Mar 25 13:08:36.955: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 13:08:36.959: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:08:36.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1496" for this suite. Mar 25 13:08:42.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:08:43.060: INFO: namespace statefulset-1496 deletion completed in 6.084861956s • [SLOW TEST:96.632 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:08:43.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 13:08:43.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5708' Mar 25 13:08:43.449: INFO: stderr: "" Mar 25 13:08:43.449: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 25 13:08:43.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5708' Mar 25 13:08:43.718: INFO: stderr: "" Mar 25 13:08:43.718: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
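The StatefulSet canary and phased rollout above hinge on spec.updateStrategy.rollingUpdate.partition: only pods with an ordinal greater than or equal to the partition move to the new revision, so a partition above the replica count holds every pod back, replicas-1 canaries the highest ordinal, and stepping the partition down phases the update across the rest. A sketch of setting the partition with apps/v1 types and v1.15-style client-go; cs and the object names are hypothetical:

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setPartition pins pods with ordinal < p to the current revision;
// only ordinals >= p roll to the new template.
func setPartition(cs kubernetes.Interface, ns, name string, p int32) error {
	ss, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &p,
		},
	}
	_, err = cs.AppsV1().StatefulSets(ns).Update(ss)
	return err
}

The log's "Restoring Pods to the correct revision when they are deleted" step falls out of the same mechanism: a deleted canary pod is recreated at whichever revision its ordinal is pinned to.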
Mar 25 13:08:44.722: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:08:44.722: INFO: Found 0 / 1 Mar 25 13:08:45.722: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:08:45.722: INFO: Found 0 / 1 Mar 25 13:08:46.723: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:08:46.723: INFO: Found 1 / 1 Mar 25 13:08:46.723: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 25 13:08:46.728: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:08:46.728: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 25 13:08:46.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-cxptg --namespace=kubectl-5708' Mar 25 13:08:46.851: INFO: stderr: "" Mar 25 13:08:46.851: INFO: stdout: "Name: redis-master-cxptg\nNamespace: kubectl-5708\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Wed, 25 Mar 2020 13:08:43 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.95\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://ac2abec3610d5c27c0074cbdabb600500e23b7a3785332a579a7c02772145704\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 25 Mar 2020 13:08:45 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-bz66s (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-bz66s:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-bz66s\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-5708/redis-master-cxptg to iruya-worker\n Normal Pulled 2s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n" Mar 25 13:08:46.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-5708' Mar 25 13:08:46.998: INFO: stderr: "" Mar 25 13:08:46.998: INFO: stdout: "Name: redis-master\nNamespace: kubectl-5708\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-cxptg\n" Mar 25 13:08:46.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-5708' Mar 25 13:08:47.099: INFO: stderr: "" Mar 25 13:08:47.099: INFO: stdout: "Name: redis-master\nNamespace: kubectl-5708\nLabels: app=redis\n 
role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.96.77.105\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.95:6379\nSession Affinity: None\nEvents: \n" Mar 25 13:08:47.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Mar 25 13:08:47.226: INFO: stderr: "" Mar 25 13:08:47.226: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 25 Mar 2020 13:08:17 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 25 Mar 2020 13:08:17 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 25 Mar 2020 13:08:17 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 25 Mar 2020 13:08:17 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 9d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 9d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 25 13:08:47.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5708' Mar 25 13:08:47.336: INFO: stderr: "" Mar 25 13:08:47.336: INFO: 
stdout: "Name: kubectl-5708\nLabels: e2e-framework=kubectl\n e2e-run=42411b76-a88b-4a2b-9d8e-e1e6be74f281\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:08:47.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5708" for this suite. Mar 25 13:09:09.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:09:09.424: INFO: namespace kubectl-5708 deletion completed in 22.085024126s • [SLOW TEST:26.364 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:09:09.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Mar 25 13:09:09.470: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:09:09.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6073" for this suite. 
Mar 25 13:09:15.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:09:15.664: INFO: namespace kubectl-6073 deletion completed in 6.09758025s • [SLOW TEST:6.239 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:09:15.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-cd6292ed-8543-4210-a340-945c9aa921cd STEP: Creating secret with name s-test-opt-upd-20fcb08c-a3fc-48a9-beb1-d4e58860f236 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-cd6292ed-8543-4210-a340-945c9aa921cd STEP: Updating secret s-test-opt-upd-20fcb08c-a3fc-48a9-beb1-d4e58860f236 STEP: Creating secret with name s-test-opt-create-afd510da-2637-4dc7-9546-d6bdefbbaa71 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:10:40.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4349" for this suite. 
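The secret volumes above are mounted with optional set, so the pod keeps running when a referenced secret is deleted, and the kubelet re-projects the mounted files on its sync loop once a secret is created or updated; that is the update the test waits to observe. A sketch of the volume wiring (corev1 types; the secret name is hypothetical):

import (
	corev1 "k8s.io/api/core/v1"
)

// optionalSecretVolume tolerates the secret being absent at mount time;
// when the secret later appears or changes, the kubelet eventually
// refreshes the projected files inside the running pod.
func optionalSecretVolume(secretName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: secretName,
				Optional:   &optional,
			},
		},
	}
}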
Mar 25 13:11:02.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:11:02.339: INFO: namespace secrets-4349 deletion completed in 22.089010311s • [SLOW TEST:106.676 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:11:02.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-b2c8d92e-b590-412f-862a-871d7abda3b0 STEP: Creating a pod to test consume secrets Mar 25 13:11:02.400: INFO: Waiting up to 5m0s for pod "pod-secrets-2cebb650-60d3-41e7-8105-4eb0e20b1768" in namespace "secrets-2256" to be "success or failure" Mar 25 13:11:02.403: INFO: Pod "pod-secrets-2cebb650-60d3-41e7-8105-4eb0e20b1768": Phase="Pending", Reason="", readiness=false. Elapsed: 2.723482ms Mar 25 13:11:04.407: INFO: Pod "pod-secrets-2cebb650-60d3-41e7-8105-4eb0e20b1768": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007092751s Mar 25 13:11:06.411: INFO: Pod "pod-secrets-2cebb650-60d3-41e7-8105-4eb0e20b1768": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011155494s STEP: Saw pod success Mar 25 13:11:06.411: INFO: Pod "pod-secrets-2cebb650-60d3-41e7-8105-4eb0e20b1768" satisfied condition "success or failure" Mar 25 13:11:06.414: INFO: Trying to get logs from node iruya-worker pod pod-secrets-2cebb650-60d3-41e7-8105-4eb0e20b1768 container secret-volume-test: STEP: delete the pod Mar 25 13:11:06.451: INFO: Waiting for pod pod-secrets-2cebb650-60d3-41e7-8105-4eb0e20b1768 to disappear Mar 25 13:11:06.463: INFO: Pod pod-secrets-2cebb650-60d3-41e7-8105-4eb0e20b1768 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:11:06.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2256" for this suite. 
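defaultMode in the test above sets the permission bits on every projected secret file, which the pod then verifies from inside; without it, files default to 0644 (the "defaultMode": 420 seen in pod JSON elsewhere in this log). A sketch of that one field; the 0400 mode is an arbitrary illustration, not necessarily the value the test used:

import (
	corev1 "k8s.io/api/core/v1"
)

// secretVolumeWithMode projects a secret whose files are created with
// the given permission bits instead of the 0644 default.
func secretVolumeWithMode(secretName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  secretName,
				DefaultMode: &mode,
			},
		},
	}
}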
Mar 25 13:11:12.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:11:12.557: INFO: namespace secrets-2256 deletion completed in 6.090182654s • [SLOW TEST:10.217 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:11:12.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Mar 25 13:11:16.629: INFO: Pod pod-hostip-d8af4e10-e601-4cff-a8b3-247abdd510dc has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:11:16.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3014" for this suite. 
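status.hostIP, asserted above, is populated once the pod is bound to a node and reflects that node's address (172.17.0.5 here, a kind worker container). Reading it back with v1.15-style client-go; cs and the pod coordinates are hypothetical:

import (
	"fmt"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printHostIP fetches a pod and prints the IP of the node it landed on.
func printHostIP(cs kubernetes.Interface, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("pod %s hostIP=%s podIP=%s\n", name, pod.Status.HostIP, pod.Status.PodIP)
	return nil
}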
Mar 25 13:11:38.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:11:38.717: INFO: namespace pods-3014 deletion completed in 22.083656978s • [SLOW TEST:26.160 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:11:38.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 25 13:11:38.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-7508' Mar 25 13:11:38.870: INFO: stderr: "" Mar 25 13:11:38.870: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Mar 25 13:11:43.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-7508 -o json' Mar 25 13:11:44.010: INFO: stderr: "" Mar 25 13:11:44.010: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-25T13:11:38Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-7508\",\n \"resourceVersion\": \"1774434\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7508/pods/e2e-test-nginx-pod\",\n \"uid\": \"e3576aa6-058f-49ce-a80c-c788dfc8db2b\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-5rg8j\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n 
\"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-5rg8j\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-5rg8j\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-25T13:11:38Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-25T13:11:41Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-25T13:11:41Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-25T13:11:38Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://9372a69c57698d16063ff07a07e7f52592fae738e91ea7b89e10fb4e4a66ba92\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-25T13:11:40Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.97\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-25T13:11:38Z\"\n }\n}\n" STEP: replace the image in the pod Mar 25 13:11:44.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7508' Mar 25 13:11:44.251: INFO: stderr: "" Mar 25 13:11:44.251: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Mar 25 13:11:44.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7508' Mar 25 13:11:52.172: INFO: stderr: "" Mar 25 13:11:52.172: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:11:52.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7508" for this suite. 
Mar 25 13:11:58.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:11:58.322: INFO: namespace kubectl-7508 deletion completed in 6.145358877s • [SLOW TEST:19.605 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:11:58.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-8adefddd-5ac4-4313-98b0-b92b8a023bda STEP: Creating a pod to test consume secrets Mar 25 13:11:58.419: INFO: Waiting up to 5m0s for pod "pod-secrets-71d19219-1e05-4ece-97d7-7f9e9071776b" in namespace "secrets-1966" to be "success or failure" Mar 25 13:11:58.434: INFO: Pod "pod-secrets-71d19219-1e05-4ece-97d7-7f9e9071776b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.296127ms Mar 25 13:12:00.475: INFO: Pod "pod-secrets-71d19219-1e05-4ece-97d7-7f9e9071776b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056370704s Mar 25 13:12:02.480: INFO: Pod "pod-secrets-71d19219-1e05-4ece-97d7-7f9e9071776b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060867362s STEP: Saw pod success Mar 25 13:12:02.480: INFO: Pod "pod-secrets-71d19219-1e05-4ece-97d7-7f9e9071776b" satisfied condition "success or failure" Mar 25 13:12:02.483: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-71d19219-1e05-4ece-97d7-7f9e9071776b container secret-env-test: STEP: delete the pod Mar 25 13:12:02.502: INFO: Waiting for pod pod-secrets-71d19219-1e05-4ece-97d7-7f9e9071776b to disappear Mar 25 13:12:02.507: INFO: Pod pod-secrets-71d19219-1e05-4ece-97d7-7f9e9071776b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:12:02.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1966" for this suite. 
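Consuming a secret through env vars, as above, goes through secretKeyRef rather than a volume; the container then only has to echo the variable. A sketch of the env entry (the variable name and key are assumptions):

import (
	corev1 "k8s.io/api/core/v1"
)

// secretEnvVar exposes a single key of a secret to the container as
// the SECRET_DATA environment variable.
func secretEnvVar(secretName string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: "SECRET_DATA",
		ValueFrom: &corev1.EnvVarSource{
			SecretKeyRef: &corev1.SecretKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				Key:                  "data-1",
			},
		},
	}
}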
Mar 25 13:12:08.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:12:08.605: INFO: namespace secrets-1966 deletion completed in 6.093655505s • [SLOW TEST:10.282 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:12:08.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Mar 25 13:12:08.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 25 13:12:08.853: INFO: stderr: "" Mar 25 13:12:08.853: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:12:08.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7128" for this suite. 
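kubectl api-versions, run above, is a thin wrapper over the discovery endpoint, and the test only asserts that the core "v1" group/version appears in the output. The equivalent check through the discovery client, given a rest.Config (cfg is hypothetical):

import (
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// hasCoreV1 lists the served API groups and looks for the core "v1".
func hasCoreV1(cfg *rest.Config) (bool, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return false, err
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		return false, err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" {
				return true, nil
			}
		}
	}
	return false, nil
}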
Mar 25 13:12:14.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:12:15.016: INFO: namespace kubectl-7128 deletion completed in 6.11155211s • [SLOW TEST:6.411 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:12:15.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 25 13:12:15.092: INFO: Waiting up to 5m0s for pod "pod-2075ba48-b7b1-45fb-a09f-a7648cc3a926" in namespace "emptydir-3558" to be "success or failure" Mar 25 13:12:15.094: INFO: Pod "pod-2075ba48-b7b1-45fb-a09f-a7648cc3a926": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278865ms Mar 25 13:12:17.099: INFO: Pod "pod-2075ba48-b7b1-45fb-a09f-a7648cc3a926": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006596787s Mar 25 13:12:19.103: INFO: Pod "pod-2075ba48-b7b1-45fb-a09f-a7648cc3a926": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011219983s STEP: Saw pod success Mar 25 13:12:19.103: INFO: Pod "pod-2075ba48-b7b1-45fb-a09f-a7648cc3a926" satisfied condition "success or failure" Mar 25 13:12:19.106: INFO: Trying to get logs from node iruya-worker2 pod pod-2075ba48-b7b1-45fb-a09f-a7648cc3a926 container test-container: STEP: delete the pod Mar 25 13:12:19.146: INFO: Waiting for pod pod-2075ba48-b7b1-45fb-a09f-a7648cc3a926 to disappear Mar 25 13:12:19.152: INFO: Pod pod-2075ba48-b7b1-45fb-a09f-a7648cc3a926 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:12:19.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3558" for this suite. 
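The (non-root,0666,tmpfs) variant above decomposes as: run the container as a non-root UID, write a file with mode 0666, and back the emptyDir with memory so it lands on tmpfs. A sketch of the relevant spec fields; the UID, image, and command are illustrative, not the test's exact values:

import (
	corev1 "k8s.io/api/core/v1"
)

// memoryBackedPodSpec runs as a non-root user and mounts a tmpfs-backed
// emptyDir; the shell command writes a file and shows its mode.
func memoryBackedPodSpec() corev1.PodSpec {
	uid := int64(1001)
	return corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
		RestartPolicy:   corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "busybox",
			Command: []string{"sh", "-c",
				"umask 0; echo hi > /test-volume/f && ls -l /test-volume/f"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "test-volume",
				MountPath: "/test-volume",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				EmptyDir: &corev1.EmptyDirVolumeSource{
					Medium: corev1.StorageMediumMemory,
				},
			},
		}},
	}
}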
Mar 25 13:12:25.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:12:25.264: INFO: namespace emptydir-3558 deletion completed in 6.108120785s • [SLOW TEST:10.247 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:12:25.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8740 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 13:12:25.364: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 25 13:12:53.473: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.200:8080/dial?request=hostName&protocol=http&host=10.244.1.199&port=8080&tries=1'] Namespace:pod-network-test-8740 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 13:12:53.473: INFO: >>> kubeConfig: /root/.kube/config I0325 13:12:53.508231 6 log.go:172] (0xc000449600) (0xc002607860) Create stream I0325 13:12:53.508266 6 log.go:172] (0xc000449600) (0xc002607860) Stream added, broadcasting: 1 I0325 13:12:53.510983 6 log.go:172] (0xc000449600) Reply frame received for 1 I0325 13:12:53.511032 6 log.go:172] (0xc000449600) (0xc0023b6000) Create stream I0325 13:12:53.511048 6 log.go:172] (0xc000449600) (0xc0023b6000) Stream added, broadcasting: 3 I0325 13:12:53.512271 6 log.go:172] (0xc000449600) Reply frame received for 3 I0325 13:12:53.512323 6 log.go:172] (0xc000449600) (0xc002d24820) Create stream I0325 13:12:53.512337 6 log.go:172] (0xc000449600) (0xc002d24820) Stream added, broadcasting: 5 I0325 13:12:53.513476 6 log.go:172] (0xc000449600) Reply frame received for 5 I0325 13:12:53.594869 6 log.go:172] (0xc000449600) Data frame received for 3 I0325 13:12:53.594900 6 log.go:172] (0xc0023b6000) (3) Data frame handling I0325 13:12:53.594920 6 log.go:172] (0xc0023b6000) (3) Data frame sent I0325 13:12:53.595386 6 log.go:172] (0xc000449600) Data frame received for 3 I0325 13:12:53.595412 6 log.go:172] (0xc0023b6000) (3) Data frame handling I0325 13:12:53.595466 6 log.go:172] (0xc000449600) Data frame received for 5 I0325 13:12:53.595520 6 log.go:172] (0xc002d24820) (5) Data frame handling I0325 13:12:53.597690 6 log.go:172] (0xc000449600) Data frame received for 1 I0325 13:12:53.597738 6 log.go:172] (0xc002607860) (1) Data frame handling I0325 
13:12:53.597772 6 log.go:172] (0xc002607860) (1) Data frame sent I0325 13:12:53.597793 6 log.go:172] (0xc000449600) (0xc002607860) Stream removed, broadcasting: 1 I0325 13:12:53.597926 6 log.go:172] (0xc000449600) (0xc002607860) Stream removed, broadcasting: 1 I0325 13:12:53.597958 6 log.go:172] (0xc000449600) (0xc0023b6000) Stream removed, broadcasting: 3 I0325 13:12:53.597977 6 log.go:172] (0xc000449600) (0xc002d24820) Stream removed, broadcasting: 5 Mar 25 13:12:53.598: INFO: Waiting for endpoints: map[] I0325 13:12:53.598354 6 log.go:172] (0xc000449600) Go away received Mar 25 13:12:53.601: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.200:8080/dial?request=hostName&protocol=http&host=10.244.2.98&port=8080&tries=1'] Namespace:pod-network-test-8740 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 13:12:53.601: INFO: >>> kubeConfig: /root/.kube/config I0325 13:12:53.636662 6 log.go:172] (0xc0029ba840) (0xc0023b6320) Create stream I0325 13:12:53.636684 6 log.go:172] (0xc0029ba840) (0xc0023b6320) Stream added, broadcasting: 1 I0325 13:12:53.639022 6 log.go:172] (0xc0029ba840) Reply frame received for 1 I0325 13:12:53.639076 6 log.go:172] (0xc0029ba840) (0xc0023b63c0) Create stream I0325 13:12:53.639093 6 log.go:172] (0xc0029ba840) (0xc0023b63c0) Stream added, broadcasting: 3 I0325 13:12:53.640138 6 log.go:172] (0xc0029ba840) Reply frame received for 3 I0325 13:12:53.640174 6 log.go:172] (0xc0029ba840) (0xc002d248c0) Create stream I0325 13:12:53.640186 6 log.go:172] (0xc0029ba840) (0xc002d248c0) Stream added, broadcasting: 5 I0325 13:12:53.641091 6 log.go:172] (0xc0029ba840) Reply frame received for 5 I0325 13:12:53.699103 6 log.go:172] (0xc0029ba840) Data frame received for 3 I0325 13:12:53.699145 6 log.go:172] (0xc0023b63c0) (3) Data frame handling I0325 13:12:53.699182 6 log.go:172] (0xc0023b63c0) (3) Data frame sent I0325 13:12:53.699375 6 log.go:172] (0xc0029ba840) Data frame received for 5 I0325 13:12:53.699396 6 log.go:172] (0xc002d248c0) (5) Data frame handling I0325 13:12:53.699426 6 log.go:172] (0xc0029ba840) Data frame received for 3 I0325 13:12:53.699455 6 log.go:172] (0xc0023b63c0) (3) Data frame handling I0325 13:12:53.701063 6 log.go:172] (0xc0029ba840) Data frame received for 1 I0325 13:12:53.701088 6 log.go:172] (0xc0023b6320) (1) Data frame handling I0325 13:12:53.701208 6 log.go:172] (0xc0023b6320) (1) Data frame sent I0325 13:12:53.701263 6 log.go:172] (0xc0029ba840) (0xc0023b6320) Stream removed, broadcasting: 1 I0325 13:12:53.701302 6 log.go:172] (0xc0029ba840) Go away received I0325 13:12:53.701349 6 log.go:172] (0xc0029ba840) (0xc0023b6320) Stream removed, broadcasting: 1 I0325 13:12:53.701363 6 log.go:172] (0xc0029ba840) (0xc0023b63c0) Stream removed, broadcasting: 3 I0325 13:12:53.701374 6 log.go:172] (0xc0029ba840) (0xc002d248c0) Stream removed, broadcasting: 5 Mar 25 13:12:53.701: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:12:53.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8740" for this suite. 
Mar 25 13:13:17.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:13:17.803: INFO: namespace pod-network-test-8740 deletion completed in 24.097651506s • [SLOW TEST:52.538 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:13:17.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 13:13:17.950: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"fdeeca18-59ff-4f2b-af50-fbc2cd275440", Controller:(*bool)(0xc00301735a), BlockOwnerDeletion:(*bool)(0xc00301735b)}} Mar 25 13:13:18.006: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"722bdc67-03cc-48a8-af23-c2573fb16a15", Controller:(*bool)(0xc00291eb32), BlockOwnerDeletion:(*bool)(0xc00291eb33)}} Mar 25 13:13:18.029: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d7414430-ffac-48b1-b041-dac031ccbb8e", Controller:(*bool)(0xc00291ecfa), BlockOwnerDeletion:(*bool)(0xc00291ecfb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:13:23.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4792" for this suite. 
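The three pods above own each other in a cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), and the test checks that the garbage collector still terminates all of them instead of deadlocking on blockOwnerDeletion. Building one link of such a chain, as a hypothetical helper over corev1/metav1 types:

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownedBy returns an ownerReference to another pod, with Controller and
// BlockOwnerDeletion set the way the log above shows.
func ownedBy(owner *corev1.Pod) metav1.OwnerReference {
	isController := true
	block := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         &isController,
		BlockOwnerDeletion: &block,
	}
}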
Mar 25 13:13:29.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:13:29.198: INFO: namespace gc-4792 deletion completed in 6.099892855s • [SLOW TEST:11.395 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:13:29.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0325 13:13:40.187757 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 25 13:13:40.187: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:13:40.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1149" for this suite. 
Mar 25 13:13:46.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:13:46.332: INFO: namespace gc-1149 deletion completed in 6.141787685s • [SLOW TEST:17.133 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:13:46.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 25 13:13:46.489: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:14:02.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7258" for this suite. 
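"Submitted and removed" above pairs a watch with a graceful delete: the watch must see the pod appear, pick up the termination notice (deletionTimestamp), and finally report the DELETED event. A condensed version of that flow with v1.15-style signatures; cs and the pod coordinates are hypothetical:

import (
	"fmt"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// deleteAndObserve watches a single pod, deletes it with a grace
// period, and waits for the DELETED event to arrive on the watch.
func deleteAndObserve(cs kubernetes.Interface, ns, name string) error {
	w, err := cs.CoreV1().Pods(ns).Watch(metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}
	defer w.Stop()

	grace := int64(30)
	err = cs.CoreV1().Pods(ns).Delete(name, &metav1.DeleteOptions{
		GracePeriodSeconds: &grace,
	})
	if err != nil {
		return err
	}
	for ev := range w.ResultChan() {
		fmt.Println("observed:", ev.Type)
		if ev.Type == watch.Deleted {
			return nil
		}
	}
	return fmt.Errorf("watch closed before deletion was observed")
}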
Mar 25 13:14:08.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:14:08.282: INFO: namespace pods-7258 deletion completed in 6.09863655s • [SLOW TEST:21.950 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:14:08.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0325 13:14:09.407304 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 25 13:14:09.407: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:14:09.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8082" for this suite. 
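"Not orphaning" above corresponds to background cascade deletion: removing the Deployment lets the garbage collector chase down its ReplicaSet and pods afterwards, which is why the log briefly reports 1 rs and 2 pods before they disappear. The delete call, with v1.15-style signatures and a hypothetical cs:

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cascadeDelete removes a Deployment and lets the garbage collector
// delete its ReplicaSets and Pods in the background.
func cascadeDelete(cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.AppsV1().Deployments(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}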
Mar 25 13:14:15.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:14:15.514: INFO: namespace gc-8082 deletion completed in 6.103819537s • [SLOW TEST:7.232 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:14:15.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:14:21.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7781" for this suite. 
Mar 25 13:14:27.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:14:27.322: INFO: namespace watch-7781 deletion completed in 6.201854512s
• [SLOW TEST:11.808 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:14:27.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-trbk
STEP: Creating a pod to test atomic-volume-subpath
Mar 25 13:14:27.432: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-trbk" in namespace "subpath-8150" to be "success or failure"
Mar 25 13:14:27.436: INFO: Pod "pod-subpath-test-secret-trbk": Phase="Pending", Reason="", readiness=false. Elapsed: 3.481515ms
Mar 25 13:14:29.440: INFO: Pod "pod-subpath-test-secret-trbk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007711053s
Mar 25 13:14:31.444: INFO: Pod "pod-subpath-test-secret-trbk": Phase="Running", Reason="", readiness=true. Elapsed: 4.011975276s
Mar 25 13:14:33.449: INFO: Pod "pod-subpath-test-secret-trbk": Phase="Running", Reason="", readiness=true. Elapsed: 6.016662777s
Mar 25 13:14:35.453: INFO: Pod "pod-subpath-test-secret-trbk": Phase="Running", Reason="", readiness=true. Elapsed: 8.020985018s
Mar 25 13:14:37.458: INFO: Pod "pod-subpath-test-secret-trbk": Phase="Running", Reason="", readiness=true. Elapsed: 10.025663761s
Mar 25 13:14:39.462: INFO: Pod "pod-subpath-test-secret-trbk": Phase="Running", Reason="", readiness=true. Elapsed: 12.02992857s
Mar 25 13:14:41.466: INFO: Pod "pod-subpath-test-secret-trbk": Phase="Running", Reason="", readiness=true. Elapsed: 14.033717426s
Mar 25 13:14:43.489: INFO: Pod "pod-subpath-test-secret-trbk": Phase="Running", Reason="", readiness=true. Elapsed: 16.056880635s
Mar 25 13:14:45.493: INFO: Pod "pod-subpath-test-secret-trbk": Phase="Running", Reason="", readiness=true. Elapsed: 18.060874079s
Mar 25 13:14:47.497: INFO: Pod "pod-subpath-test-secret-trbk": Phase="Running", Reason="", readiness=true. Elapsed: 20.064843293s
Mar 25 13:14:49.502: INFO: Pod "pod-subpath-test-secret-trbk": Phase="Running", Reason="", readiness=true. Elapsed: 22.069180415s
Mar 25 13:14:51.506: INFO: Pod "pod-subpath-test-secret-trbk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.073127943s
STEP: Saw pod success
Mar 25 13:14:51.506: INFO: Pod "pod-subpath-test-secret-trbk" satisfied condition "success or failure"
Mar 25 13:14:51.509: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-trbk container test-container-subpath-secret-trbk:
STEP: delete the pod
Mar 25 13:14:51.558: INFO: Waiting for pod pod-subpath-test-secret-trbk to disappear
Mar 25 13:14:51.570: INFO: Pod pod-subpath-test-secret-trbk no longer exists
STEP: Deleting pod pod-subpath-test-secret-trbk
Mar 25 13:14:51.570: INFO: Deleting pod "pod-subpath-test-secret-trbk" in namespace "subpath-8150"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:14:51.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8150" for this suite.
Mar 25 13:14:57.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:14:57.686: INFO: namespace subpath-8150 deletion completed in 6.103767376s
• [SLOW TEST:30.364 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:14:57.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-2bafa85f-5bde-4903-a3cd-c1bebbb8f6c6
STEP: Creating configMap with name cm-test-opt-upd-91f5e8f7-b0dd-4332-8fc9-3b5b2bed166b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-2bafa85f-5bde-4903-a3cd-c1bebbb8f6c6
STEP: Updating configmap cm-test-opt-upd-91f5e8f7-b0dd-4332-8fc9-3b5b2bed166b
STEP: Creating configMap with name cm-test-opt-create-8c2eabe6-75f3-40c3-ab57-64db417e6fcc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:16:34.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9018" for this suite.
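The pod in this test mounts its configmaps as optional volumes, which is why it can keep running while cm-test-opt-del is deleted and before cm-test-opt-create exists. A sketch of the key spec detail, one volume shown; the pod name, image, paths, and polling command are illustrative assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        optional := true
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "watcher",
                    Image: "busybox",
                    // Poll the mounted file; with an optional volume the pod keeps
                    // running whether or not the configmap currently exists.
                    Command:      []string{"sh", "-c", "while true; do cat /etc/cm/data-1; sleep 5; done"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "cm",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
                            Optional:             &optional,
                        },
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }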
Mar 25 13:16:56.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:16:56.381: INFO: namespace configmap-9018 deletion completed in 22.114355948s
• [SLOW TEST:118.695 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:16:56.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Mar 25 13:16:56.502: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6176,SelfLink:/api/v1/namespaces/watch-6176/configmaps/e2e-watch-test-resource-version,UID:9228c9b3-fe7f-4356-a25c-c47659b7ce79,ResourceVersion:1775689,Generation:0,CreationTimestamp:2020-03-25 13:16:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 25 13:16:56.502: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6176,SelfLink:/api/v1/namespaces/watch-6176/configmaps/e2e-watch-test-resource-version,UID:9228c9b3-fe7f-4356-a25c-c47659b7ce79,ResourceVersion:1775690,Generation:0,CreationTimestamp:2020-03-25 13:16:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:16:56.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6176" for this suite.
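Starting the watch at the resourceVersion returned by the first update is what filters out the creation and the first modification; only the second MODIFIED and the DELETED events arrive, exactly the two "Got :" lines above. A sketch of that call with v1.15-era client-go signatures; the field selector and the client wiring are illustrative assumptions:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchFrom replays only events newer than rv for the named configmap.
    func watchFrom(cs kubernetes.Interface, ns, name, rv string) error {
        w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
            FieldSelector:   "metadata.name=" + name,
            ResourceVersion: rv, // e.g. the ResourceVersion returned by the first update
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
        return nil
    }

    func main() {} // client construction omitted; see the earlier sketch for clientcmd setup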
Mar 25 13:17:02.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:17:02.621: INFO: namespace watch-6176 deletion completed in 6.115540659s
• [SLOW TEST:6.240 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:17:02.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Mar 25 13:17:02.701: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7301,SelfLink:/api/v1/namespaces/watch-7301/configmaps/e2e-watch-test-configmap-a,UID:7f4340f2-1900-4875-8bc2-cc9bac941285,ResourceVersion:1775711,Generation:0,CreationTimestamp:2020-03-25 13:17:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 25 13:17:02.701: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7301,SelfLink:/api/v1/namespaces/watch-7301/configmaps/e2e-watch-test-configmap-a,UID:7f4340f2-1900-4875-8bc2-cc9bac941285,ResourceVersion:1775711,Generation:0,CreationTimestamp:2020-03-25 13:17:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Mar 25 13:17:12.709: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7301,SelfLink:/api/v1/namespaces/watch-7301/configmaps/e2e-watch-test-configmap-a,UID:7f4340f2-1900-4875-8bc2-cc9bac941285,ResourceVersion:1775732,Generation:0,CreationTimestamp:2020-03-25 13:17:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 25 13:17:12.709: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7301,SelfLink:/api/v1/namespaces/watch-7301/configmaps/e2e-watch-test-configmap-a,UID:7f4340f2-1900-4875-8bc2-cc9bac941285,ResourceVersion:1775732,Generation:0,CreationTimestamp:2020-03-25 13:17:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Mar 25 13:17:22.717: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7301,SelfLink:/api/v1/namespaces/watch-7301/configmaps/e2e-watch-test-configmap-a,UID:7f4340f2-1900-4875-8bc2-cc9bac941285,ResourceVersion:1775753,Generation:0,CreationTimestamp:2020-03-25 13:17:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 25 13:17:22.717: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7301,SelfLink:/api/v1/namespaces/watch-7301/configmaps/e2e-watch-test-configmap-a,UID:7f4340f2-1900-4875-8bc2-cc9bac941285,ResourceVersion:1775753,Generation:0,CreationTimestamp:2020-03-25 13:17:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Mar 25 13:17:32.724: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7301,SelfLink:/api/v1/namespaces/watch-7301/configmaps/e2e-watch-test-configmap-a,UID:7f4340f2-1900-4875-8bc2-cc9bac941285,ResourceVersion:1775774,Generation:0,CreationTimestamp:2020-03-25 13:17:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 25 13:17:32.725: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7301,SelfLink:/api/v1/namespaces/watch-7301/configmaps/e2e-watch-test-configmap-a,UID:7f4340f2-1900-4875-8bc2-cc9bac941285,ResourceVersion:1775774,Generation:0,CreationTimestamp:2020-03-25 13:17:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Mar 25 13:17:42.732: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7301,SelfLink:/api/v1/namespaces/watch-7301/configmaps/e2e-watch-test-configmap-b,UID:2b3dd7e3-c2d0-4cce-93df-5db76d66097a,ResourceVersion:1775794,Generation:0,CreationTimestamp:2020-03-25 13:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 25 13:17:42.732: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7301,SelfLink:/api/v1/namespaces/watch-7301/configmaps/e2e-watch-test-configmap-b,UID:2b3dd7e3-c2d0-4cce-93df-5db76d66097a,ResourceVersion:1775794,Generation:0,CreationTimestamp:2020-03-25 13:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Mar 25 13:17:52.737: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7301,SelfLink:/api/v1/namespaces/watch-7301/configmaps/e2e-watch-test-configmap-b,UID:2b3dd7e3-c2d0-4cce-93df-5db76d66097a,ResourceVersion:1775814,Generation:0,CreationTimestamp:2020-03-25 13:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 25 13:17:52.737: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7301,SelfLink:/api/v1/namespaces/watch-7301/configmaps/e2e-watch-test-configmap-b,UID:2b3dd7e3-c2d0-4cce-93df-5db76d66097a,ResourceVersion:1775814,Generation:0,CreationTimestamp:2020-03-25 13:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:18:02.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7301" for this suite.
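The three watchers differ only in their label selectors, which is why watcher A and the A-or-B watcher each log every event on configmap A twice above, while watcher B stays silent until the B configmap appears. A sketch of the selector strings, again with v1.15-era client-go signatures; the namespace and wiring are illustrative:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
    )

    // openWatches returns watches that see label A only, label B only, and A or B.
    func openWatches(cs kubernetes.Interface, ns string) (a, b, ab watch.Interface, err error) {
        sel := func(s string) metav1.ListOptions { return metav1.ListOptions{LabelSelector: s} }
        if a, err = cs.CoreV1().ConfigMaps(ns).Watch(sel("watch-this-configmap=multiple-watchers-A")); err != nil {
            return
        }
        if b, err = cs.CoreV1().ConfigMaps(ns).Watch(sel("watch-this-configmap=multiple-watchers-B")); err != nil {
            return
        }
        // A set-based selector matches either label value.
        ab, err = cs.CoreV1().ConfigMaps(ns).Watch(sel("watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)"))
        return
    }

    func main() {} // client construction omitted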
Mar 25 13:18:08.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:18:08.833: INFO: namespace watch-7301 deletion completed in 6.090163561s
• [SLOW TEST:66.211 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:18:08.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-e3ec2083-b693-4457-9a72-cbabb64b857e in namespace container-probe-321
Mar 25 13:18:12.928: INFO: Started pod test-webserver-e3ec2083-b693-4457-9a72-cbabb64b857e in namespace container-probe-321
STEP: checking the pod's current state and verifying that restartCount is present
Mar 25 13:18:12.932: INFO: Initial restart count of pod test-webserver-e3ec2083-b693-4457-9a72-cbabb64b857e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:22:13.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-321" for this suite.
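Note that this test asserts the negative case: with a probe that keeps succeeding, the restart count recorded above must still be 0 roughly four minutes later, which is why the test occupies the whole 13:18-13:22 window. A sketch of a pod with an HTTP liveness probe, using the v1.15-era corev1.Handler field (renamed ProbeHandler in later APIs); the image tag, path, port, and thresholds are illustrative assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "webserver",
                    Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0", // illustrative tag
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
                        },
                        InitialDelaySeconds: 15,
                        PeriodSeconds:       10,
                        // The kubelet restarts the container only after this many
                        // consecutive probe failures, so a healthy server keeps
                        // restartCount at 0.
                        FailureThreshold: 3,
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }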
Mar 25 13:22:19.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:22:19.734: INFO: namespace container-probe-321 deletion completed in 6.111891338s
• [SLOW TEST:250.901 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:22:19.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 25 13:22:19.791: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1dea13a5-82da-435a-abec-91ff81894a24" in namespace "downward-api-2895" to be "success or failure"
Mar 25 13:22:19.798: INFO: Pod "downwardapi-volume-1dea13a5-82da-435a-abec-91ff81894a24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38108ms
Mar 25 13:22:21.802: INFO: Pod "downwardapi-volume-1dea13a5-82da-435a-abec-91ff81894a24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010430392s
Mar 25 13:22:23.806: INFO: Pod "downwardapi-volume-1dea13a5-82da-435a-abec-91ff81894a24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014786963s
STEP: Saw pod success
Mar 25 13:22:23.806: INFO: Pod "downwardapi-volume-1dea13a5-82da-435a-abec-91ff81894a24" satisfied condition "success or failure"
Mar 25 13:22:23.809: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1dea13a5-82da-435a-abec-91ff81894a24 container client-container:
STEP: delete the pod
Mar 25 13:22:23.850: INFO: Waiting for pod downwardapi-volume-1dea13a5-82da-435a-abec-91ff81894a24 to disappear
Mar 25 13:22:23.878: INFO: Pod downwardapi-volume-1dea13a5-82da-435a-abec-91ff81894a24 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:22:23.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2895" for this suite.
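The spec detail under test here is DownwardAPIVolumeSource.DefaultMode, the permission bits applied to every file the volume projects (0644 when unset). A sketch of such a volume; the mode value, file path, and volume name are illustrative assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // octal file mode applied to each projected file
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    DefaultMode: &mode,
                    Items: []corev1.DownwardAPIVolumeFile{{
                        // Expose the pod's own name as a file in the volume.
                        Path:     "podname",
                        FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
                    }},
                },
            },
        }
        b, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(b))
    }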
Mar 25 13:22:29.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:22:29.988: INFO: namespace downward-api-2895 deletion completed in 6.107505701s
• [SLOW TEST:10.254 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:22:29.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Mar 25 13:22:30.075: INFO: namespace kubectl-88
Mar 25 13:22:30.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-88'
Mar 25 13:22:32.750: INFO: stderr: ""
Mar 25 13:22:32.750: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Mar 25 13:22:33.765: INFO: Selector matched 1 pods for map[app:redis]
Mar 25 13:22:33.765: INFO: Found 0 / 1
Mar 25 13:22:34.755: INFO: Selector matched 1 pods for map[app:redis]
Mar 25 13:22:34.755: INFO: Found 0 / 1
Mar 25 13:22:35.754: INFO: Selector matched 1 pods for map[app:redis]
Mar 25 13:22:35.754: INFO: Found 0 / 1
Mar 25 13:22:36.755: INFO: Selector matched 1 pods for map[app:redis]
Mar 25 13:22:36.755: INFO: Found 1 / 1
Mar 25 13:22:36.755: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Mar 25 13:22:36.758: INFO: Selector matched 1 pods for map[app:redis]
Mar 25 13:22:36.758: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 25 13:22:36.758: INFO: wait on redis-master startup in kubectl-88
Mar 25 13:22:36.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4p7xt redis-master --namespace=kubectl-88'
Mar 25 13:22:36.868: INFO: stderr: ""
Mar 25 13:22:36.868: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 25 Mar 13:22:35.333 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Mar 13:22:35.333 # Server started, Redis version 3.2.12\n1:M 25 Mar 13:22:35.334 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Mar 13:22:35.334 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Mar 25 13:22:36.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-88'
Mar 25 13:22:36.993: INFO: stderr: ""
Mar 25 13:22:36.994: INFO: stdout: "service/rm2 exposed\n"
Mar 25 13:22:37.023: INFO: Service rm2 in namespace kubectl-88 found.
STEP: exposing service
Mar 25 13:22:39.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-88'
Mar 25 13:22:39.191: INFO: stderr: ""
Mar 25 13:22:39.191: INFO: stdout: "service/rm3 exposed\n"
Mar 25 13:22:39.203: INFO: Service rm3 in namespace kubectl-88 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:22:41.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-88" for this suite.
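Each kubectl expose call above is shorthand for creating a Service whose selector matches the exposed object's pod labels; exposing a service (rm3) copies the target service's selector, so both rm2 and rm3 route to the redis pods on 6379. Not the test's own code, but an equivalent client-go sketch of the first expose (v1.15-era Create signature; the label map and namespace handling are illustrative assumptions):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    // exposeRC builds the Service that `kubectl expose rc redis-master
    // --name=rm2 --port=1234 --target-port=6379` would create.
    func exposeRC(cs kubernetes.Interface, ns string) (*corev1.Service, error) {
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"app": "redis"}, // the RC's pod labels
                Ports: []corev1.ServicePort{{
                    Port:       1234,
                    TargetPort: intstr.FromInt(6379),
                }},
            },
        }
        return cs.CoreV1().Services(ns).Create(svc)
    }

    func main() {} // client construction omitted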
Mar 25 13:23:03.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:23:03.310: INFO: namespace kubectl-88 deletion completed in 22.096581942s
• [SLOW TEST:33.322 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:23:03.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:23:03.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7878" for this suite.
Mar 25 13:23:09.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:23:09.562: INFO: namespace kubelet-test-7878 deletion completed in 6.098865492s
• [SLOW TEST:6.252 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:23:09.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Mar 25 13:23:10.135: INFO: Pod name wrapped-volume-race-2154efec-5249-4626-af9b-589bc32c48d1: Found 0 pods out of 5
Mar 25 13:23:15.166: INFO: Pod name wrapped-volume-race-2154efec-5249-4626-af9b-589bc32c48d1: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2154efec-5249-4626-af9b-589bc32c48d1 in namespace emptydir-wrapper-2281, will wait for the garbage collector to delete the pods
Mar 25 13:23:29.261: INFO: Deleting ReplicationController wrapped-volume-race-2154efec-5249-4626-af9b-589bc32c48d1 took: 7.409842ms
Mar 25 13:23:29.562: INFO: Terminating ReplicationController wrapped-volume-race-2154efec-5249-4626-af9b-589bc32c48d1 pods took: 300.323303ms
STEP: Creating RC which spawns configmap-volume pods
Mar 25 13:24:13.297: INFO: Pod name wrapped-volume-race-d47deddd-0c80-432a-842a-edc3023d17d3: Found 0 pods out of 5
Mar 25 13:24:18.308: INFO: Pod name wrapped-volume-race-d47deddd-0c80-432a-842a-edc3023d17d3: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d47deddd-0c80-432a-842a-edc3023d17d3 in namespace emptydir-wrapper-2281, will wait for the garbage collector to delete the pods
Mar 25 13:24:32.501: INFO: Deleting ReplicationController wrapped-volume-race-d47deddd-0c80-432a-842a-edc3023d17d3 took: 7.67487ms
Mar 25 13:24:32.801: INFO: Terminating ReplicationController wrapped-volume-race-d47deddd-0c80-432a-842a-edc3023d17d3 pods took: 300.256275ms
STEP: Creating RC which spawns configmap-volume pods
Mar 25 13:25:12.771: INFO: Pod name wrapped-volume-race-2deba0c3-ce12-4593-9188-25b22657d18b: Found 0 pods out of 5
Mar 25 13:25:17.800: INFO: Pod name wrapped-volume-race-2deba0c3-ce12-4593-9188-25b22657d18b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2deba0c3-ce12-4593-9188-25b22657d18b in namespace emptydir-wrapper-2281, will wait for the garbage collector to delete the pods
Mar 25 13:25:31.881: INFO: Deleting ReplicationController wrapped-volume-race-2deba0c3-ce12-4593-9188-25b22657d18b took: 7.986897ms
Mar 25 13:25:32.181: INFO: Terminating ReplicationController wrapped-volume-race-2deba0c3-ce12-4593-9188-25b22657d18b pods took: 300.250678ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:26:13.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2281" for this suite.
Mar 25 13:26:21.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:26:21.944: INFO: namespace emptydir-wrapper-2281 deletion completed in 8.098475503s
• [SLOW TEST:192.381 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:26:21.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-2e0913b9-0386-4b68-8713-2b627b2a67f1
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:26:21.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5446" for this suite.
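This is a pure validation test: the API server must reject a Secret whose data map contains an empty key, so the test passes precisely when Create returns an error. A sketch of the rejected request (v1.15-era Create signature; the secret name and value are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func createEmptyKeySecret(cs kubernetes.Interface, ns string) {
        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-demo"},
            Data:       map[string][]byte{"": []byte("value-1")}, // "" is not a valid data key
        }
        if _, err := cs.CoreV1().Secrets(ns).Create(secret); err != nil {
            fmt.Println("rejected as expected:", err)
        }
    }

    func main() {} // client construction omitted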
Mar 25 13:26:28.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:26:28.097: INFO: namespace secrets-5446 deletion completed in 6.089331264s
• [SLOW TEST:6.152 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:26:28.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 25 13:26:28.158: INFO: Waiting up to 5m0s for pod "pod-770d557c-f12f-40be-9702-102db472bae5" in namespace "emptydir-5326" to be "success or failure"
Mar 25 13:26:28.174: INFO: Pod "pod-770d557c-f12f-40be-9702-102db472bae5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.7457ms
Mar 25 13:26:30.179: INFO: Pod "pod-770d557c-f12f-40be-9702-102db472bae5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020118254s
Mar 25 13:26:32.183: INFO: Pod "pod-770d557c-f12f-40be-9702-102db472bae5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024432984s
STEP: Saw pod success
Mar 25 13:26:32.183: INFO: Pod "pod-770d557c-f12f-40be-9702-102db472bae5" satisfied condition "success or failure"
Mar 25 13:26:32.186: INFO: Trying to get logs from node iruya-worker2 pod pod-770d557c-f12f-40be-9702-102db472bae5 container test-container:
STEP: delete the pod
Mar 25 13:26:32.231: INFO: Waiting for pod pod-770d557c-f12f-40be-9702-102db472bae5 to disappear
Mar 25 13:26:32.270: INFO: Pod pod-770d557c-f12f-40be-9702-102db472bae5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:26:32.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5326" for this suite.
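The (root,0644,default) triplet in the test name encodes who writes the file, the mode it is created with, and the emptyDir medium (node disk rather than tmpfs). A sketch of the volume and a container that checks the mode back through its log; the image and shell command are illustrative stand-ins for the suite's mounttest image:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox",
                    // Write a file with mode 0644, then print the mode for the
                    // test to verify in the container log.
                    Command:      []string{"sh", "-c", "echo hi > /test/f && chmod 0644 /test/f && stat -c '%a' /test/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    // An empty Medium selects the default, disk-backed emptyDir.
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                RestartPolicy: corev1.RestartPolicyNever,
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }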
Mar 25 13:26:38.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:26:38.381: INFO: namespace emptydir-5326 deletion completed in 6.107001197s
• [SLOW TEST:10.283 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:26:38.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-42a588bb-0520-4e2c-b6eb-c0a2d581c2ef
STEP: Creating a pod to test consume secrets
Mar 25 13:26:38.439: INFO: Waiting up to 5m0s for pod "pod-secrets-5ff028b1-998f-4fd4-ac9f-5415cbdeb243" in namespace "secrets-164" to be "success or failure"
Mar 25 13:26:38.454: INFO: Pod "pod-secrets-5ff028b1-998f-4fd4-ac9f-5415cbdeb243": Phase="Pending", Reason="", readiness=false. Elapsed: 14.809675ms
Mar 25 13:26:40.459: INFO: Pod "pod-secrets-5ff028b1-998f-4fd4-ac9f-5415cbdeb243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019118095s
Mar 25 13:26:42.463: INFO: Pod "pod-secrets-5ff028b1-998f-4fd4-ac9f-5415cbdeb243": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023573345s
STEP: Saw pod success
Mar 25 13:26:42.463: INFO: Pod "pod-secrets-5ff028b1-998f-4fd4-ac9f-5415cbdeb243" satisfied condition "success or failure"
Mar 25 13:26:42.466: INFO: Trying to get logs from node iruya-worker pod pod-secrets-5ff028b1-998f-4fd4-ac9f-5415cbdeb243 container secret-volume-test:
STEP: delete the pod
Mar 25 13:26:42.481: INFO: Waiting for pod pod-secrets-5ff028b1-998f-4fd4-ac9f-5415cbdeb243 to disappear
Mar 25 13:26:42.496: INFO: Pod pod-secrets-5ff028b1-998f-4fd4-ac9f-5415cbdeb243 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:26:42.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-164" for this suite.
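The "multiple volumes" variant mounts the same Secret through two separate volume entries, confirming each mount gets its own independent projection. A sketch of the two-volume layout; the names, image, and command are illustrative assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The same secret is projected twice, once per volume.
        src := corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{SecretName: "secret-test-demo"},
        }
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-volume-1", MountPath: "/etc/secret-volume-1"},
                        {Name: "secret-volume-2", MountPath: "/etc/secret-volume-2"},
                    },
                }},
                Volumes: []corev1.Volume{
                    {Name: "secret-volume-1", VolumeSource: src},
                    {Name: "secret-volume-2", VolumeSource: src},
                },
                RestartPolicy: corev1.RestartPolicyNever,
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }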
Mar 25 13:26:48.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:26:48.615: INFO: namespace secrets-164 deletion completed in 6.115528309s
• [SLOW TEST:10.234 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:26:48.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 25 13:26:48.734: INFO: Waiting up to 5m0s for pod "pod-14648b8a-7cad-40f2-9470-2c3a80174b2e" in namespace "emptydir-4814" to be "success or failure"
Mar 25 13:26:48.750: INFO: Pod "pod-14648b8a-7cad-40f2-9470-2c3a80174b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.999478ms
Mar 25 13:26:50.754: INFO: Pod "pod-14648b8a-7cad-40f2-9470-2c3a80174b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019871733s
Mar 25 13:26:52.758: INFO: Pod "pod-14648b8a-7cad-40f2-9470-2c3a80174b2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024372262s
STEP: Saw pod success
Mar 25 13:26:52.758: INFO: Pod "pod-14648b8a-7cad-40f2-9470-2c3a80174b2e" satisfied condition "success or failure"
Mar 25 13:26:52.761: INFO: Trying to get logs from node iruya-worker2 pod pod-14648b8a-7cad-40f2-9470-2c3a80174b2e container test-container:
STEP: delete the pod
Mar 25 13:26:52.781: INFO: Waiting for pod pod-14648b8a-7cad-40f2-9470-2c3a80174b2e to disappear
Mar 25 13:26:52.786: INFO: Pod pod-14648b8a-7cad-40f2-9470-2c3a80174b2e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:26:52.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4814" for this suite.
Mar 25 13:26:58.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:26:58.879: INFO: namespace emptydir-4814 deletion completed in 6.090344273s
• [SLOW TEST:10.264 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:26:58.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-e88c88ec-f77a-44cf-b07b-326c4d80d3fe
STEP: Creating a pod to test consume secrets
Mar 25 13:26:58.954: INFO: Waiting up to 5m0s for pod "pod-secrets-dfd2774e-8ce1-4e49-acb4-b00ce59effb8" in namespace "secrets-2290" to be "success or failure"
Mar 25 13:26:58.959: INFO: Pod "pod-secrets-dfd2774e-8ce1-4e49-acb4-b00ce59effb8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.065612ms
Mar 25 13:27:00.963: INFO: Pod "pod-secrets-dfd2774e-8ce1-4e49-acb4-b00ce59effb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008740539s
Mar 25 13:27:02.967: INFO: Pod "pod-secrets-dfd2774e-8ce1-4e49-acb4-b00ce59effb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013252047s
STEP: Saw pod success
Mar 25 13:27:02.967: INFO: Pod "pod-secrets-dfd2774e-8ce1-4e49-acb4-b00ce59effb8" satisfied condition "success or failure"
Mar 25 13:27:02.971: INFO: Trying to get logs from node iruya-worker pod pod-secrets-dfd2774e-8ce1-4e49-acb4-b00ce59effb8 container secret-volume-test:
STEP: delete the pod
Mar 25 13:27:02.998: INFO: Waiting for pod pod-secrets-dfd2774e-8ce1-4e49-acb4-b00ce59effb8 to disappear
Mar 25 13:27:03.002: INFO: Pod pod-secrets-dfd2774e-8ce1-4e49-acb4-b00ce59effb8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:27:03.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2290" for this suite.
Mar 25 13:27:09.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:27:09.122: INFO: namespace secrets-2290 deletion completed in 6.116562552s • [SLOW TEST:10.242 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:27:09.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 13:27:09.173: INFO: Creating deployment "test-recreate-deployment" Mar 25 13:27:09.181: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 25 13:27:09.213: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 25 13:27:11.221: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 25 13:27:11.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720739629, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720739629, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720739629, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720739629, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 13:27:13.227: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 25 13:27:13.233: INFO: Updating deployment test-recreate-deployment Mar 25 13:27:13.233: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 25 13:27:13.512: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7860,SelfLink:/apis/apps/v1/namespaces/deployment-7860/deployments/test-recreate-deployment,UID:3af7dade-86ce-48ac-b284-120c6cbecabf,ResourceVersion:1778003,Generation:2,CreationTimestamp:2020-03-25 13:27:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-25 13:27:13 +0000 UTC 2020-03-25 13:27:13 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-25 13:27:13 +0000 UTC 2020-03-25 13:27:09 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 25 13:27:13.516: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-7860,SelfLink:/apis/apps/v1/namespaces/deployment-7860/replicasets/test-recreate-deployment-5c8c9cc69d,UID:d671661b-76e8-4bcc-833d-cae4050513f2,ResourceVersion:1778001,Generation:1,CreationTimestamp:2020-03-25 13:27:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3af7dade-86ce-48ac-b284-120c6cbecabf 0xc00307b2e7 0xc00307b2e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 25 13:27:13.516: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 25 13:27:13.516: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-7860,SelfLink:/apis/apps/v1/namespaces/deployment-7860/replicasets/test-recreate-deployment-6df85df6b9,UID:7634a0dc-6971-40ff-b475-9e038d73b1b3,ResourceVersion:1777992,Generation:2,CreationTimestamp:2020-03-25 13:27:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3af7dade-86ce-48ac-b284-120c6cbecabf 0xc00307b3b7 0xc00307b3b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 25 13:27:13.598: INFO: Pod "test-recreate-deployment-5c8c9cc69d-xq65l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-xq65l,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-7860,SelfLink:/api/v1/namespaces/deployment-7860/pods/test-recreate-deployment-5c8c9cc69d-xq65l,UID:0128b232-73cf-48b2-9988-4384c7ce9c33,ResourceVersion:1778004,Generation:0,CreationTimestamp:2020-03-25 13:27:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d d671661b-76e8-4bcc-833d-cae4050513f2 0xc00307bc87 0xc00307bc88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-npzgw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-npzgw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-npzgw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00307bd00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00307bd20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:27:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:27:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:27:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:27:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:27:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:27:13.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7860" for this suite. 
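
Note: the ReplicaSet dumps above show Recreate semantics in action: the old redis ReplicaSet (revision 1) is scaled to Replicas:*0 before the new nginx ReplicaSet (revision 2) comes up, which is why the new pod is still Pending/ContainerCreating when the Deployment is inspected. A minimal hand-driven sketch of the same rollout (the deployment name, label, and container name below are illustrative, not the suite's exact fixtures):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test-recreate
  spec:
    replicas: 1
    strategy:
      type: Recreate            # scale the old RS to 0 before creating the new one
    selector:
      matchLabels: {app: test-recreate}
    template:
      metadata:
        labels: {app: test-recreate}
      spec:
        containers:
        - name: main
          image: gcr.io/kubernetes-e2e-test-images/redis:1.0
  EOF
  # Trigger a rollout; with Recreate there is no overlap between old and new pods.
  kubectl set image deployment/test-recreate main=docker.io/library/nginx:1.14-alpine
  kubectl get rs                # the redis RS drops to 0 before the nginx RS scales up
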
Mar 25 13:27:19.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:27:19.748: INFO: namespace deployment-7860 deletion completed in 6.147370556s • [SLOW TEST:10.626 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:27:19.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-a6b7814f-ca89-4d0b-a63d-68033a3df10f STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-a6b7814f-ca89-4d0b-a63d-68033a3df10f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:28:48.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8053" for this suite. 
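
Note: the long "waiting to observe update in volume" phase above (roughly 13:27:19 to 13:28:48) is expected: the kubelet propagates ConfigMap changes into projected volumes asynchronously, bounded by its sync period and cache TTL. A sketch of the same scenario by hand (the names cm-demo/cm-watcher and the nginx image are illustrative):

  kubectl create configmap cm-demo --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-watcher
  spec:
    containers:
    - name: watcher
      image: docker.io/library/nginx:1.14-alpine
      volumeMounts:
      - {name: cfg, mountPath: /etc/cfg}
    volumes:
    - name: cfg
      projected:                # projected volume, as in the test above
        sources:
        - configMap: {name: cm-demo}
  EOF
  # Replace the ConfigMap, then wait for the kubelet to refresh the volume.
  kubectl create configmap cm-demo --from-literal=data-1=value-2 --dry-run -o yaml | kubectl replace -f -
  kubectl exec cm-watcher -- cat /etc/cfg/data-1    # eventually prints value-2
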
Mar 25 13:29:10.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:29:10.368: INFO: namespace projected-8053 deletion completed in 22.108117107s • [SLOW TEST:110.619 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:29:10.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8844 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-8844 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8844 Mar 25 13:29:10.445: INFO: Found 0 stateful pods, waiting for 1 Mar 25 13:29:20.450: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 25 13:29:20.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8844 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 25 13:29:20.720: INFO: stderr: "I0325 13:29:20.582057 452 log.go:172] (0xc00099c580) (0xc0005b2960) Create stream\nI0325 13:29:20.582119 452 log.go:172] (0xc00099c580) (0xc0005b2960) Stream added, broadcasting: 1\nI0325 13:29:20.584429 452 log.go:172] (0xc00099c580) Reply frame received for 1\nI0325 13:29:20.584476 452 log.go:172] (0xc00099c580) (0xc0005ba000) Create stream\nI0325 13:29:20.584493 452 log.go:172] (0xc00099c580) (0xc0005ba000) Stream added, broadcasting: 3\nI0325 13:29:20.585751 452 log.go:172] (0xc00099c580) Reply frame received for 3\nI0325 13:29:20.585786 452 log.go:172] (0xc00099c580) (0xc0005b2a00) Create stream\nI0325 13:29:20.585806 452 log.go:172] (0xc00099c580) (0xc0005b2a00) Stream added, broadcasting: 5\nI0325 13:29:20.586908 452 log.go:172] (0xc00099c580) Reply frame received for 5\nI0325 13:29:20.669368 452 log.go:172] (0xc00099c580) Data frame received for 5\nI0325 13:29:20.669411 452 log.go:172] (0xc0005b2a00) (5) Data frame handling\nI0325 13:29:20.669432 452 log.go:172] (0xc0005b2a00) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0325 13:29:20.712995 452 log.go:172] 
(0xc00099c580) Data frame received for 3\nI0325 13:29:20.713048 452 log.go:172] (0xc0005ba000) (3) Data frame handling\nI0325 13:29:20.713068 452 log.go:172] (0xc0005ba000) (3) Data frame sent\nI0325 13:29:20.713351 452 log.go:172] (0xc00099c580) Data frame received for 3\nI0325 13:29:20.713381 452 log.go:172] (0xc0005ba000) (3) Data frame handling\nI0325 13:29:20.713561 452 log.go:172] (0xc00099c580) Data frame received for 5\nI0325 13:29:20.713589 452 log.go:172] (0xc0005b2a00) (5) Data frame handling\nI0325 13:29:20.715475 452 log.go:172] (0xc00099c580) Data frame received for 1\nI0325 13:29:20.715488 452 log.go:172] (0xc0005b2960) (1) Data frame handling\nI0325 13:29:20.715505 452 log.go:172] (0xc0005b2960) (1) Data frame sent\nI0325 13:29:20.715598 452 log.go:172] (0xc00099c580) (0xc0005b2960) Stream removed, broadcasting: 1\nI0325 13:29:20.716124 452 log.go:172] (0xc00099c580) Go away received\nI0325 13:29:20.716200 452 log.go:172] (0xc00099c580) (0xc0005b2960) Stream removed, broadcasting: 1\nI0325 13:29:20.716248 452 log.go:172] (0xc00099c580) (0xc0005ba000) Stream removed, broadcasting: 3\nI0325 13:29:20.716302 452 log.go:172] (0xc00099c580) (0xc0005b2a00) Stream removed, broadcasting: 5\n" Mar 25 13:29:20.721: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 25 13:29:20.721: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 25 13:29:20.747: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 25 13:29:30.789: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 25 13:29:30.789: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 13:29:30.806: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 13:29:30.806: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC }] Mar 25 13:29:30.806: INFO: Mar 25 13:29:30.807: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 25 13:29:31.811: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992217742s Mar 25 13:29:32.815: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987692581s Mar 25 13:29:33.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984176266s Mar 25 13:29:34.825: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979243695s Mar 25 13:29:35.830: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973364498s Mar 25 13:29:36.834: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.96859595s Mar 25 13:29:37.839: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964805361s Mar 25 13:29:38.879: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.959843523s Mar 25 13:29:39.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 919.712299ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8844 Mar 25 13:29:40.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8844 
ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 25 13:29:41.111: INFO: stderr: "I0325 13:29:41.020075 472 log.go:172] (0xc000a7a630) (0xc000688aa0) Create stream\nI0325 13:29:41.020133 472 log.go:172] (0xc000a7a630) (0xc000688aa0) Stream added, broadcasting: 1\nI0325 13:29:41.022944 472 log.go:172] (0xc000a7a630) Reply frame received for 1\nI0325 13:29:41.023033 472 log.go:172] (0xc000a7a630) (0xc000948000) Create stream\nI0325 13:29:41.023063 472 log.go:172] (0xc000a7a630) (0xc000948000) Stream added, broadcasting: 3\nI0325 13:29:41.025002 472 log.go:172] (0xc000a7a630) Reply frame received for 3\nI0325 13:29:41.025033 472 log.go:172] (0xc000a7a630) (0xc0009480a0) Create stream\nI0325 13:29:41.025043 472 log.go:172] (0xc000a7a630) (0xc0009480a0) Stream added, broadcasting: 5\nI0325 13:29:41.026562 472 log.go:172] (0xc000a7a630) Reply frame received for 5\nI0325 13:29:41.105251 472 log.go:172] (0xc000a7a630) Data frame received for 3\nI0325 13:29:41.105423 472 log.go:172] (0xc000948000) (3) Data frame handling\nI0325 13:29:41.105453 472 log.go:172] (0xc000a7a630) Data frame received for 5\nI0325 13:29:41.105500 472 log.go:172] (0xc0009480a0) (5) Data frame handling\nI0325 13:29:41.105526 472 log.go:172] (0xc0009480a0) (5) Data frame sent\nI0325 13:29:41.105543 472 log.go:172] (0xc000a7a630) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0325 13:29:41.105580 472 log.go:172] (0xc0009480a0) (5) Data frame handling\nI0325 13:29:41.105676 472 log.go:172] (0xc000948000) (3) Data frame sent\nI0325 13:29:41.105704 472 log.go:172] (0xc000a7a630) Data frame received for 3\nI0325 13:29:41.105720 472 log.go:172] (0xc000948000) (3) Data frame handling\nI0325 13:29:41.107153 472 log.go:172] (0xc000a7a630) Data frame received for 1\nI0325 13:29:41.107180 472 log.go:172] (0xc000688aa0) (1) Data frame handling\nI0325 13:29:41.107209 472 log.go:172] (0xc000688aa0) (1) Data frame sent\nI0325 13:29:41.107231 472 log.go:172] (0xc000a7a630) (0xc000688aa0) Stream removed, broadcasting: 1\nI0325 13:29:41.107293 472 log.go:172] (0xc000a7a630) Go away received\nI0325 13:29:41.107540 472 log.go:172] (0xc000a7a630) (0xc000688aa0) Stream removed, broadcasting: 1\nI0325 13:29:41.107554 472 log.go:172] (0xc000a7a630) (0xc000948000) Stream removed, broadcasting: 3\nI0325 13:29:41.107564 472 log.go:172] (0xc000a7a630) (0xc0009480a0) Stream removed, broadcasting: 5\n" Mar 25 13:29:41.111: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 25 13:29:41.111: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 25 13:29:41.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8844 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 25 13:29:41.304: INFO: stderr: "I0325 13:29:41.234150 492 log.go:172] (0xc000a60420) (0xc00028eaa0) Create stream\nI0325 13:29:41.234201 492 log.go:172] (0xc000a60420) (0xc00028eaa0) Stream added, broadcasting: 1\nI0325 13:29:41.237989 492 log.go:172] (0xc000a60420) Reply frame received for 1\nI0325 13:29:41.238040 492 log.go:172] (0xc000a60420) (0xc000617ea0) Create stream\nI0325 13:29:41.238054 492 log.go:172] (0xc000a60420) (0xc000617ea0) Stream added, broadcasting: 3\nI0325 13:29:41.239074 492 log.go:172] (0xc000a60420) Reply frame received for 3\nI0325 13:29:41.239112 492 log.go:172] (0xc000a60420) (0xc00028e320) Create stream\nI0325 
13:29:41.239124 492 log.go:172] (0xc000a60420) (0xc00028e320) Stream added, broadcasting: 5\nI0325 13:29:41.240107 492 log.go:172] (0xc000a60420) Reply frame received for 5\nI0325 13:29:41.297772 492 log.go:172] (0xc000a60420) Data frame received for 5\nI0325 13:29:41.297828 492 log.go:172] (0xc00028e320) (5) Data frame handling\nI0325 13:29:41.297851 492 log.go:172] (0xc00028e320) (5) Data frame sent\nI0325 13:29:41.297870 492 log.go:172] (0xc000a60420) Data frame received for 5\nI0325 13:29:41.297886 492 log.go:172] (0xc00028e320) (5) Data frame handling\nI0325 13:29:41.297909 492 log.go:172] (0xc000a60420) Data frame received for 3\nI0325 13:29:41.297933 492 log.go:172] (0xc000617ea0) (3) Data frame handling\nI0325 13:29:41.297951 492 log.go:172] (0xc000617ea0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0325 13:29:41.297967 492 log.go:172] (0xc000a60420) Data frame received for 3\nI0325 13:29:41.297982 492 log.go:172] (0xc000617ea0) (3) Data frame handling\nI0325 13:29:41.299581 492 log.go:172] (0xc000a60420) Data frame received for 1\nI0325 13:29:41.299592 492 log.go:172] (0xc00028eaa0) (1) Data frame handling\nI0325 13:29:41.299602 492 log.go:172] (0xc00028eaa0) (1) Data frame sent\nI0325 13:29:41.299609 492 log.go:172] (0xc000a60420) (0xc00028eaa0) Stream removed, broadcasting: 1\nI0325 13:29:41.299790 492 log.go:172] (0xc000a60420) (0xc00028eaa0) Stream removed, broadcasting: 1\nI0325 13:29:41.299805 492 log.go:172] (0xc000a60420) (0xc000617ea0) Stream removed, broadcasting: 3\nI0325 13:29:41.299825 492 log.go:172] (0xc000a60420) (0xc00028e320) Stream removed, broadcasting: 5\n" Mar 25 13:29:41.304: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 25 13:29:41.304: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 25 13:29:41.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 25 13:29:41.529: INFO: stderr: "I0325 13:29:41.441582 512 log.go:172] (0xc00012adc0) (0xc000280820) Create stream\nI0325 13:29:41.441658 512 log.go:172] (0xc00012adc0) (0xc000280820) Stream added, broadcasting: 1\nI0325 13:29:41.444279 512 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0325 13:29:41.444324 512 log.go:172] (0xc00012adc0) (0xc0008ac000) Create stream\nI0325 13:29:41.444336 512 log.go:172] (0xc00012adc0) (0xc0008ac000) Stream added, broadcasting: 3\nI0325 13:29:41.445550 512 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0325 13:29:41.445594 512 log.go:172] (0xc00012adc0) (0xc000942000) Create stream\nI0325 13:29:41.445616 512 log.go:172] (0xc00012adc0) (0xc000942000) Stream added, broadcasting: 5\nI0325 13:29:41.446727 512 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0325 13:29:41.522361 512 log.go:172] (0xc00012adc0) Data frame received for 5\nI0325 13:29:41.522407 512 log.go:172] (0xc000942000) (5) Data frame handling\nI0325 13:29:41.522421 512 log.go:172] (0xc000942000) (5) Data frame sent\nI0325 13:29:41.522434 512 log.go:172] (0xc00012adc0) Data frame received for 5\nI0325 13:29:41.522443 512 log.go:172] (0xc000942000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0325 13:29:41.522495 512 log.go:172] (0xc00012adc0) Data 
frame received for 3\nI0325 13:29:41.522527 512 log.go:172] (0xc0008ac000) (3) Data frame handling\nI0325 13:29:41.522551 512 log.go:172] (0xc0008ac000) (3) Data frame sent\nI0325 13:29:41.522564 512 log.go:172] (0xc00012adc0) Data frame received for 3\nI0325 13:29:41.522574 512 log.go:172] (0xc0008ac000) (3) Data frame handling\nI0325 13:29:41.523847 512 log.go:172] (0xc00012adc0) Data frame received for 1\nI0325 13:29:41.523858 512 log.go:172] (0xc000280820) (1) Data frame handling\nI0325 13:29:41.523864 512 log.go:172] (0xc000280820) (1) Data frame sent\nI0325 13:29:41.524090 512 log.go:172] (0xc00012adc0) (0xc000280820) Stream removed, broadcasting: 1\nI0325 13:29:41.524188 512 log.go:172] (0xc00012adc0) Go away received\nI0325 13:29:41.524350 512 log.go:172] (0xc00012adc0) (0xc000280820) Stream removed, broadcasting: 1\nI0325 13:29:41.524364 512 log.go:172] (0xc00012adc0) (0xc0008ac000) Stream removed, broadcasting: 3\nI0325 13:29:41.524370 512 log.go:172] (0xc00012adc0) (0xc000942000) Stream removed, broadcasting: 5\n" Mar 25 13:29:41.529: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 25 13:29:41.529: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 25 13:29:41.533: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 25 13:29:51.538: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 25 13:29:51.538: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 25 13:29:51.538: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 25 13:29:51.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8844 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 25 13:29:51.764: INFO: stderr: "I0325 13:29:51.670791 536 log.go:172] (0xc0009b64d0) (0xc000670820) Create stream\nI0325 13:29:51.670864 536 log.go:172] (0xc0009b64d0) (0xc000670820) Stream added, broadcasting: 1\nI0325 13:29:51.674823 536 log.go:172] (0xc0009b64d0) Reply frame received for 1\nI0325 13:29:51.674888 536 log.go:172] (0xc0009b64d0) (0xc000670000) Create stream\nI0325 13:29:51.674913 536 log.go:172] (0xc0009b64d0) (0xc000670000) Stream added, broadcasting: 3\nI0325 13:29:51.675862 536 log.go:172] (0xc0009b64d0) Reply frame received for 3\nI0325 13:29:51.675903 536 log.go:172] (0xc0009b64d0) (0xc0002800a0) Create stream\nI0325 13:29:51.675919 536 log.go:172] (0xc0009b64d0) (0xc0002800a0) Stream added, broadcasting: 5\nI0325 13:29:51.676679 536 log.go:172] (0xc0009b64d0) Reply frame received for 5\nI0325 13:29:51.759683 536 log.go:172] (0xc0009b64d0) Data frame received for 5\nI0325 13:29:51.759717 536 log.go:172] (0xc0002800a0) (5) Data frame handling\nI0325 13:29:51.759732 536 log.go:172] (0xc0002800a0) (5) Data frame sent\nI0325 13:29:51.759744 536 log.go:172] (0xc0009b64d0) Data frame received for 5\nI0325 13:29:51.759754 536 log.go:172] (0xc0002800a0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0325 13:29:51.759781 536 log.go:172] (0xc0009b64d0) Data frame received for 3\nI0325 13:29:51.759800 536 log.go:172] (0xc000670000) (3) Data frame handling\nI0325 13:29:51.759809 536 log.go:172] (0xc000670000) (3) Data frame sent\nI0325 13:29:51.759814 536 log.go:172] (0xc0009b64d0) Data frame 
received for 3\nI0325 13:29:51.759819 536 log.go:172] (0xc000670000) (3) Data frame handling\nI0325 13:29:51.760885 536 log.go:172] (0xc0009b64d0) Data frame received for 1\nI0325 13:29:51.760926 536 log.go:172] (0xc000670820) (1) Data frame handling\nI0325 13:29:51.760937 536 log.go:172] (0xc000670820) (1) Data frame sent\nI0325 13:29:51.760948 536 log.go:172] (0xc0009b64d0) (0xc000670820) Stream removed, broadcasting: 1\nI0325 13:29:51.760971 536 log.go:172] (0xc0009b64d0) Go away received\nI0325 13:29:51.761408 536 log.go:172] (0xc0009b64d0) (0xc000670820) Stream removed, broadcasting: 1\nI0325 13:29:51.761423 536 log.go:172] (0xc0009b64d0) (0xc000670000) Stream removed, broadcasting: 3\nI0325 13:29:51.761429 536 log.go:172] (0xc0009b64d0) (0xc0002800a0) Stream removed, broadcasting: 5\n" Mar 25 13:29:51.764: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 25 13:29:51.764: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 25 13:29:51.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8844 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 25 13:29:51.986: INFO: stderr: "I0325 13:29:51.890024 559 log.go:172] (0xc0009e0420) (0xc00067a820) Create stream\nI0325 13:29:51.890075 559 log.go:172] (0xc0009e0420) (0xc00067a820) Stream added, broadcasting: 1\nI0325 13:29:51.893943 559 log.go:172] (0xc0009e0420) Reply frame received for 1\nI0325 13:29:51.893981 559 log.go:172] (0xc0009e0420) (0xc00067a000) Create stream\nI0325 13:29:51.893992 559 log.go:172] (0xc0009e0420) (0xc00067a000) Stream added, broadcasting: 3\nI0325 13:29:51.895073 559 log.go:172] (0xc0009e0420) Reply frame received for 3\nI0325 13:29:51.895115 559 log.go:172] (0xc0009e0420) (0xc000684280) Create stream\nI0325 13:29:51.895129 559 log.go:172] (0xc0009e0420) (0xc000684280) Stream added, broadcasting: 5\nI0325 13:29:51.896079 559 log.go:172] (0xc0009e0420) Reply frame received for 5\nI0325 13:29:51.953007 559 log.go:172] (0xc0009e0420) Data frame received for 5\nI0325 13:29:51.953030 559 log.go:172] (0xc000684280) (5) Data frame handling\nI0325 13:29:51.953041 559 log.go:172] (0xc000684280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0325 13:29:51.980455 559 log.go:172] (0xc0009e0420) Data frame received for 3\nI0325 13:29:51.980491 559 log.go:172] (0xc00067a000) (3) Data frame handling\nI0325 13:29:51.980610 559 log.go:172] (0xc00067a000) (3) Data frame sent\nI0325 13:29:51.980631 559 log.go:172] (0xc0009e0420) Data frame received for 3\nI0325 13:29:51.980642 559 log.go:172] (0xc00067a000) (3) Data frame handling\nI0325 13:29:51.980678 559 log.go:172] (0xc0009e0420) Data frame received for 5\nI0325 13:29:51.980719 559 log.go:172] (0xc000684280) (5) Data frame handling\nI0325 13:29:51.982504 559 log.go:172] (0xc0009e0420) Data frame received for 1\nI0325 13:29:51.982530 559 log.go:172] (0xc00067a820) (1) Data frame handling\nI0325 13:29:51.982545 559 log.go:172] (0xc00067a820) (1) Data frame sent\nI0325 13:29:51.982559 559 log.go:172] (0xc0009e0420) (0xc00067a820) Stream removed, broadcasting: 1\nI0325 13:29:51.982577 559 log.go:172] (0xc0009e0420) Go away received\nI0325 13:29:51.982991 559 log.go:172] (0xc0009e0420) (0xc00067a820) Stream removed, broadcasting: 1\nI0325 13:29:51.983020 559 log.go:172] (0xc0009e0420) (0xc00067a000) Stream removed, broadcasting: 3\nI0325 13:29:51.983033 559 log.go:172] 
(0xc0009e0420) (0xc000684280) Stream removed, broadcasting: 5\n" Mar 25 13:29:51.986: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 25 13:29:51.986: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 25 13:29:51.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8844 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 25 13:29:52.187: INFO: stderr: "I0325 13:29:52.103213 580 log.go:172] (0xc000a0e630) (0xc000534aa0) Create stream\nI0325 13:29:52.103274 580 log.go:172] (0xc000a0e630) (0xc000534aa0) Stream added, broadcasting: 1\nI0325 13:29:52.106795 580 log.go:172] (0xc000a0e630) Reply frame received for 1\nI0325 13:29:52.106858 580 log.go:172] (0xc000a0e630) (0xc0005341e0) Create stream\nI0325 13:29:52.106882 580 log.go:172] (0xc000a0e630) (0xc0005341e0) Stream added, broadcasting: 3\nI0325 13:29:52.107734 580 log.go:172] (0xc000a0e630) Reply frame received for 3\nI0325 13:29:52.107771 580 log.go:172] (0xc000a0e630) (0xc00001a000) Create stream\nI0325 13:29:52.107783 580 log.go:172] (0xc000a0e630) (0xc00001a000) Stream added, broadcasting: 5\nI0325 13:29:52.108808 580 log.go:172] (0xc000a0e630) Reply frame received for 5\nI0325 13:29:52.154848 580 log.go:172] (0xc000a0e630) Data frame received for 5\nI0325 13:29:52.154888 580 log.go:172] (0xc00001a000) (5) Data frame handling\nI0325 13:29:52.154916 580 log.go:172] (0xc00001a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0325 13:29:52.180354 580 log.go:172] (0xc000a0e630) Data frame received for 3\nI0325 13:29:52.180396 580 log.go:172] (0xc0005341e0) (3) Data frame handling\nI0325 13:29:52.180437 580 log.go:172] (0xc0005341e0) (3) Data frame sent\nI0325 13:29:52.180463 580 log.go:172] (0xc000a0e630) Data frame received for 3\nI0325 13:29:52.180491 580 log.go:172] (0xc0005341e0) (3) Data frame handling\nI0325 13:29:52.180707 580 log.go:172] (0xc000a0e630) Data frame received for 5\nI0325 13:29:52.180730 580 log.go:172] (0xc00001a000) (5) Data frame handling\nI0325 13:29:52.182494 580 log.go:172] (0xc000a0e630) Data frame received for 1\nI0325 13:29:52.182540 580 log.go:172] (0xc000534aa0) (1) Data frame handling\nI0325 13:29:52.182567 580 log.go:172] (0xc000534aa0) (1) Data frame sent\nI0325 13:29:52.182600 580 log.go:172] (0xc000a0e630) (0xc000534aa0) Stream removed, broadcasting: 1\nI0325 13:29:52.182771 580 log.go:172] (0xc000a0e630) Go away received\nI0325 13:29:52.183019 580 log.go:172] (0xc000a0e630) (0xc000534aa0) Stream removed, broadcasting: 1\nI0325 13:29:52.183071 580 log.go:172] (0xc000a0e630) (0xc0005341e0) Stream removed, broadcasting: 3\nI0325 13:29:52.183107 580 log.go:172] (0xc000a0e630) (0xc00001a000) Stream removed, broadcasting: 5\n" Mar 25 13:29:52.188: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 25 13:29:52.188: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 25 13:29:52.188: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 13:29:52.191: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 25 13:30:02.200: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 25 13:30:02.200: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 25 13:30:02.200: 
INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 25 13:30:02.214: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 13:30:02.214: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC }] Mar 25 13:30:02.214: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC }] Mar 25 13:30:02.214: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC }] Mar 25 13:30:02.214: INFO: Mar 25 13:30:02.214: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 25 13:30:03.217: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 13:30:03.217: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC }] Mar 25 13:30:03.217: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC }] Mar 25 13:30:03.217: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC }] Mar 25 13:30:03.217: INFO: Mar 25 13:30:03.217: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 25 13:30:04.223: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 13:30:04.223: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 
13:29:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC }] Mar 25 13:30:04.223: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC }] Mar 25 13:30:04.223: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC }] Mar 25 13:30:04.223: INFO: Mar 25 13:30:04.223: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 25 13:30:05.227: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 13:30:05.227: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC }] Mar 25 13:30:05.227: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC }] Mar 25 13:30:05.227: INFO: Mar 25 13:30:05.227: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 25 13:30:06.232: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 13:30:06.232: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC }] Mar 25 13:30:06.233: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC }] Mar 25 13:30:06.233: INFO: Mar 25 13:30:06.233: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 25 13:30:07.237: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 13:30:07.237: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC }] Mar 25 13:30:07.237: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC }] Mar 25 13:30:07.237: INFO: Mar 25 13:30:07.237: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 25 13:30:08.242: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 13:30:08.242: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC }] Mar 25 13:30:08.242: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC }] Mar 25 13:30:08.242: INFO: Mar 25 13:30:08.242: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 25 13:30:09.247: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 13:30:09.247: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC }] Mar 25 13:30:09.248: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 
13:29:30 +0000 UTC }] Mar 25 13:30:09.248: INFO: Mar 25 13:30:09.248: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 25 13:30:10.253: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 13:30:10.253: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC }] Mar 25 13:30:10.253: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC }] Mar 25 13:30:10.253: INFO: Mar 25 13:30:10.253: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 25 13:30:11.258: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 13:30:11.258: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:10 +0000 UTC }] Mar 25 13:30:11.258: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:29:30 +0000 UTC }] Mar 25 13:30:11.258: INFO: Mar 25 13:30:11.258: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8844 Mar 25 13:30:12.262: INFO: Scaling statefulset ss to 0 Mar 25 13:30:12.273: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 25 13:30:12.276: INFO: Deleting all statefulset in ns statefulset-8844 Mar 25 13:30:12.279: INFO: Scaling statefulset ss to 0 Mar 25 13:30:12.288: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 13:30:12.290: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:30:12.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8844" for this suite. 
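
Note: the repeated kubectl exec calls above are how this test toggles readiness: moving index.html out of the nginx web root makes the HTTP readiness probe fail, and moving it back restores it. "Burst scaling" means the StatefulSet runs with podManagementPolicy: Parallel, so scale-up and scale-down proceed without waiting on unhealthy pods. A sketch under those assumptions (service, names, and probe details are illustrative; the suite's exact probe configuration is not shown in the log):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Service
  metadata:
    name: test
  spec:
    clusterIP: None                 # headless service required by the StatefulSet
    selector: {app: ss}
  ---
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss
  spec:
    serviceName: test
    podManagementPolicy: Parallel   # burst: do not serialize on Ready pods
    replicas: 1
    selector:
      matchLabels: {app: ss}
    template:
      metadata:
        labels: {app: ss}
      spec:
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.14-alpine
          readinessProbe:
            httpGet: {path: /index.html, port: 80}
  EOF
  # Break readiness on ss-0, then scale; Parallel ordering proceeds anyway.
  kubectl exec ss-0 -- mv /usr/share/nginx/html/index.html /tmp/
  kubectl scale statefulset ss --replicas=3
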
Mar 25 13:30:18.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:30:18.446: INFO: namespace statefulset-8844 deletion completed in 6.094068458s • [SLOW TEST:68.078 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:30:18.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-6f5063b0-2076-479e-b469-9106ea7a7a2e STEP: Creating a pod to test consume secrets Mar 25 13:30:18.552: INFO: Waiting up to 5m0s for pod "pod-secrets-2919a937-ea75-4e25-a830-c34044422e82" in namespace "secrets-7282" to be "success or failure" Mar 25 13:30:18.557: INFO: Pod "pod-secrets-2919a937-ea75-4e25-a830-c34044422e82": Phase="Pending", Reason="", readiness=false. Elapsed: 3.997086ms Mar 25 13:30:20.560: INFO: Pod "pod-secrets-2919a937-ea75-4e25-a830-c34044422e82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007627302s Mar 25 13:30:22.564: INFO: Pod "pod-secrets-2919a937-ea75-4e25-a830-c34044422e82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011778845s STEP: Saw pod success Mar 25 13:30:22.564: INFO: Pod "pod-secrets-2919a937-ea75-4e25-a830-c34044422e82" satisfied condition "success or failure" Mar 25 13:30:22.567: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-2919a937-ea75-4e25-a830-c34044422e82 container secret-volume-test: STEP: delete the pod Mar 25 13:30:22.588: INFO: Waiting for pod pod-secrets-2919a937-ea75-4e25-a830-c34044422e82 to disappear Mar 25 13:30:22.593: INFO: Pod pod-secrets-2919a937-ea75-4e25-a830-c34044422e82 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:30:22.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7282" for this suite. 
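
Note: the pod above runs as a non-root user yet still reads the secret; that works because fsGroup makes the kubelet chown the projected secret files to the supplemental group, while defaultMode controls their permission bits. A sketch (secret name, uid/gid, mode, and image are illustrative; the suite uses its own test image):

  kubectl create secret generic test-secret --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000           # non-root
      fsGroup: 2000             # secret files are chowned to this gid
    containers:
    - name: secret-volume-test
      image: docker.io/library/nginx:1.14-alpine
      command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
      volumeMounts:
      - {name: secret-volume, mountPath: /etc/secret-volume}
    volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
        defaultMode: 0440       # group-readable, so uid 1000 can read via gid 2000
  EOF
  kubectl logs pod-secrets-demo
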
Mar 25 13:30:28.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:30:28.688: INFO: namespace secrets-7282 deletion completed in 6.091434807s • [SLOW TEST:10.241 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:30:28.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 13:30:28.771: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 25 13:30:28.776: INFO: Number of nodes with available pods: 0 Mar 25 13:30:28.776: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
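
Note: a "complex daemon" here is a DaemonSet constrained by a nodeSelector, so relabeling a node into and out of the selector schedules and then unschedules the daemon pod, which is what the steps below exercise. A hand-driven sketch (label key/value and image are illustrative; the node name is taken from the log):

  kubectl label node iruya-worker color=blue
  cat <<'EOF' | kubectl apply -f -
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels: {app: daemon-set}
    updateStrategy: {type: RollingUpdate}
    template:
      metadata:
        labels: {app: daemon-set}
      spec:
        nodeSelector: {color: blue}   # only nodes labeled color=blue run the pod
        containers:
        - name: app
          image: docker.io/library/nginx:1.14-alpine
  EOF
  # Relabel: the controller removes the daemon pod from the node that left the selector.
  kubectl label node iruya-worker color=green --overwrite
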
Mar 25 13:30:28.849: INFO: Number of nodes with available pods: 0 Mar 25 13:30:28.849: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:29.853: INFO: Number of nodes with available pods: 0 Mar 25 13:30:29.853: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:30.853: INFO: Number of nodes with available pods: 0 Mar 25 13:30:30.853: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:31.854: INFO: Number of nodes with available pods: 1 Mar 25 13:30:31.854: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 25 13:30:31.909: INFO: Number of nodes with available pods: 1 Mar 25 13:30:31.909: INFO: Number of running nodes: 0, number of available pods: 1 Mar 25 13:30:32.914: INFO: Number of nodes with available pods: 0 Mar 25 13:30:32.914: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 25 13:30:32.926: INFO: Number of nodes with available pods: 0 Mar 25 13:30:32.926: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:33.930: INFO: Number of nodes with available pods: 0 Mar 25 13:30:33.930: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:34.930: INFO: Number of nodes with available pods: 0 Mar 25 13:30:34.930: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:35.930: INFO: Number of nodes with available pods: 0 Mar 25 13:30:35.930: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:36.930: INFO: Number of nodes with available pods: 0 Mar 25 13:30:36.931: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:37.930: INFO: Number of nodes with available pods: 0 Mar 25 13:30:37.931: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:38.931: INFO: Number of nodes with available pods: 0 Mar 25 13:30:38.931: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:39.930: INFO: Number of nodes with available pods: 0 Mar 25 13:30:39.930: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:40.931: INFO: Number of nodes with available pods: 0 Mar 25 13:30:40.931: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:41.930: INFO: Number of nodes with available pods: 0 Mar 25 13:30:41.931: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:42.931: INFO: Number of nodes with available pods: 0 Mar 25 13:30:42.931: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:43.930: INFO: Number of nodes with available pods: 0 Mar 25 13:30:43.930: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:30:44.930: INFO: Number of nodes with available pods: 1 Mar 25 13:30:44.930: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3359, will wait for the garbage collector to delete the pods Mar 25 13:30:44.994: INFO: Deleting DaemonSet.extensions daemon-set took: 6.388586ms Mar 25 13:30:45.294: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.288278ms Mar 25 13:30:48.909: INFO: Number of nodes with available pods: 0 Mar 25 13:30:48.909: INFO: 
Number of running nodes: 0, number of available pods: 0 Mar 25 13:30:48.912: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3359/daemonsets","resourceVersion":"1778717"},"items":null} Mar 25 13:30:48.915: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3359/pods","resourceVersion":"1778717"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:30:48.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3359" for this suite. Mar 25 13:30:54.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:30:55.034: INFO: namespace daemonsets-3359 deletion completed in 6.088799935s • [SLOW TEST:26.346 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:30:55.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 13:30:55.105: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f71be69-0e7b-410f-94a8-e532d75b8a51" in namespace "projected-3416" to be "success or failure" Mar 25 13:30:55.108: INFO: Pod "downwardapi-volume-2f71be69-0e7b-410f-94a8-e532d75b8a51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.846986ms Mar 25 13:30:57.112: INFO: Pod "downwardapi-volume-2f71be69-0e7b-410f-94a8-e532d75b8a51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006834553s Mar 25 13:30:59.116: INFO: Pod "downwardapi-volume-2f71be69-0e7b-410f-94a8-e532d75b8a51": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010847817s STEP: Saw pod success Mar 25 13:30:59.116: INFO: Pod "downwardapi-volume-2f71be69-0e7b-410f-94a8-e532d75b8a51" satisfied condition "success or failure" Mar 25 13:30:59.119: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2f71be69-0e7b-410f-94a8-e532d75b8a51 container client-container: STEP: delete the pod Mar 25 13:30:59.138: INFO: Waiting for pod downwardapi-volume-2f71be69-0e7b-410f-94a8-e532d75b8a51 to disappear Mar 25 13:30:59.142: INFO: Pod downwardapi-volume-2f71be69-0e7b-410f-94a8-e532d75b8a51 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:30:59.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3416" for this suite. Mar 25 13:31:05.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:31:05.240: INFO: namespace projected-3416 deletion completed in 6.095432817s • [SLOW TEST:10.205 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:31:05.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:31:09.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9684" for this suite. 
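
Note: the wrapper-volume test above boils down to one pod mounting a Secret and a ConfigMap side by side and verifying that the two atomically-written volumes do not conflict, hence the "Cleaning up the secret/configmap/pod" steps. A sketch (all names and the image are illustrative):

  kubectl create secret generic wrapper-secret --from-literal=k=v
  kubectl create configmap wrapper-cm --from-literal=k=v
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: wrapper-demo
  spec:
    containers:
    - name: main
      image: docker.io/library/nginx:1.14-alpine
      volumeMounts:
      - {name: s, mountPath: /etc/s}
      - {name: c, mountPath: /etc/c}
    volumes:
    - name: s
      secret: {secretName: wrapper-secret}
    - name: c
      configMap: {name: wrapper-cm}
  EOF
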
Mar 25 13:31:15.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:31:15.508: INFO: namespace emptydir-wrapper-9684 deletion completed in 6.106535173s • [SLOW TEST:10.267 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:31:15.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-3201/secret-test-0f5e4ab0-41c7-4b61-8575-58da6caabef8 STEP: Creating a pod to test consume secrets Mar 25 13:31:15.605: INFO: Waiting up to 5m0s for pod "pod-configmaps-623b009e-18d0-42c3-aec4-28ae7e2e408f" in namespace "secrets-3201" to be "success or failure" Mar 25 13:31:15.635: INFO: Pod "pod-configmaps-623b009e-18d0-42c3-aec4-28ae7e2e408f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.815911ms Mar 25 13:31:17.639: INFO: Pod "pod-configmaps-623b009e-18d0-42c3-aec4-28ae7e2e408f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033652629s Mar 25 13:31:19.643: INFO: Pod "pod-configmaps-623b009e-18d0-42c3-aec4-28ae7e2e408f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037577594s STEP: Saw pod success Mar 25 13:31:19.643: INFO: Pod "pod-configmaps-623b009e-18d0-42c3-aec4-28ae7e2e408f" satisfied condition "success or failure" Mar 25 13:31:19.647: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-623b009e-18d0-42c3-aec4-28ae7e2e408f container env-test: STEP: delete the pod Mar 25 13:31:19.667: INFO: Waiting for pod pod-configmaps-623b009e-18d0-42c3-aec4-28ae7e2e408f to disappear Mar 25 13:31:19.678: INFO: Pod pod-configmaps-623b009e-18d0-42c3-aec4-28ae7e2e408f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:31:19.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3201" for this suite. 
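
Note: unlike the volume tests, this one consumes the secret through the environment (the "pod-configmaps-..." name in the log is just the fixture's generic name). A sketch using env valueFrom (secret name, key, and image are illustrative):

  kubectl create secret generic secret-env-demo --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: env-test
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: docker.io/library/nginx:1.14-alpine
      command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef: {name: secret-env-demo, key: data-1}
  EOF
  kubectl logs env-test          # SECRET_DATA=value-1
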
Mar 25 13:31:25.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:31:25.772: INFO: namespace secrets-3201 deletion completed in 6.087616664s • [SLOW TEST:10.264 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:31:25.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 25 13:31:25.836: INFO: Waiting up to 5m0s for pod "pod-88c26bd4-ee19-4d06-ac23-6bfb1f87ad30" in namespace "emptydir-6178" to be "success or failure" Mar 25 13:31:25.846: INFO: Pod "pod-88c26bd4-ee19-4d06-ac23-6bfb1f87ad30": Phase="Pending", Reason="", readiness=false. Elapsed: 10.162085ms Mar 25 13:31:27.850: INFO: Pod "pod-88c26bd4-ee19-4d06-ac23-6bfb1f87ad30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014495648s Mar 25 13:31:29.855: INFO: Pod "pod-88c26bd4-ee19-4d06-ac23-6bfb1f87ad30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018910943s STEP: Saw pod success Mar 25 13:31:29.855: INFO: Pod "pod-88c26bd4-ee19-4d06-ac23-6bfb1f87ad30" satisfied condition "success or failure" Mar 25 13:31:29.858: INFO: Trying to get logs from node iruya-worker2 pod pod-88c26bd4-ee19-4d06-ac23-6bfb1f87ad30 container test-container: STEP: delete the pod Mar 25 13:31:29.888: INFO: Waiting for pod pod-88c26bd4-ee19-4d06-ac23-6bfb1f87ad30 to disappear Mar 25 13:31:29.900: INFO: Pod pod-88c26bd4-ee19-4d06-ac23-6bfb1f87ad30 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:31:29.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6178" for this suite. 
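Annotation: the tuple (non-root,0644,tmpfs) in the spec name decodes as: run as a non-root UID, expect files created with mode 0644, and back the emptyDir with memory (tmpfs). A sketch of a pod exercising the same combination (illustrative names and UID; same imports as above):

// tmpfsPod writes and chmods a file on a memory-backed emptyDir while
// running as a non-root user, then lists it so the mode can be checked.
func tmpfsPod() *corev1.Pod {
	uid := int64(1001) // illustrative non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c",
					"echo hi > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/test"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}

The sibling permutations in this run (0777/default medium just below, root/0666/default further on, and the "correct mode" spec) vary only the UID, the requested mode, and the Medium field.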
Mar 25 13:31:35.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:31:36.079: INFO: namespace emptydir-6178 deletion completed in 6.156523156s • [SLOW TEST:10.307 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:31:36.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 25 13:31:36.163: INFO: Waiting up to 5m0s for pod "pod-e16b40df-aed1-4920-8bc3-b0ee1b5acb1b" in namespace "emptydir-2091" to be "success or failure" Mar 25 13:31:36.167: INFO: Pod "pod-e16b40df-aed1-4920-8bc3-b0ee1b5acb1b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.827406ms Mar 25 13:31:38.174: INFO: Pod "pod-e16b40df-aed1-4920-8bc3-b0ee1b5acb1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010913688s Mar 25 13:31:40.178: INFO: Pod "pod-e16b40df-aed1-4920-8bc3-b0ee1b5acb1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015702701s STEP: Saw pod success Mar 25 13:31:40.179: INFO: Pod "pod-e16b40df-aed1-4920-8bc3-b0ee1b5acb1b" satisfied condition "success or failure" Mar 25 13:31:40.182: INFO: Trying to get logs from node iruya-worker pod pod-e16b40df-aed1-4920-8bc3-b0ee1b5acb1b container test-container: STEP: delete the pod Mar 25 13:31:40.210: INFO: Waiting for pod pod-e16b40df-aed1-4920-8bc3-b0ee1b5acb1b to disappear Mar 25 13:31:40.214: INFO: Pod pod-e16b40df-aed1-4920-8bc3-b0ee1b5acb1b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:31:40.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2091" for this suite. 
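Annotation: every one of these specs waits on the same "success or failure" condition that produces the Pending/Pending/Succeeded triplets in the log. A sketch of that polling loop; the context-free Get signature matches the client-go vintage of this v1.15 run (newer client-go threads a context.Context through these calls). Extra imports beyond the earlier sketch: "fmt", "time", "k8s.io/apimachinery/pkg/util/wait", "k8s.io/client-go/kubernetes".

// waitForPodSuccess polls the pod's phase until Succeeded, failing fast on
// Failed, mirroring the "success or failure" condition in the log above.
func waitForPodSuccess(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed: %s", ns, name, pod.Status.Reason)
		}
		return false, nil // still Pending or Running; keep polling
	})
}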
Mar 25 13:31:46.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:31:46.312: INFO: namespace emptydir-2091 deletion completed in 6.094017285s • [SLOW TEST:10.232 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:31:46.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-d8f53393-3cd4-41e0-b13f-c2cc71b51888 STEP: Creating a pod to test consume configMaps Mar 25 13:31:46.401: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-638f7c7a-da01-4799-aee3-547947be7c4c" in namespace "projected-7203" to be "success or failure" Mar 25 13:31:46.419: INFO: Pod "pod-projected-configmaps-638f7c7a-da01-4799-aee3-547947be7c4c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.201247ms Mar 25 13:31:48.424: INFO: Pod "pod-projected-configmaps-638f7c7a-da01-4799-aee3-547947be7c4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022651102s Mar 25 13:31:50.428: INFO: Pod "pod-projected-configmaps-638f7c7a-da01-4799-aee3-547947be7c4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027161902s STEP: Saw pod success Mar 25 13:31:50.428: INFO: Pod "pod-projected-configmaps-638f7c7a-da01-4799-aee3-547947be7c4c" satisfied condition "success or failure" Mar 25 13:31:50.432: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-638f7c7a-da01-4799-aee3-547947be7c4c container projected-configmap-volume-test: STEP: delete the pod Mar 25 13:31:50.455: INFO: Waiting for pod pod-projected-configmaps-638f7c7a-da01-4799-aee3-547947be7c4c to disappear Mar 25 13:31:50.478: INFO: Pod pod-projected-configmaps-638f7c7a-da01-4799-aee3-547947be7c4c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:31:50.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7203" for this suite. 
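Annotation: "with mappings" means the projected ConfigMap is not dumped key-by-key into the mount root; each key is routed to an explicit relative path via KeyToPath items. A sketch of such a volume (illustrative names; same imports as above):

// projectedCMVolume maps one ConfigMap key to an explicit file path under
// the mount, with a volume-wide default mode.
func projectedCMVolume() corev1.Volume {
	mode := int32(0644)
	return corev1.Volume{
		Name: "projected-cm",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",         // key in the ConfigMap
							Path: "path/to/data-2", // file name inside the mount
						}},
					},
				}},
			},
		},
	}
}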
Mar 25 13:31:56.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:31:56.598: INFO: namespace projected-7203 deletion completed in 6.116040553s • [SLOW TEST:10.286 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:31:56.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Mar 25 13:32:01.250: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3363 pod-service-account-d1cab9b8-6c88-477c-bc77-89eed9419145 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 25 13:32:01.486: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3363 pod-service-account-d1cab9b8-6c88-477c-bc77-89eed9419145 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 25 13:32:01.674: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3363 pod-service-account-d1cab9b8-6c88-477c-bc77-89eed9419145 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:32:01.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3363" for this suite. 
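Annotation: the three kubectl exec calls above read the files that the service-account admission controller mounts into every pod that has not opted out. From inside a container the same check is plain file I/O; standard library only, nothing assumed beyond the well-known mount path:

package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	// The same three files the test cats via kubectl exec.
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		b, err := ioutil.ReadFile(filepath.Join(saDir, f))
		if err != nil {
			fmt.Printf("%s: not mounted (%v)\n", f, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(b))
	}
}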
Mar 25 13:32:07.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:32:07.989: INFO: namespace svcaccounts-3363 deletion completed in 6.118169289s • [SLOW TEST:11.391 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:32:07.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-a85e19a7-9c47-4623-91cd-a8da63749d2c STEP: Creating a pod to test consume secrets Mar 25 13:32:08.064: INFO: Waiting up to 5m0s for pod "pod-secrets-a0d45d39-ee35-4eec-a556-f74a0c9f09f8" in namespace "secrets-9943" to be "success or failure" Mar 25 13:32:08.098: INFO: Pod "pod-secrets-a0d45d39-ee35-4eec-a556-f74a0c9f09f8": Phase="Pending", Reason="", readiness=false. Elapsed: 34.120951ms Mar 25 13:32:10.102: INFO: Pod "pod-secrets-a0d45d39-ee35-4eec-a556-f74a0c9f09f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037928534s Mar 25 13:32:12.106: INFO: Pod "pod-secrets-a0d45d39-ee35-4eec-a556-f74a0c9f09f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042589638s STEP: Saw pod success Mar 25 13:32:12.106: INFO: Pod "pod-secrets-a0d45d39-ee35-4eec-a556-f74a0c9f09f8" satisfied condition "success or failure" Mar 25 13:32:12.110: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-a0d45d39-ee35-4eec-a556-f74a0c9f09f8 container secret-volume-test: STEP: delete the pod Mar 25 13:32:12.145: INFO: Waiting for pod pod-secrets-a0d45d39-ee35-4eec-a556-f74a0c9f09f8 to disappear Mar 25 13:32:12.180: INFO: Pod pod-secrets-a0d45d39-ee35-4eec-a556-f74a0c9f09f8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:32:12.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9943" for this suite. 
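Annotation: "mappings and Item Mode" is the Secret-volume variant of the projection idea above: each key gets an explicit path, and a Mode on the item overrides the volume-wide default for that one file. Sketch (illustrative names; same imports as above):

// secretItemVolume remaps one secret key to a new path with its own mode.
func secretItemVolume() corev1.Volume {
	itemMode := int32(0400) // per-item mode overrides the volume default
	return corev1.Volume{
		Name: "secret-vol",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "demo-secret",
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "new-path-data-1",
					Mode: &itemMode,
				}},
			},
		},
	}
}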
Mar 25 13:32:18.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:32:18.316: INFO: namespace secrets-9943 deletion completed in 6.131409141s • [SLOW TEST:10.327 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:32:18.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 13:32:18.355: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e4581c8-d590-4bd8-9598-321ebc1f08f2" in namespace "downward-api-162" to be "success or failure" Mar 25 13:32:18.373: INFO: Pod "downwardapi-volume-4e4581c8-d590-4bd8-9598-321ebc1f08f2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.862328ms Mar 25 13:32:20.376: INFO: Pod "downwardapi-volume-4e4581c8-d590-4bd8-9598-321ebc1f08f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021003999s Mar 25 13:32:22.380: INFO: Pod "downwardapi-volume-4e4581c8-d590-4bd8-9598-321ebc1f08f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025361377s STEP: Saw pod success Mar 25 13:32:22.381: INFO: Pod "downwardapi-volume-4e4581c8-d590-4bd8-9598-321ebc1f08f2" satisfied condition "success or failure" Mar 25 13:32:22.384: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4e4581c8-d590-4bd8-9598-321ebc1f08f2 container client-container: STEP: delete the pod Mar 25 13:32:22.402: INFO: Waiting for pod downwardapi-volume-4e4581c8-d590-4bd8-9598-321ebc1f08f2 to disappear Mar 25 13:32:22.407: INFO: Pod downwardapi-volume-4e4581c8-d590-4bd8-9598-321ebc1f08f2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:32:22.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-162" for this suite. 
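Annotation: the downward-API spec above exposes the container's memory limit as a file in the pod. The key pieces are the resourceFieldRef and a divisor that fixes the unit. Sketch (extra import "k8s.io/apimachinery/pkg/api/resource"; the container name must match a real container in the pod):

// memoryLimitVolume surfaces limits.memory of the named container as the
// file "memory_limit", expressed in MiB via the divisor.
func memoryLimitVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
						Divisor:       resource.MustParse("1Mi"),
					},
				}},
			},
		},
	}
}

If the named container declares no memory limit, the kubelet substitutes the node's allocatable memory instead, which is exactly what the next spec in this log verifies.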
Mar 25 13:32:28.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:32:28.514: INFO: namespace downward-api-162 deletion completed in 6.104869159s • [SLOW TEST:10.198 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:32:28.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 13:32:28.571: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aae8a021-f5ee-4bee-8352-c1945157aeb4" in namespace "downward-api-3386" to be "success or failure" Mar 25 13:32:28.575: INFO: Pod "downwardapi-volume-aae8a021-f5ee-4bee-8352-c1945157aeb4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.75368ms Mar 25 13:32:30.579: INFO: Pod "downwardapi-volume-aae8a021-f5ee-4bee-8352-c1945157aeb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007869242s Mar 25 13:32:32.583: INFO: Pod "downwardapi-volume-aae8a021-f5ee-4bee-8352-c1945157aeb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011852771s STEP: Saw pod success Mar 25 13:32:32.583: INFO: Pod "downwardapi-volume-aae8a021-f5ee-4bee-8352-c1945157aeb4" satisfied condition "success or failure" Mar 25 13:32:32.586: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-aae8a021-f5ee-4bee-8352-c1945157aeb4 container client-container: STEP: delete the pod Mar 25 13:32:32.633: INFO: Waiting for pod downwardapi-volume-aae8a021-f5ee-4bee-8352-c1945157aeb4 to disappear Mar 25 13:32:32.637: INFO: Pod downwardapi-volume-aae8a021-f5ee-4bee-8352-c1945157aeb4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:32:32.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3386" for this suite. 
Mar 25 13:32:38.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:32:38.750: INFO: namespace downward-api-3386 deletion completed in 6.089875027s • [SLOW TEST:10.235 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:32:38.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Mar 25 13:32:38.826: INFO: Waiting up to 5m0s for pod "pod-fe3e89ee-c871-4c02-b82b-855776979966" in namespace "emptydir-3725" to be "success or failure" Mar 25 13:32:38.829: INFO: Pod "pod-fe3e89ee-c871-4c02-b82b-855776979966": Phase="Pending", Reason="", readiness=false. Elapsed: 2.742372ms Mar 25 13:32:40.832: INFO: Pod "pod-fe3e89ee-c871-4c02-b82b-855776979966": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006325839s Mar 25 13:32:42.836: INFO: Pod "pod-fe3e89ee-c871-4c02-b82b-855776979966": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009886445s STEP: Saw pod success Mar 25 13:32:42.836: INFO: Pod "pod-fe3e89ee-c871-4c02-b82b-855776979966" satisfied condition "success or failure" Mar 25 13:32:42.838: INFO: Trying to get logs from node iruya-worker pod pod-fe3e89ee-c871-4c02-b82b-855776979966 container test-container: STEP: delete the pod Mar 25 13:32:42.907: INFO: Waiting for pod pod-fe3e89ee-c871-4c02-b82b-855776979966 to disappear Mar 25 13:32:42.913: INFO: Pod pod-fe3e89ee-c871-4c02-b82b-855776979966 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:32:42.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3725" for this suite. 
Mar 25 13:32:48.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:32:49.006: INFO: namespace emptydir-3725 deletion completed in 6.088967417s • [SLOW TEST:10.256 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:32:49.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 25 13:32:49.089: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 13:32:49.096: INFO: Waiting for terminating namespaces to be deleted... Mar 25 13:32:49.099: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 25 13:32:49.104: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 25 13:32:49.104: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 13:32:49.104: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 25 13:32:49.104: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 13:32:49.104: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 25 13:32:49.110: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Mar 25 13:32:49.110: INFO: Container coredns ready: true, restart count 0 Mar 25 13:32:49.110: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Mar 25 13:32:49.110: INFO: Container coredns ready: true, restart count 0 Mar 25 13:32:49.110: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Mar 25 13:32:49.110: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 13:32:49.110: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Mar 25 13:32:49.110: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-6c333d06-b235-452e-b9d3-c542821f4d43 42 STEP: Trying to relaunch the pod, now with labels. 
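Annotation: the relaunched pod in this NodeSelector spec differs from the initial probe pod only by a nodeSelector pinning it to the freshly labeled node. A sketch of its shape; the label key and value are the ones shown in the log above, while the pod name and image are illustrative:

// nodeSelectorPod can only schedule onto the node carrying the label the
// test just applied.
func nodeSelectorPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-6c333d06-b235-452e-b9d3-c542821f4d43": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1", // illustrative
			}},
		},
	}
}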
STEP: removing the label kubernetes.io/e2e-6c333d06-b235-452e-b9d3-c542821f4d43 off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-6c333d06-b235-452e-b9d3-c542821f4d43 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:32:57.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7010" for this suite. Mar 25 13:33:15.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:33:15.367: INFO: namespace sched-pred-7010 deletion completed in 18.093395103s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:26.361 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:33:15.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 13:33:15.455: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 25 13:33:17.570: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:33:18.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9305" for this suite. 
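Annotation: the quota/RC interplay in the spec above is a ResourceQuota capping the namespace at two pods plus a ReplicationController asking for three replicas, which leaves a ReplicaFailure condition on the RC's status until it is scaled back within quota. A sketch of the two objects (extra import "k8s.io/apimachinery/pkg/api/resource"; the pause image and labels are illustrative):

// quotaAndRC builds a two-pod quota and an RC that exceeds it, the setup
// that surfaces the failure condition checked above.
func quotaAndRC() (*corev1.ResourceQuota, *corev1.ReplicationController) {
	replicas := int32(3) // more than the quota admits
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": "condition-test"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "condition-test"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "pause",
					Image: "k8s.gcr.io/pause:3.1", // illustrative
				}}},
			},
		},
	}
	return quota, rc
}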
Mar 25 13:33:24.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:33:24.922: INFO: namespace replication-controller-9305 deletion completed in 6.275032432s • [SLOW TEST:9.554 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:33:24.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:33:30.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7118" for this suite. Mar 25 13:33:52.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:33:52.168: INFO: namespace replication-controller-7118 deletion completed in 22.087106785s • [SLOW TEST:27.246 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:33:52.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 25 13:34:00.284: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 13:34:00.313: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 13:34:02.313: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 13:34:02.331: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 13:34:04.314: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 13:34:04.318: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 13:34:06.314: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 13:34:06.318: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 13:34:08.314: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 13:34:08.318: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 13:34:10.314: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 13:34:10.318: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 13:34:12.314: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 13:34:12.317: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:34:12.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9519" for this suite. Mar 25 13:34:34.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:34:34.417: INFO: namespace container-lifecycle-hook-9519 deletion completed in 22.096286482s • [SLOW TEST:42.248 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:34:34.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-5caf4cc0-ada7-4e5c-b54e-2f4914e2c4e5 STEP: Creating a pod to test consume configMaps Mar 25 13:34:34.513: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4b8bccbc-542d-443a-a75f-d709fa64ff19" in 
namespace "projected-2996" to be "success or failure" Mar 25 13:34:34.535: INFO: Pod "pod-projected-configmaps-4b8bccbc-542d-443a-a75f-d709fa64ff19": Phase="Pending", Reason="", readiness=false. Elapsed: 22.320428ms Mar 25 13:34:36.539: INFO: Pod "pod-projected-configmaps-4b8bccbc-542d-443a-a75f-d709fa64ff19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026002037s Mar 25 13:34:38.542: INFO: Pod "pod-projected-configmaps-4b8bccbc-542d-443a-a75f-d709fa64ff19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029100867s STEP: Saw pod success Mar 25 13:34:38.542: INFO: Pod "pod-projected-configmaps-4b8bccbc-542d-443a-a75f-d709fa64ff19" satisfied condition "success or failure" Mar 25 13:34:38.544: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-4b8bccbc-542d-443a-a75f-d709fa64ff19 container projected-configmap-volume-test: STEP: delete the pod Mar 25 13:34:38.575: INFO: Waiting for pod pod-projected-configmaps-4b8bccbc-542d-443a-a75f-d709fa64ff19 to disappear Mar 25 13:34:38.583: INFO: Pod pod-projected-configmaps-4b8bccbc-542d-443a-a75f-d709fa64ff19 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:34:38.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2996" for this suite. Mar 25 13:34:44.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:34:44.694: INFO: namespace projected-2996 deletion completed in 6.108254539s • [SLOW TEST:10.277 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:34:44.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 25 13:34:44.782: INFO: Waiting up to 5m0s for pod "pod-a1aa4772-005d-40a2-9d0f-a6e9d901d070" in namespace "emptydir-9822" to be "success or failure" Mar 25 13:34:44.790: INFO: Pod "pod-a1aa4772-005d-40a2-9d0f-a6e9d901d070": Phase="Pending", Reason="", readiness=false. Elapsed: 8.497888ms Mar 25 13:34:46.794: INFO: Pod "pod-a1aa4772-005d-40a2-9d0f-a6e9d901d070": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012232572s Mar 25 13:34:48.799: INFO: Pod "pod-a1aa4772-005d-40a2-9d0f-a6e9d901d070": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016896747s STEP: Saw pod success Mar 25 13:34:48.799: INFO: Pod "pod-a1aa4772-005d-40a2-9d0f-a6e9d901d070" satisfied condition "success or failure" Mar 25 13:34:48.802: INFO: Trying to get logs from node iruya-worker pod pod-a1aa4772-005d-40a2-9d0f-a6e9d901d070 container test-container: STEP: delete the pod Mar 25 13:34:48.836: INFO: Waiting for pod pod-a1aa4772-005d-40a2-9d0f-a6e9d901d070 to disappear Mar 25 13:34:48.844: INFO: Pod pod-a1aa4772-005d-40a2-9d0f-a6e9d901d070 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:34:48.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9822" for this suite. Mar 25 13:34:54.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:34:54.936: INFO: namespace emptydir-9822 deletion completed in 6.089223654s • [SLOW TEST:10.242 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:34:54.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-89125c98-8b58-4120-8414-4c479c42e15d STEP: Creating a pod to test consume configMaps Mar 25 13:34:55.026: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-30d46bf4-56fa-48fc-b2d7-815041efa582" in namespace "projected-7161" to be "success or failure" Mar 25 13:34:55.045: INFO: Pod "pod-projected-configmaps-30d46bf4-56fa-48fc-b2d7-815041efa582": Phase="Pending", Reason="", readiness=false. Elapsed: 19.358512ms Mar 25 13:34:57.050: INFO: Pod "pod-projected-configmaps-30d46bf4-56fa-48fc-b2d7-815041efa582": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023628628s Mar 25 13:34:59.054: INFO: Pod "pod-projected-configmaps-30d46bf4-56fa-48fc-b2d7-815041efa582": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02826315s STEP: Saw pod success Mar 25 13:34:59.054: INFO: Pod "pod-projected-configmaps-30d46bf4-56fa-48fc-b2d7-815041efa582" satisfied condition "success or failure" Mar 25 13:34:59.057: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-30d46bf4-56fa-48fc-b2d7-815041efa582 container projected-configmap-volume-test: STEP: delete the pod Mar 25 13:34:59.080: INFO: Waiting for pod pod-projected-configmaps-30d46bf4-56fa-48fc-b2d7-815041efa582 to disappear Mar 25 13:34:59.084: INFO: Pod pod-projected-configmaps-30d46bf4-56fa-48fc-b2d7-815041efa582 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:34:59.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7161" for this suite. Mar 25 13:35:05.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:35:05.179: INFO: namespace projected-7161 deletion completed in 6.091663361s • [SLOW TEST:10.241 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:35:05.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Mar 25 13:35:05.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2089' Mar 25 13:35:07.893: INFO: stderr: "" Mar 25 13:35:07.893: INFO: stdout: "pod/pause created\n" Mar 25 13:35:07.893: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 25 13:35:07.893: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2089" to be "running and ready" Mar 25 13:35:07.901: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.907356ms Mar 25 13:35:09.905: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011829705s Mar 25 13:35:11.909: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.015794688s Mar 25 13:35:11.909: INFO: Pod "pause" satisfied condition "running and ready" Mar 25 13:35:11.909: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Mar 25 13:35:11.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2089' Mar 25 13:35:12.003: INFO: stderr: "" Mar 25 13:35:12.003: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 25 13:35:12.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2089' Mar 25 13:35:12.103: INFO: stderr: "" Mar 25 13:35:12.103: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 25 13:35:12.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2089' Mar 25 13:35:12.196: INFO: stderr: "" Mar 25 13:35:12.196: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 25 13:35:12.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2089' Mar 25 13:35:12.294: INFO: stderr: "" Mar 25 13:35:12.294: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Mar 25 13:35:12.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2089' Mar 25 13:35:12.420: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 13:35:12.420: INFO: stdout: "pod \"pause\" force deleted\n" Mar 25 13:35:12.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2089' Mar 25 13:35:12.526: INFO: stderr: "No resources found.\n" Mar 25 13:35:12.526: INFO: stdout: "" Mar 25 13:35:12.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2089 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 25 13:35:12.623: INFO: stderr: "" Mar 25 13:35:12.623: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:35:12.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2089" for this suite. 
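Annotation: the two kubectl label invocations above map onto a single API verb, PATCH on the pod. Adding a label is an ordinary merge patch; removing one sets its value to null. A sketch against the context-free Patch signature of this run's client-go vintage (extra imports "k8s.io/apimachinery/pkg/types" and "k8s.io/client-go/kubernetes"; newer client-go adds a context argument and PatchOptions):

// relabel adds then removes the testing-label on the pause pod, the
// API-level equivalent of the two kubectl label calls above.
func relabel(c kubernetes.Interface, ns string) error {
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := c.CoreV1().Pods(ns).Patch("pause", types.MergePatchType, add); err != nil {
		return err
	}
	// A null value in a merge patch deletes the key, mirroring
	// `kubectl label pods pause testing-label-`.
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	_, err := c.CoreV1().Pods(ns).Patch("pause", types.MergePatchType, del)
	return err
}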
Mar 25 13:35:18.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:35:18.766: INFO: namespace kubectl-2089 deletion completed in 6.139383351s • [SLOW TEST:13.587 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:35:18.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4781, will wait for the garbage collector to delete the pods Mar 25 13:35:22.928: INFO: Deleting Job.batch foo took: 16.421437ms Mar 25 13:35:23.228: INFO: Terminating Job.batch foo pods took: 300.251294ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:35:56.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4781" for this suite. 
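Annotation: "will wait for the garbage collector to delete the pods" can be reproduced outside the framework by deleting the Job with an explicit propagation policy. One way to do it, sketched with foreground propagation, which holds the Job object until its dependent pods are gone (context-free Delete signature per this run's client-go vintage; the job name "foo" is the one in the log):

// deleteJobAndPods removes the Job and lets the garbage collector reap its
// pods before the Job itself disappears.
func deleteJobAndPods(c kubernetes.Interface, ns string) error {
	policy := metav1.DeletePropagationForeground
	return c.BatchV1().Jobs(ns).Delete("foo", &metav1.DeleteOptions{PropagationPolicy: &policy})
}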
Mar 25 13:36:02.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:36:02.727: INFO: namespace job-4781 deletion completed in 6.092976446s • [SLOW TEST:43.961 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:36:02.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 25 13:36:07.320: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fd4b9902-f101-4770-bc56-13ef850900a9" Mar 25 13:36:07.320: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fd4b9902-f101-4770-bc56-13ef850900a9" in namespace "pods-5592" to be "terminated due to deadline exceeded" Mar 25 13:36:07.351: INFO: Pod "pod-update-activedeadlineseconds-fd4b9902-f101-4770-bc56-13ef850900a9": Phase="Running", Reason="", readiness=true. Elapsed: 30.535295ms Mar 25 13:36:09.356: INFO: Pod "pod-update-activedeadlineseconds-fd4b9902-f101-4770-bc56-13ef850900a9": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.035360098s Mar 25 13:36:09.356: INFO: Pod "pod-update-activedeadlineseconds-fd4b9902-f101-4770-bc56-13ef850900a9" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:36:09.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5592" for this suite. 
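Annotation: activeDeadlineSeconds is one of the few pod-spec fields mutable on a live pod, and lowering it is what flips the pod above from Running to Failed with reason DeadlineExceeded. A sketch of that update (illustrative deadline value; context-free Get/Update signatures as in the earlier sketches):

// shortenDeadline sets a small activeDeadlineSeconds on a running pod so
// the kubelet terminates it shortly afterwards.
func shortenDeadline(c kubernetes.Interface, ns, name string) error {
	pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	deadline := int64(5) // seconds from pod start; illustrative
	pod.Spec.ActiveDeadlineSeconds = &deadline
	_, err = c.CoreV1().Pods(ns).Update(pod)
	return err
}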
Mar 25 13:36:15.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:36:15.454: INFO: namespace pods-5592 deletion completed in 6.093591444s • [SLOW TEST:12.726 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:36:15.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Mar 25 13:36:15.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6313 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 25 13:36:18.579: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0325 13:36:18.520572 836 log.go:172] (0xc0009762c0) (0xc000a66140) Create stream\nI0325 13:36:18.520626 836 log.go:172] (0xc0009762c0) (0xc000a66140) Stream added, broadcasting: 1\nI0325 13:36:18.523381 836 log.go:172] (0xc0009762c0) Reply frame received for 1\nI0325 13:36:18.523437 836 log.go:172] (0xc0009762c0) (0xc00068c960) Create stream\nI0325 13:36:18.523454 836 log.go:172] (0xc0009762c0) (0xc00068c960) Stream added, broadcasting: 3\nI0325 13:36:18.524597 836 log.go:172] (0xc0009762c0) Reply frame received for 3\nI0325 13:36:18.524632 836 log.go:172] (0xc0009762c0) (0xc00068ca00) Create stream\nI0325 13:36:18.524641 836 log.go:172] (0xc0009762c0) (0xc00068ca00) Stream added, broadcasting: 5\nI0325 13:36:18.525951 836 log.go:172] (0xc0009762c0) Reply frame received for 5\nI0325 13:36:18.526008 836 log.go:172] (0xc0009762c0) (0xc00068caa0) Create stream\nI0325 13:36:18.526029 836 log.go:172] (0xc0009762c0) (0xc00068caa0) Stream added, broadcasting: 7\nI0325 13:36:18.527308 836 log.go:172] (0xc0009762c0) Reply frame received for 7\nI0325 13:36:18.527470 836 log.go:172] (0xc00068c960) (3) Writing data frame\nI0325 13:36:18.527572 836 log.go:172] (0xc00068c960) (3) Writing data frame\nI0325 13:36:18.528382 836 log.go:172] (0xc0009762c0) Data frame received for 5\nI0325 13:36:18.528425 836 log.go:172] (0xc00068ca00) (5) Data frame handling\nI0325 13:36:18.528454 836 log.go:172] (0xc00068ca00) (5) Data frame sent\nI0325 13:36:18.528874 836 log.go:172] (0xc0009762c0) Data frame received for 5\nI0325 13:36:18.528895 836 log.go:172] (0xc00068ca00) (5) Data frame handling\nI0325 13:36:18.528913 836 log.go:172] (0xc00068ca00) (5) Data frame sent\nI0325 13:36:18.559960 836 log.go:172] (0xc0009762c0) Data frame received for 7\nI0325 13:36:18.560005 836 log.go:172] (0xc00068caa0) (7) Data frame handling\nI0325 13:36:18.560031 836 log.go:172] (0xc0009762c0) Data frame received for 5\nI0325 13:36:18.560049 836 log.go:172] (0xc00068ca00) (5) Data frame handling\nI0325 13:36:18.560463 836 log.go:172] (0xc0009762c0) Data frame received for 1\nI0325 13:36:18.560497 836 log.go:172] (0xc000a66140) (1) Data frame handling\nI0325 13:36:18.560511 836 log.go:172] (0xc000a66140) (1) Data frame sent\nI0325 13:36:18.560914 836 log.go:172] (0xc0009762c0) (0xc000a66140) Stream removed, broadcasting: 1\nI0325 13:36:18.561000 836 log.go:172] (0xc0009762c0) (0xc00068c960) Stream removed, broadcasting: 3\nI0325 13:36:18.561042 836 log.go:172] (0xc0009762c0) Go away received\nI0325 13:36:18.561103 836 log.go:172] (0xc0009762c0) (0xc000a66140) Stream removed, broadcasting: 1\nI0325 13:36:18.561389 836 log.go:172] (0xc0009762c0) (0xc00068c960) Stream removed, broadcasting: 3\nI0325 13:36:18.561413 836 log.go:172] (0xc0009762c0) (0xc00068ca00) Stream removed, broadcasting: 5\nI0325 13:36:18.561436 836 log.go:172] (0xc0009762c0) (0xc00068caa0) Stream removed, broadcasting: 7\n" Mar 25 13:36:18.580: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:36:20.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6313" for this suite. 
Mar 25 13:36:26.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:36:26.695: INFO: namespace kubectl-6313 deletion completed in 6.093136s • [SLOW TEST:11.240 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:36:26.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 25 13:36:26.743: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 13:36:26.761: INFO: Waiting for terminating namespaces to be deleted... Mar 25 13:36:26.764: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 25 13:36:26.772: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 25 13:36:26.772: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 13:36:26.772: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 25 13:36:26.772: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 13:36:26.772: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 25 13:36:26.791: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Mar 25 13:36:26.792: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 13:36:26.792: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Mar 25 13:36:26.792: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 13:36:26.792: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Mar 25 13:36:26.792: INFO: Container coredns ready: true, restart count 0 Mar 25 13:36:26.792: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Mar 25 13:36:26.792: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Mar 25 13:36:26.862: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Mar 25 13:36:26.862: INFO: Pod coredns-5d4dd4b4db-gm7vr 
requesting resource cpu=100m on Node iruya-worker2 Mar 25 13:36:26.862: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker Mar 25 13:36:26.862: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Mar 25 13:36:26.862: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Mar 25 13:36:26.862: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-7f1021df-68c2-4680-9b6e-d23527df1204.15ff8f830d3f08fe], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3195/filler-pod-7f1021df-68c2-4680-9b6e-d23527df1204 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-7f1021df-68c2-4680-9b6e-d23527df1204.15ff8f838a77f523], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7f1021df-68c2-4680-9b6e-d23527df1204.15ff8f83b9954207], Reason = [Created], Message = [Created container filler-pod-7f1021df-68c2-4680-9b6e-d23527df1204] STEP: Considering event: Type = [Normal], Name = [filler-pod-7f1021df-68c2-4680-9b6e-d23527df1204.15ff8f83c7f35beb], Reason = [Started], Message = [Started container filler-pod-7f1021df-68c2-4680-9b6e-d23527df1204] STEP: Considering event: Type = [Normal], Name = [filler-pod-e1aa68d7-ae27-426d-a6ff-df10bd996f5f.15ff8f830ce2609c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3195/filler-pod-e1aa68d7-ae27-426d-a6ff-df10bd996f5f to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e1aa68d7-ae27-426d-a6ff-df10bd996f5f.15ff8f835795a69b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e1aa68d7-ae27-426d-a6ff-df10bd996f5f.15ff8f838f2061c6], Reason = [Created], Message = [Created container filler-pod-e1aa68d7-ae27-426d-a6ff-df10bd996f5f] STEP: Considering event: Type = [Normal], Name = [filler-pod-e1aa68d7-ae27-426d-a6ff-df10bd996f5f.15ff8f83a99e4573], Reason = [Started], Message = [Started container filler-pod-e1aa68d7-ae27-426d-a6ff-df10bd996f5f] STEP: Considering event: Type = [Warning], Name = [additional-pod.15ff8f83fcc4de5d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:36:31.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3195" for this suite. 
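The FailedScheduling event above ("2 Insufficient cpu") is driven entirely by container CPU requests: the filler pods are sized so that the additional pod's request cannot fit on any schedulable node. A hedged sketch of how such a request is expressed on a pod spec — the name and the cpu figure are illustrative, not taken from this run:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // fillerPod sketches a pause pod that pins down a fixed amount of node CPU.
    func fillerPod(cpu string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "filler",
                    Image: "k8s.gcr.io/pause:3.1", // same image the events above show
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceCPU: resource.MustParse(cpu), // e.g. "500m"
                        },
                    },
                }},
            },
        }
    }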
Mar 25 13:36:37.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:36:38.060: INFO: namespace sched-pred-3195 deletion completed in 6.090861344s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:11.364 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:36:38.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-3a680c4e-45ed-4317-9079-726f764647a6 STEP: Creating a pod to test consume configMaps Mar 25 13:36:38.126: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f7eccd1a-c4ed-4edd-ad54-84aec2ae1345" in namespace "projected-3554" to be "success or failure" Mar 25 13:36:38.130: INFO: Pod "pod-projected-configmaps-f7eccd1a-c4ed-4edd-ad54-84aec2ae1345": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142484ms Mar 25 13:36:40.144: INFO: Pod "pod-projected-configmaps-f7eccd1a-c4ed-4edd-ad54-84aec2ae1345": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018153364s Mar 25 13:36:42.153: INFO: Pod "pod-projected-configmaps-f7eccd1a-c4ed-4edd-ad54-84aec2ae1345": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027373629s STEP: Saw pod success Mar 25 13:36:42.153: INFO: Pod "pod-projected-configmaps-f7eccd1a-c4ed-4edd-ad54-84aec2ae1345" satisfied condition "success or failure" Mar 25 13:36:42.156: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-f7eccd1a-c4ed-4edd-ad54-84aec2ae1345 container projected-configmap-volume-test: STEP: delete the pod Mar 25 13:36:42.192: INFO: Waiting for pod pod-projected-configmaps-f7eccd1a-c4ed-4edd-ad54-84aec2ae1345 to disappear Mar 25 13:36:42.206: INFO: Pod pod-projected-configmaps-f7eccd1a-c4ed-4edd-ad54-84aec2ae1345 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:36:42.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3554" for this suite. 
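For orientation, a hedged sketch of the projected-ConfigMap volume shape this test exercises: one key mapped to a new path with an explicit per-item file mode. The key, path, and 0400 mode are assumptions for illustration, not values from this run:

    package main

    import corev1 "k8s.io/api/core/v1"

    func int32Ptr(i int32) *int32 { return &i }

    // projectedConfigMapVolume sketches a projected volume that maps one
    // ConfigMap key to a new path and sets an explicit per-item file mode.
    func projectedConfigMapVolume() corev1.Volume {
        return corev1.Volume{
            Name: "projected-configmap-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "projected-configmap-test-volume-map-3a680c4e-45ed-4317-9079-726f764647a6",
                            },
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",         // assumed key name
                                Path: "path/to/data-2", // assumed mapped path
                                Mode: int32Ptr(0400),   // the "Item mode" under test
                            }},
                        },
                    }},
                },
            },
        }
    }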
Mar 25 13:36:48.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:36:48.306: INFO: namespace projected-3554 deletion completed in 6.097116539s • [SLOW TEST:10.246 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:36:48.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:36:52.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1807" for this suite. 
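The kubelet test above passes only once the container's state is Terminated with a populated Reason. A hedged sketch of that status check — an illustrative helper, not the test's source; pre-1.17 Get signature; with a command that always fails, the reason reported is conventionally "Error":

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // terminatedReason returns the termination reason of a pod's first
    // container, mirroring (by assumption) the condition the test polls for.
    func terminatedReason(cs *kubernetes.Clientset, ns, name string) (string, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{}) // pre-1.17 signature
        if err != nil {
            return "", err
        }
        if len(pod.Status.ContainerStatuses) == 0 {
            return "", fmt.Errorf("no container statuses yet")
        }
        term := pod.Status.ContainerStatuses[0].State.Terminated
        if term == nil {
            return "", fmt.Errorf("container not terminated yet")
        }
        return term.Reason, nil
    }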
Mar 25 13:36:58.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:36:58.523: INFO: namespace kubelet-test-1807 deletion completed in 6.099018349s • [SLOW TEST:10.216 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:36:58.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Mar 25 13:36:58.592: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3682" to be "success or failure" Mar 25 13:36:58.602: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.995412ms Mar 25 13:37:00.605: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012749858s Mar 25 13:37:02.610: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017722801s Mar 25 13:37:04.615: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02217581s STEP: Saw pod success Mar 25 13:37:04.615: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 25 13:37:04.618: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 25 13:37:04.641: INFO: Waiting for pod pod-host-path-test to disappear Mar 25 13:37:04.645: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:37:04.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3682" for this suite. 
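A hedged sketch of the hostPath volume shape pod-host-path-test mounts: a directory on the node's filesystem exposed into the pod. The /tmp path and DirectoryOrCreate type are assumptions for illustration:

    package main

    import corev1 "k8s.io/api/core/v1"

    // hostPathVolume sketches the volume under test; the mode check in the
    // log above verifies the permissions the kubelet gives the mount point.
    func hostPathVolume() corev1.Volume {
        hostPathType := corev1.HostPathDirectoryOrCreate
        return corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{
                    Path: "/tmp",
                    Type: &hostPathType,
                },
            },
        }
    }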
Mar 25 13:37:10.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:37:10.755: INFO: namespace hostpath-3682 deletion completed in 6.107226803s • [SLOW TEST:12.232 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:37:10.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2873 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 25 13:37:10.837: INFO: Found 0 stateful pods, waiting for 3 Mar 25 13:37:20.842: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 25 13:37:20.842: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 25 13:37:20.843: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 25 13:37:20.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2873 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 25 13:37:21.110: INFO: stderr: "I0325 13:37:20.979362 860 log.go:172] (0xc000a026e0) (0xc00066cb40) Create stream\nI0325 13:37:20.979423 860 log.go:172] (0xc000a026e0) (0xc00066cb40) Stream added, broadcasting: 1\nI0325 13:37:20.983275 860 log.go:172] (0xc000a026e0) Reply frame received for 1\nI0325 13:37:20.983323 860 log.go:172] (0xc000a026e0) (0xc00066c280) Create stream\nI0325 13:37:20.983339 860 log.go:172] (0xc000a026e0) (0xc00066c280) Stream added, broadcasting: 3\nI0325 13:37:20.984277 860 log.go:172] (0xc000a026e0) Reply frame received for 3\nI0325 13:37:20.984325 860 log.go:172] (0xc000a026e0) (0xc00066c320) Create stream\nI0325 13:37:20.984340 860 log.go:172] (0xc000a026e0) (0xc00066c320) Stream added, broadcasting: 5\nI0325 13:37:20.985310 860 log.go:172] (0xc000a026e0) Reply frame received for 5\nI0325 13:37:21.077446 860 log.go:172] (0xc000a026e0) Data frame received for 5\nI0325 13:37:21.077476 860 log.go:172] (0xc00066c320) (5) Data frame handling\nI0325 13:37:21.077490 860 log.go:172] (0xc00066c320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0325 13:37:21.102146 860 log.go:172] (0xc000a026e0) Data frame received for 
3\nI0325 13:37:21.102196 860 log.go:172] (0xc00066c280) (3) Data frame handling\nI0325 13:37:21.102299 860 log.go:172] (0xc00066c280) (3) Data frame sent\nI0325 13:37:21.102426 860 log.go:172] (0xc000a026e0) Data frame received for 3\nI0325 13:37:21.102446 860 log.go:172] (0xc00066c280) (3) Data frame handling\nI0325 13:37:21.102506 860 log.go:172] (0xc000a026e0) Data frame received for 5\nI0325 13:37:21.102559 860 log.go:172] (0xc00066c320) (5) Data frame handling\nI0325 13:37:21.105408 860 log.go:172] (0xc000a026e0) Data frame received for 1\nI0325 13:37:21.105444 860 log.go:172] (0xc00066cb40) (1) Data frame handling\nI0325 13:37:21.105475 860 log.go:172] (0xc00066cb40) (1) Data frame sent\nI0325 13:37:21.105502 860 log.go:172] (0xc000a026e0) (0xc00066cb40) Stream removed, broadcasting: 1\nI0325 13:37:21.105788 860 log.go:172] (0xc000a026e0) Go away received\nI0325 13:37:21.106003 860 log.go:172] (0xc000a026e0) (0xc00066cb40) Stream removed, broadcasting: 1\nI0325 13:37:21.106041 860 log.go:172] (0xc000a026e0) (0xc00066c280) Stream removed, broadcasting: 3\nI0325 13:37:21.106059 860 log.go:172] (0xc000a026e0) (0xc00066c320) Stream removed, broadcasting: 5\n" Mar 25 13:37:21.110: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 25 13:37:21.110: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 25 13:37:31.142: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 25 13:37:41.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2873 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 25 13:37:41.433: INFO: stderr: "I0325 13:37:41.324214 880 log.go:172] (0xc00021e420) (0xc000202820) Create stream\nI0325 13:37:41.324287 880 log.go:172] (0xc00021e420) (0xc000202820) Stream added, broadcasting: 1\nI0325 13:37:41.330084 880 log.go:172] (0xc00021e420) Reply frame received for 1\nI0325 13:37:41.330144 880 log.go:172] (0xc00021e420) (0xc000202000) Create stream\nI0325 13:37:41.330159 880 log.go:172] (0xc00021e420) (0xc000202000) Stream added, broadcasting: 3\nI0325 13:37:41.331019 880 log.go:172] (0xc00021e420) Reply frame received for 3\nI0325 13:37:41.331047 880 log.go:172] (0xc00021e420) (0xc00039e140) Create stream\nI0325 13:37:41.331055 880 log.go:172] (0xc00021e420) (0xc00039e140) Stream added, broadcasting: 5\nI0325 13:37:41.332111 880 log.go:172] (0xc00021e420) Reply frame received for 5\nI0325 13:37:41.427633 880 log.go:172] (0xc00021e420) Data frame received for 5\nI0325 13:37:41.427661 880 log.go:172] (0xc00039e140) (5) Data frame handling\nI0325 13:37:41.427669 880 log.go:172] (0xc00039e140) (5) Data frame sent\nI0325 13:37:41.427675 880 log.go:172] (0xc00021e420) Data frame received for 5\nI0325 13:37:41.427681 880 log.go:172] (0xc00039e140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0325 13:37:41.427700 880 log.go:172] (0xc00021e420) Data frame received for 3\nI0325 13:37:41.427716 880 log.go:172] (0xc000202000) (3) Data frame handling\nI0325 13:37:41.427731 880 log.go:172] (0xc000202000) (3) Data frame sent\nI0325 13:37:41.427740 880 log.go:172] (0xc00021e420) Data frame received for 3\nI0325 13:37:41.427748 880 log.go:172] (0xc000202000) (3) Data frame handling\nI0325 
13:37:41.428746 880 log.go:172] (0xc00021e420) Data frame received for 1\nI0325 13:37:41.428762 880 log.go:172] (0xc000202820) (1) Data frame handling\nI0325 13:37:41.428774 880 log.go:172] (0xc000202820) (1) Data frame sent\nI0325 13:37:41.428784 880 log.go:172] (0xc00021e420) (0xc000202820) Stream removed, broadcasting: 1\nI0325 13:37:41.428978 880 log.go:172] (0xc00021e420) Go away received\nI0325 13:37:41.429352 880 log.go:172] (0xc00021e420) (0xc000202820) Stream removed, broadcasting: 1\nI0325 13:37:41.429371 880 log.go:172] (0xc00021e420) (0xc000202000) Stream removed, broadcasting: 3\nI0325 13:37:41.429378 880 log.go:172] (0xc00021e420) (0xc00039e140) Stream removed, broadcasting: 5\n" Mar 25 13:37:41.433: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 25 13:37:41.433: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 25 13:38:01.452: INFO: Waiting for StatefulSet statefulset-2873/ss2 to complete update Mar 25 13:38:01.452: INFO: Waiting for Pod statefulset-2873/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Mar 25 13:38:11.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2873 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 25 13:38:11.711: INFO: stderr: "I0325 13:38:11.592503 901 log.go:172] (0xc00099c420) (0xc0003c06e0) Create stream\nI0325 13:38:11.592568 901 log.go:172] (0xc00099c420) (0xc0003c06e0) Stream added, broadcasting: 1\nI0325 13:38:11.594566 901 log.go:172] (0xc00099c420) Reply frame received for 1\nI0325 13:38:11.594611 901 log.go:172] (0xc00099c420) (0xc000830000) Create stream\nI0325 13:38:11.594626 901 log.go:172] (0xc00099c420) (0xc000830000) Stream added, broadcasting: 3\nI0325 13:38:11.595507 901 log.go:172] (0xc00099c420) Reply frame received for 3\nI0325 13:38:11.595553 901 log.go:172] (0xc00099c420) (0xc000810000) Create stream\nI0325 13:38:11.595564 901 log.go:172] (0xc00099c420) (0xc000810000) Stream added, broadcasting: 5\nI0325 13:38:11.596203 901 log.go:172] (0xc00099c420) Reply frame received for 5\nI0325 13:38:11.679485 901 log.go:172] (0xc00099c420) Data frame received for 5\nI0325 13:38:11.679518 901 log.go:172] (0xc000810000) (5) Data frame handling\nI0325 13:38:11.679549 901 log.go:172] (0xc000810000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0325 13:38:11.703037 901 log.go:172] (0xc00099c420) Data frame received for 3\nI0325 13:38:11.703072 901 log.go:172] (0xc000830000) (3) Data frame handling\nI0325 13:38:11.703125 901 log.go:172] (0xc000830000) (3) Data frame sent\nI0325 13:38:11.703433 901 log.go:172] (0xc00099c420) Data frame received for 3\nI0325 13:38:11.703459 901 log.go:172] (0xc000830000) (3) Data frame handling\nI0325 13:38:11.703484 901 log.go:172] (0xc00099c420) Data frame received for 5\nI0325 13:38:11.703547 901 log.go:172] (0xc000810000) (5) Data frame handling\nI0325 13:38:11.706271 901 log.go:172] (0xc00099c420) Data frame received for 1\nI0325 13:38:11.706298 901 log.go:172] (0xc0003c06e0) (1) Data frame handling\nI0325 13:38:11.706310 901 log.go:172] (0xc0003c06e0) (1) Data frame sent\nI0325 13:38:11.706335 901 log.go:172] (0xc00099c420) (0xc0003c06e0) Stream removed, broadcasting: 1\nI0325 13:38:11.706386 901 log.go:172] (0xc00099c420) Go away received\nI0325 13:38:11.706775 901 log.go:172] (0xc00099c420) (0xc0003c06e0) Stream 
removed, broadcasting: 1\nI0325 13:38:11.706806 901 log.go:172] (0xc00099c420) (0xc000830000) Stream removed, broadcasting: 3\nI0325 13:38:11.706818 901 log.go:172] (0xc00099c420) (0xc000810000) Stream removed, broadcasting: 5\n" Mar 25 13:38:11.711: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 25 13:38:11.711: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 25 13:38:21.754: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 25 13:38:31.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2873 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 25 13:38:31.994: INFO: stderr: "I0325 13:38:31.919200 921 log.go:172] (0xc0001166e0) (0xc000398820) Create stream\nI0325 13:38:31.919258 921 log.go:172] (0xc0001166e0) (0xc000398820) Stream added, broadcasting: 1\nI0325 13:38:31.921772 921 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0325 13:38:31.921839 921 log.go:172] (0xc0001166e0) (0xc0009f2000) Create stream\nI0325 13:38:31.921867 921 log.go:172] (0xc0001166e0) (0xc0009f2000) Stream added, broadcasting: 3\nI0325 13:38:31.922966 921 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0325 13:38:31.923003 921 log.go:172] (0xc0001166e0) (0xc0006c41e0) Create stream\nI0325 13:38:31.923013 921 log.go:172] (0xc0001166e0) (0xc0006c41e0) Stream added, broadcasting: 5\nI0325 13:38:31.923797 921 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0325 13:38:31.987971 921 log.go:172] (0xc0001166e0) Data frame received for 5\nI0325 13:38:31.987992 921 log.go:172] (0xc0006c41e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0325 13:38:31.988025 921 log.go:172] (0xc0001166e0) Data frame received for 3\nI0325 13:38:31.988061 921 log.go:172] (0xc0009f2000) (3) Data frame handling\nI0325 13:38:31.988077 921 log.go:172] (0xc0009f2000) (3) Data frame sent\nI0325 13:38:31.988090 921 log.go:172] (0xc0001166e0) Data frame received for 3\nI0325 13:38:31.988101 921 log.go:172] (0xc0009f2000) (3) Data frame handling\nI0325 13:38:31.988149 921 log.go:172] (0xc0006c41e0) (5) Data frame sent\nI0325 13:38:31.988162 921 log.go:172] (0xc0001166e0) Data frame received for 5\nI0325 13:38:31.988172 921 log.go:172] (0xc0006c41e0) (5) Data frame handling\nI0325 13:38:31.990213 921 log.go:172] (0xc0001166e0) Data frame received for 1\nI0325 13:38:31.990242 921 log.go:172] (0xc000398820) (1) Data frame handling\nI0325 13:38:31.990256 921 log.go:172] (0xc000398820) (1) Data frame sent\nI0325 13:38:31.990272 921 log.go:172] (0xc0001166e0) (0xc000398820) Stream removed, broadcasting: 1\nI0325 13:38:31.990288 921 log.go:172] (0xc0001166e0) Go away received\nI0325 13:38:31.990693 921 log.go:172] (0xc0001166e0) (0xc000398820) Stream removed, broadcasting: 1\nI0325 13:38:31.990722 921 log.go:172] (0xc0001166e0) (0xc0009f2000) Stream removed, broadcasting: 3\nI0325 13:38:31.990734 921 log.go:172] (0xc0001166e0) (0xc0006c41e0) Stream removed, broadcasting: 5\n" Mar 25 13:38:31.995: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 25 13:38:31.995: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 25 13:38:42.017: INFO: Waiting for StatefulSet statefulset-2873/ss2 to complete update Mar 25 13:38:42.017: INFO: Waiting for Pod 
statefulset-2873/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 25 13:38:42.017: INFO: Waiting for Pod statefulset-2873/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 25 13:38:52.026: INFO: Waiting for StatefulSet statefulset-2873/ss2 to complete update Mar 25 13:38:52.026: INFO: Waiting for Pod statefulset-2873/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 25 13:38:52.026: INFO: Waiting for Pod statefulset-2873/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 25 13:39:02.025: INFO: Waiting for StatefulSet statefulset-2873/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 25 13:39:12.026: INFO: Deleting all statefulset in ns statefulset-2873 Mar 25 13:39:12.029: INFO: Scaling statefulset ss2 to 0 Mar 25 13:39:42.047: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 13:39:42.050: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:39:42.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2873" for this suite. Mar 25 13:39:48.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:39:48.186: INFO: namespace statefulset-2873 deletion completed in 6.121558245s • [SLOW TEST:157.431 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:39:48.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 13:39:48.282: INFO: Waiting up to 5m0s for pod "downwardapi-volume-560c273b-31b4-4ed0-a3cd-f991b5826113" in namespace "projected-1382" to be "success or failure" Mar 25 13:39:48.295: INFO: Pod "downwardapi-volume-560c273b-31b4-4ed0-a3cd-f991b5826113": Phase="Pending", Reason="", readiness=false. 
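The rolling update and rollback logged above reduce to one operation: mutate the pod template's image, then wait for status.currentRevision to catch up with status.updateRevision (ss2-6c5cd755cd vs ss2-7c9b54fd4c in this run). A hedged client-go sketch, pre-1.17 signatures, not the test's source:

    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // updateImage triggers a rolling update like the one above; rolling back
    // is the same call with the previous image (1.14-alpine here).
    func updateImage(cs *kubernetes.Clientset, ns, name, image string) (*appsv1.StatefulSet, error) {
        ss, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{}) // pre-1.17 signature
        if err != nil {
            return nil, err
        }
        ss.Spec.Template.Spec.Containers[0].Image = image
        return cs.AppsV1().StatefulSets(ns).Update(ss)
    }

    // rolledOut reports whether the update is complete: all replicas updated
    // and the current revision equal to the update revision.
    func rolledOut(ss *appsv1.StatefulSet) bool {
        return ss.Status.UpdatedReplicas == *ss.Spec.Replicas &&
            ss.Status.CurrentRevision == ss.Status.UpdateRevision
    }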
Elapsed: 12.230122ms Mar 25 13:39:50.299: INFO: Pod "downwardapi-volume-560c273b-31b4-4ed0-a3cd-f991b5826113": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016175029s Mar 25 13:39:52.302: INFO: Pod "downwardapi-volume-560c273b-31b4-4ed0-a3cd-f991b5826113": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019170671s STEP: Saw pod success Mar 25 13:39:52.302: INFO: Pod "downwardapi-volume-560c273b-31b4-4ed0-a3cd-f991b5826113" satisfied condition "success or failure" Mar 25 13:39:52.304: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-560c273b-31b4-4ed0-a3cd-f991b5826113 container client-container: STEP: delete the pod Mar 25 13:39:52.356: INFO: Waiting for pod downwardapi-volume-560c273b-31b4-4ed0-a3cd-f991b5826113 to disappear Mar 25 13:39:52.360: INFO: Pod downwardapi-volume-560c273b-31b4-4ed0-a3cd-f991b5826113 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:39:52.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1382" for this suite. Mar 25 13:39:58.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:39:58.460: INFO: namespace projected-1382 deletion completed in 6.09657701s • [SLOW TEST:10.273 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:39:58.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-a803f816-030b-4474-99eb-fd6779ce481d STEP: Creating a pod to test consume configMaps Mar 25 13:39:58.528: INFO: Waiting up to 5m0s for pod "pod-configmaps-f721170e-b585-4c8c-8176-d2b6a195af36" in namespace "configmap-6510" to be "success or failure" Mar 25 13:39:58.541: INFO: Pod "pod-configmaps-f721170e-b585-4c8c-8176-d2b6a195af36": Phase="Pending", Reason="", readiness=false. Elapsed: 12.761346ms Mar 25 13:40:00.545: INFO: Pod "pod-configmaps-f721170e-b585-4c8c-8176-d2b6a195af36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016967886s Mar 25 13:40:02.549: INFO: Pod "pod-configmaps-f721170e-b585-4c8c-8176-d2b6a195af36": Phase="Succeeded", Reason="", readiness=false. 
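A hedged sketch of the downward API volume item this test builds: one pod field written to a file with an explicit per-item mode. The metadata.name field, the path, and 0400 are illustrative assumptions:

    package main

    import corev1 "k8s.io/api/core/v1"

    func int32Ptr(i int32) *int32 { return &i }

    // downwardAPIVolume sketches a volume that writes one pod field to a
    // file with the per-item mode the test verifies from inside the pod.
    func downwardAPIVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path: "podname",
                        FieldRef: &corev1.ObjectFieldSelector{
                            APIVersion: "v1",
                            FieldPath:  "metadata.name",
                        },
                        Mode: int32Ptr(0400), // the mode being asserted
                    }},
                },
            },
        }
    }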
Elapsed: 4.021165516s STEP: Saw pod success Mar 25 13:40:02.549: INFO: Pod "pod-configmaps-f721170e-b585-4c8c-8176-d2b6a195af36" satisfied condition "success or failure" Mar 25 13:40:02.552: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-f721170e-b585-4c8c-8176-d2b6a195af36 container configmap-volume-test: STEP: delete the pod Mar 25 13:40:02.571: INFO: Waiting for pod pod-configmaps-f721170e-b585-4c8c-8176-d2b6a195af36 to disappear Mar 25 13:40:02.581: INFO: Pod pod-configmaps-f721170e-b585-4c8c-8176-d2b6a195af36 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:40:02.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6510" for this suite. Mar 25 13:40:08.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:40:08.718: INFO: namespace configmap-6510 deletion completed in 6.133969792s • [SLOW TEST:10.257 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:40:08.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 25 13:40:08.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5318' Mar 25 13:40:08.876: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 25 13:40:08.876: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Mar 25 13:40:08.915: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-4fvll] Mar 25 13:40:08.915: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-4fvll" in namespace "kubectl-5318" to be "running and ready" Mar 25 13:40:08.936: INFO: Pod "e2e-test-nginx-rc-4fvll": Phase="Pending", Reason="", readiness=false. 
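Unlike the per-item mode in the projected-volume test earlier, this test sets a volume-wide defaultMode that every projected file inherits. A hedged sketch; the 0400 value is an assumption for illustration:

    package main

    import corev1 "k8s.io/api/core/v1"

    func int32Ptr(i int32) *int32 { return &i }

    // configMapVolumeWithDefaultMode sketches the volume-wide variant: all
    // files projected from the ConfigMap get DefaultMode unless overridden.
    func configMapVolumeWithDefaultMode() corev1.Volume {
        return corev1.Volume{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "configmap-test-volume-a803f816-030b-4474-99eb-fd6779ce481d",
                    },
                    DefaultMode: int32Ptr(0400),
                },
            },
        }
    }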
Elapsed: 20.785597ms Mar 25 13:40:10.940: INFO: Pod "e2e-test-nginx-rc-4fvll": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025174203s Mar 25 13:40:12.944: INFO: Pod "e2e-test-nginx-rc-4fvll": Phase="Running", Reason="", readiness=true. Elapsed: 4.029568422s Mar 25 13:40:12.944: INFO: Pod "e2e-test-nginx-rc-4fvll" satisfied condition "running and ready" Mar 25 13:40:12.944: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-4fvll] Mar 25 13:40:12.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5318' Mar 25 13:40:13.066: INFO: stderr: "" Mar 25 13:40:13.066: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Mar 25 13:40:13.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5318' Mar 25 13:40:13.160: INFO: stderr: "" Mar 25 13:40:13.160: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:40:13.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5318" for this suite. Mar 25 13:40:19.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:40:19.259: INFO: namespace kubectl-5318 deletion completed in 6.095876314s • [SLOW TEST:10.540 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:40:19.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
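For orientation before the [It] steps that follow: the pod-with-prestop-http-hook pod deleted below carries a preStop HTTPGet hook pointed at the handler container just created, and the "check prestop hook" step asserts the handler received the request. A hedged sketch of such a container spec — host, port, and path are assumptions; corev1.Handler is the 1.15-era type name (later renamed LifecycleHandler):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // prestopContainer sketches a container whose preStop hook GETs the
    // handler pod; the kubelet fires the hook during pod deletion, which is
    // why the pod lingers briefly in the "still exists" polls below.
    func prestopContainer(handlerIP string) corev1.Container {
        return corev1.Container{
            Name:  "pod-with-prestop-http-hook",
            Image: "k8s.gcr.io/pause:3.1", // illustrative image
            Lifecycle: &corev1.Lifecycle{
                PreStop: &corev1.Handler{
                    HTTPGet: &corev1.HTTPGetAction{
                        Path: "/echo?msg=prestop",
                        Host: handlerIP,
                        Port: intstr.FromInt(8080),
                    },
                },
            },
        }
    }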
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 25 13:40:27.368: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 13:40:27.373: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 13:40:29.373: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 13:40:29.376: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 13:40:31.373: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 13:40:31.377: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:40:31.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2680" for this suite. Mar 25 13:40:53.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:40:53.474: INFO: namespace container-lifecycle-hook-2680 deletion completed in 22.08511574s • [SLOW TEST:34.215 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:40:53.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 13:40:53.576: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.363763ms)
Mar 25 13:40:53.579: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.040667ms)
Mar 25 13:40:53.582: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.328146ms)
Mar 25 13:40:53.586: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.599464ms)
Mar 25 13:40:53.589: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.422896ms)
Mar 25 13:40:53.593: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.687687ms)
Mar 25 13:40:53.597: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 4.03359ms)
Mar 25 13:40:53.601: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.918305ms)
Mar 25 13:40:53.605: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.612952ms)
Mar 25 13:40:53.608: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.570853ms)
Mar 25 13:40:53.612: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.47648ms)
Mar 25 13:40:53.616: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.830362ms)
Mar 25 13:40:53.655: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 39.34521ms)
Mar 25 13:40:53.658: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.290313ms)
Mar 25 13:40:53.662: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.000284ms)
Mar 25 13:40:53.665: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.4475ms)
Mar 25 13:40:53.668: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.1961ms)
Mar 25 13:40:53.671: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.904018ms)
Mar 25 13:40:53.674: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.160494ms)
Mar 25 13:40:53.677: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 3.055349ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:40:53.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9326" for this suite. Mar 25 13:40:59.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:40:59.771: INFO: namespace proxy-9326 deletion completed in 6.090436331s • [SLOW TEST:6.296 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:40:59.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 25 13:40:59.842: INFO: Waiting up to 5m0s for pod "pod-2345392a-442b-4789-b275-2053ca513c5d" in namespace "emptydir-8979" to be "success or failure" Mar 25 13:40:59.846: INFO: Pod "pod-2345392a-442b-4789-b275-2053ca513c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010344ms Mar 25 13:41:01.850: INFO: Pod "pod-2345392a-442b-4789-b275-2053ca513c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00807056s Mar 25 13:41:03.855: INFO: Pod "pod-2345392a-442b-4789-b275-2053ca513c5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012808136s STEP: Saw pod success Mar 25 13:41:03.855: INFO: Pod "pod-2345392a-442b-4789-b275-2053ca513c5d" satisfied condition "success or failure" Mar 25 13:41:03.858: INFO: Trying to get logs from node iruya-worker pod pod-2345392a-442b-4789-b275-2053ca513c5d container test-container: STEP: delete the pod Mar 25 13:41:03.933: INFO: Waiting for pod pod-2345392a-442b-4789-b275-2053ca513c5d to disappear Mar 25 13:41:03.942: INFO: Pod pod-2345392a-442b-4789-b275-2053ca513c5d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:41:03.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8979" for this suite. 
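The (non-root,0644,default) triple in the test name above encodes the security context, the file mode written, and the emptyDir medium. A hedged sketch of that pod shape — the UID, image, and mount path are illustrative assumptions:

    package main

    import corev1 "k8s.io/api/core/v1"

    func int64Ptr(i int64) *int64 { return &i }

    // emptyDirPodSpec sketches the (non-root,0644,default) case: run as a
    // non-root UID and write a 0644 file into an emptyDir backed by the
    // node's default medium (disk, as opposed to Memory/tmpfs).
    func emptyDirPodSpec() corev1.PodSpec {
        return corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: int64Ptr(1001), // assumed non-root UID
            },
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "docker.io/library/busybox:1.29", // illustrative
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test-volume",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{
                        Medium: corev1.StorageMediumDefault, // the "default" in the name
                    },
                },
            }},
        }
    }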
Mar 25 13:41:09.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:41:10.106: INFO: namespace emptydir-8979 deletion completed in 6.161673076s • [SLOW TEST:10.335 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:41:10.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:41:36.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1822" for this suite. Mar 25 13:41:42.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:41:42.413: INFO: namespace namespaces-1822 deletion completed in 6.095987174s STEP: Destroying namespace "nsdeletetest-9875" for this suite. Mar 25 13:41:42.415: INFO: Namespace nsdeletetest-9875 was already deleted STEP: Destroying namespace "nsdeletetest-4158" for this suite. 
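The namespace test above hinges on one behavior: deleting a namespace terminates everything inside it, so recreating the name yields an empty pod list. A hedged sketch of the "namespace fully removed" check the test effectively polls — an illustrative helper, pre-1.17 client-go signatures:

    package main

    import (
        apierrs "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // namespaceGone reports whether a deleted namespace has finished
    // terminating; once true, the test recreates it and asserts no pods.
    func namespaceGone(cs *kubernetes.Clientset, name string) (bool, error) {
        _, err := cs.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
        if apierrs.IsNotFound(err) {
            return true, nil // fully removed, pods included
        }
        return false, err // still terminating (err == nil) or a real error
    }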
Mar 25 13:41:48.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:41:48.523: INFO: namespace nsdeletetest-4158 deletion completed in 6.107224428s • [SLOW TEST:38.416 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:41:48.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-h9k9b in namespace proxy-4930 I0325 13:41:48.680881 6 runners.go:180] Created replication controller with name: proxy-service-h9k9b, namespace: proxy-4930, replica count: 1 I0325 13:41:49.731314 6 runners.go:180] proxy-service-h9k9b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 13:41:50.731541 6 runners.go:180] proxy-service-h9k9b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 13:41:51.731791 6 runners.go:180] proxy-service-h9k9b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 13:41:52.732080 6 runners.go:180] proxy-service-h9k9b Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 13:41:52.735: INFO: setup took 4.185717987s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 25 13:41:52.742: INFO: (0) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 6.806113ms) Mar 25 13:41:52.743: INFO: (0) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 6.968248ms) Mar 25 13:41:52.743: INFO: (0) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:1080/proxy/: ... (200; 6.940481ms) Mar 25 13:41:52.743: INFO: (0) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 7.09675ms) Mar 25 13:41:52.743: INFO: (0) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:1080/proxy/: test<... 
(200; 7.503934ms) Mar 25 13:41:52.744: INFO: (0) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname2/proxy/: bar (200; 8.046959ms) Mar 25 13:41:52.744: INFO: (0) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 7.94107ms) Mar 25 13:41:52.744: INFO: (0) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname1/proxy/: foo (200; 7.930011ms) Mar 25 13:41:52.744: INFO: (0) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 8.146129ms) Mar 25 13:41:52.744: INFO: (0) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 8.491143ms) Mar 25 13:41:52.744: INFO: (0) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname1/proxy/: foo (200; 8.517918ms) Mar 25 13:41:52.748: INFO: (0) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 12.765935ms) Mar 25 13:41:52.750: INFO: (0) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname2/proxy/: tls qux (200; 14.529156ms) Mar 25 13:41:52.750: INFO: (0) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: ... (200; 3.890242ms) Mar 25 13:41:52.757: INFO: (1) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.090556ms) Mar 25 13:41:52.757: INFO: (1) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 4.015274ms) Mar 25 13:41:52.757: INFO: (1) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:1080/proxy/: test<... (200; 4.187852ms) Mar 25 13:41:52.757: INFO: (1) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 4.168085ms) Mar 25 13:41:52.757: INFO: (1) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 4.331925ms) Mar 25 13:41:52.758: INFO: (1) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: test<... (200; 4.331124ms) Mar 25 13:41:52.762: INFO: (2) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:460/proxy/: tls baz (200; 4.264999ms) Mar 25 13:41:52.763: INFO: (2) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 4.489624ms) Mar 25 13:41:52.763: INFO: (2) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname2/proxy/: bar (200; 4.560275ms) Mar 25 13:41:52.763: INFO: (2) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:1080/proxy/: ... (200; 4.639551ms) Mar 25 13:41:52.763: INFO: (2) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 5.121501ms) Mar 25 13:41:52.763: INFO: (2) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 5.054108ms) Mar 25 13:41:52.764: INFO: (2) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 5.576748ms) Mar 25 13:41:52.764: INFO: (2) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: ... 
(200; 2.500703ms) Mar 25 13:41:52.768: INFO: (3) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:460/proxy/: tls baz (200; 2.71833ms) Mar 25 13:41:52.769: INFO: (3) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 3.165298ms) Mar 25 13:41:52.770: INFO: (3) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 3.811194ms) Mar 25 13:41:52.770: INFO: (3) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 4.037536ms) Mar 25 13:41:52.770: INFO: (3) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.067917ms) Mar 25 13:41:52.770: INFO: (3) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname1/proxy/: foo (200; 4.128728ms) Mar 25 13:41:52.770: INFO: (3) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: test<... (200; 4.183ms) Mar 25 13:41:52.770: INFO: (3) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.619054ms) Mar 25 13:41:52.770: INFO: (3) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 4.710143ms) Mar 25 13:41:52.771: INFO: (3) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname2/proxy/: bar (200; 5.494958ms) Mar 25 13:41:52.771: INFO: (3) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname1/proxy/: foo (200; 5.457034ms) Mar 25 13:41:52.771: INFO: (3) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname2/proxy/: tls qux (200; 5.640658ms) Mar 25 13:41:52.771: INFO: (3) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname1/proxy/: tls baz (200; 5.755879ms) Mar 25 13:41:52.775: INFO: (4) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 3.525568ms) Mar 25 13:41:52.775: INFO: (4) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 3.847222ms) Mar 25 13:41:52.775: INFO: (4) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:460/proxy/: tls baz (200; 3.922284ms) Mar 25 13:41:52.776: INFO: (4) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname1/proxy/: foo (200; 4.236717ms) Mar 25 13:41:52.776: INFO: (4) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.191908ms) Mar 25 13:41:52.776: INFO: (4) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 4.18713ms) Mar 25 13:41:52.776: INFO: (4) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:1080/proxy/: ... (200; 4.286339ms) Mar 25 13:41:52.776: INFO: (4) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:1080/proxy/: test<... (200; 4.241879ms) Mar 25 13:41:52.776: INFO: (4) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 4.393845ms) Mar 25 13:41:52.777: INFO: (4) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname1/proxy/: tls baz (200; 5.942085ms) Mar 25 13:41:52.777: INFO: (4) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 5.954435ms) Mar 25 13:41:52.778: INFO: (4) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname1/proxy/: foo (200; 6.003842ms) Mar 25 13:41:52.778: INFO: (4) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: test<... 
(200; 3.294519ms) Mar 25 13:41:52.781: INFO: (5) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 3.312387ms) Mar 25 13:41:52.782: INFO: (5) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:460/proxy/: tls baz (200; 4.349274ms) Mar 25 13:41:52.782: INFO: (5) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 4.26649ms) Mar 25 13:41:52.782: INFO: (5) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 4.369384ms) Mar 25 13:41:52.782: INFO: (5) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.249585ms) Mar 25 13:41:52.782: INFO: (5) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:1080/proxy/: ... (200; 4.321624ms) Mar 25 13:41:52.782: INFO: (5) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: test (200; 4.598377ms) Mar 25 13:41:52.782: INFO: (5) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname2/proxy/: bar (200; 4.696686ms) Mar 25 13:41:52.786: INFO: (6) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 2.835835ms) Mar 25 13:41:52.786: INFO: (6) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:1080/proxy/: ... (200; 2.941826ms) Mar 25 13:41:52.787: INFO: (6) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 3.906183ms) Mar 25 13:41:52.787: INFO: (6) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname1/proxy/: foo (200; 3.614784ms) Mar 25 13:41:52.787: INFO: (6) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: test<... (200; 4.888217ms) Mar 25 13:41:52.791: INFO: (7) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 3.407432ms) Mar 25 13:41:52.791: INFO: (7) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:460/proxy/: tls baz (200; 3.429368ms) Mar 25 13:41:52.791: INFO: (7) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:1080/proxy/: ... (200; 3.389908ms) Mar 25 13:41:52.792: INFO: (7) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 3.731298ms) Mar 25 13:41:52.792: INFO: (7) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 3.858449ms) Mar 25 13:41:52.792: INFO: (7) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 3.791111ms) Mar 25 13:41:52.792: INFO: (7) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.104577ms) Mar 25 13:41:52.792: INFO: (7) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: test (200; 4.313467ms) Mar 25 13:41:52.792: INFO: (7) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:1080/proxy/: test<... 
(200; 4.230195ms) Mar 25 13:41:52.792: INFO: (7) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname2/proxy/: bar (200; 4.55079ms) Mar 25 13:41:52.793: INFO: (7) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname1/proxy/: foo (200; 4.948959ms) Mar 25 13:41:52.793: INFO: (7) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname2/proxy/: tls qux (200; 4.938125ms) Mar 25 13:41:52.793: INFO: (7) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname1/proxy/: foo (200; 4.975061ms) Mar 25 13:41:52.793: INFO: (7) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname1/proxy/: tls baz (200; 5.096101ms) Mar 25 13:41:52.793: INFO: (7) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 5.081583ms) Mar 25 13:41:52.797: INFO: (8) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 4.387789ms) Mar 25 13:41:52.798: INFO: (8) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:1080/proxy/: test<... (200; 4.94611ms) Mar 25 13:41:52.799: INFO: (8) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 5.477667ms) Mar 25 13:41:52.799: INFO: (8) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: ... (200; 5.70372ms) Mar 25 13:41:52.799: INFO: (8) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 5.788325ms) Mar 25 13:41:52.799: INFO: (8) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 5.869875ms) Mar 25 13:41:52.799: INFO: (8) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname2/proxy/: tls qux (200; 5.830453ms) Mar 25 13:41:52.799: INFO: (8) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname1/proxy/: tls baz (200; 5.976665ms) Mar 25 13:41:52.799: INFO: (8) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 5.88789ms) Mar 25 13:41:52.799: INFO: (8) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname1/proxy/: foo (200; 5.975011ms) Mar 25 13:41:52.802: INFO: (9) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 3.019704ms) Mar 25 13:41:52.802: INFO: (9) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:1080/proxy/: test<... (200; 3.166286ms) Mar 25 13:41:52.803: INFO: (9) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:460/proxy/: tls baz (200; 3.566402ms) Mar 25 13:41:52.803: INFO: (9) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 3.563545ms) Mar 25 13:41:52.803: INFO: (9) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 3.499595ms) Mar 25 13:41:52.803: INFO: (9) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 3.56928ms) Mar 25 13:41:52.803: INFO: (9) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 3.649938ms) Mar 25 13:41:52.803: INFO: (9) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 3.786314ms) Mar 25 13:41:52.803: INFO: (9) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: ... 
(200; 5.244304ms) Mar 25 13:41:52.805: INFO: (9) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 5.759689ms) Mar 25 13:41:52.805: INFO: (9) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname1/proxy/: foo (200; 5.861544ms) Mar 25 13:41:52.805: INFO: (9) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname1/proxy/: foo (200; 5.903193ms) Mar 25 13:41:52.805: INFO: (9) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname2/proxy/: tls qux (200; 5.828885ms) Mar 25 13:41:52.805: INFO: (9) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname2/proxy/: bar (200; 6.016484ms) Mar 25 13:41:52.805: INFO: (9) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname1/proxy/: tls baz (200; 6.213493ms) Mar 25 13:41:52.808: INFO: (10) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 2.856898ms) Mar 25 13:41:52.808: INFO: (10) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: test<... (200; 3.755209ms) Mar 25 13:41:52.809: INFO: (10) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:1080/proxy/: ... (200; 3.767016ms) Mar 25 13:41:52.809: INFO: (10) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 3.772739ms) Mar 25 13:41:52.809: INFO: (10) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 3.887363ms) Mar 25 13:41:52.810: INFO: (10) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.041356ms) Mar 25 13:41:52.810: INFO: (10) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:460/proxy/: tls baz (200; 4.203218ms) Mar 25 13:41:52.819: INFO: (10) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 13.405837ms) Mar 25 13:41:52.819: INFO: (10) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname1/proxy/: foo (200; 13.514461ms) Mar 25 13:41:52.819: INFO: (10) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname1/proxy/: foo (200; 13.87451ms) Mar 25 13:41:52.819: INFO: (10) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname2/proxy/: bar (200; 13.786229ms) Mar 25 13:41:52.819: INFO: (10) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname2/proxy/: tls qux (200; 13.982057ms) Mar 25 13:41:52.819: INFO: (10) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname1/proxy/: tls baz (200; 13.887877ms) Mar 25 13:41:52.823: INFO: (11) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:1080/proxy/: ... 
(200; 3.144321ms) Mar 25 13:41:52.824: INFO: (11) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.073579ms) Mar 25 13:41:52.824: INFO: (11) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: test (200; 4.484874ms) Mar 25 13:41:52.824: INFO: (11) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 4.474758ms) Mar 25 13:41:52.824: INFO: (11) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname1/proxy/: tls baz (200; 4.45733ms) Mar 25 13:41:52.824: INFO: (11) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 4.610242ms) Mar 25 13:41:52.824: INFO: (11) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.542158ms) Mar 25 13:41:52.824: INFO: (11) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:1080/proxy/: test<... (200; 4.506028ms) Mar 25 13:41:52.825: INFO: (11) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname1/proxy/: foo (200; 5.559939ms) Mar 25 13:41:52.826: INFO: (11) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname2/proxy/: bar (200; 5.816296ms) Mar 25 13:41:52.826: INFO: (11) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 5.84044ms) Mar 25 13:41:52.826: INFO: (11) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname2/proxy/: tls qux (200; 5.945378ms) Mar 25 13:41:52.826: INFO: (11) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname1/proxy/: foo (200; 6.061285ms) Mar 25 13:41:52.829: INFO: (12) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:1080/proxy/: test<... (200; 3.431507ms) Mar 25 13:41:52.830: INFO: (12) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 4.464404ms) Mar 25 13:41:52.830: INFO: (12) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.442611ms) Mar 25 13:41:52.830: INFO: (12) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 4.489624ms) Mar 25 13:41:52.830: INFO: (12) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 4.591256ms) Mar 25 13:41:52.830: INFO: (12) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.47684ms) Mar 25 13:41:52.830: INFO: (12) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:460/proxy/: tls baz (200; 4.680453ms) Mar 25 13:41:52.830: INFO: (12) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 4.603707ms) Mar 25 13:41:52.831: INFO: (12) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:1080/proxy/: ... (200; 4.694716ms) Mar 25 13:41:52.831: INFO: (12) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: ... (200; 3.27193ms) Mar 25 13:41:52.835: INFO: (13) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:1080/proxy/: test<... 
(200; 3.256129ms) Mar 25 13:41:52.836: INFO: (13) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 4.334362ms) Mar 25 13:41:52.836: INFO: (13) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname1/proxy/: tls baz (200; 4.541659ms) Mar 25 13:41:52.836: INFO: (13) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.499322ms) Mar 25 13:41:52.837: INFO: (13) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 4.666409ms) Mar 25 13:41:52.837: INFO: (13) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 4.585935ms) Mar 25 13:41:52.837: INFO: (13) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.641323ms) Mar 25 13:41:52.837: INFO: (13) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname1/proxy/: foo (200; 4.597063ms) Mar 25 13:41:52.837: INFO: (13) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 5.007468ms) Mar 25 13:41:52.837: INFO: (13) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 5.202335ms) Mar 25 13:41:52.837: INFO: (13) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname1/proxy/: foo (200; 5.276722ms) Mar 25 13:41:52.837: INFO: (13) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname2/proxy/: bar (200; 5.343005ms) Mar 25 13:41:52.837: INFO: (13) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname2/proxy/: tls qux (200; 5.425411ms) Mar 25 13:41:52.837: INFO: (13) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:460/proxy/: tls baz (200; 5.465582ms) Mar 25 13:41:52.844: INFO: (14) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 6.718631ms) Mar 25 13:41:52.844: INFO: (14) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 6.87318ms) Mar 25 13:41:52.844: INFO: (14) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 7.007328ms) Mar 25 13:41:52.845: INFO: (14) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:1080/proxy/: ... (200; 6.974715ms) Mar 25 13:41:52.845: INFO: (14) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 6.996294ms) Mar 25 13:41:52.845: INFO: (14) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 6.978034ms) Mar 25 13:41:52.845: INFO: (14) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: test<... 
(200; 7.462924ms) Mar 25 13:41:52.845: INFO: (14) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname2/proxy/: tls qux (200; 7.416225ms) Mar 25 13:41:52.845: INFO: (14) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname1/proxy/: foo (200; 7.394473ms) Mar 25 13:41:52.845: INFO: (14) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname1/proxy/: foo (200; 7.468097ms) Mar 25 13:41:52.845: INFO: (14) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 7.48002ms) Mar 25 13:41:52.845: INFO: (14) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 7.672596ms) Mar 25 13:41:52.845: INFO: (14) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname2/proxy/: bar (200; 7.631671ms) Mar 25 13:41:52.845: INFO: (14) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname1/proxy/: tls baz (200; 7.629043ms) Mar 25 13:41:52.845: INFO: (14) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:460/proxy/: tls baz (200; 7.580516ms) Mar 25 13:41:52.848: INFO: (15) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 2.957703ms) Mar 25 13:41:52.849: INFO: (15) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:1080/proxy/: test<... (200; 3.9361ms) Mar 25 13:41:52.850: INFO: (15) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 4.442806ms) Mar 25 13:41:52.850: INFO: (15) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.51676ms) Mar 25 13:41:52.850: INFO: (15) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:460/proxy/: tls baz (200; 4.445757ms) Mar 25 13:41:52.850: INFO: (15) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 4.391405ms) Mar 25 13:41:52.850: INFO: (15) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 4.534247ms) Mar 25 13:41:52.850: INFO: (15) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 4.455001ms) Mar 25 13:41:52.850: INFO: (15) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:1080/proxy/: ... (200; 4.532359ms) Mar 25 13:41:52.850: INFO: (15) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: test (200; 4.694935ms) Mar 25 13:41:52.850: INFO: (15) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname1/proxy/: foo (200; 4.678621ms) Mar 25 13:41:52.850: INFO: (15) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname2/proxy/: bar (200; 4.732344ms) Mar 25 13:41:52.850: INFO: (15) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname2/proxy/: tls qux (200; 4.735673ms) Mar 25 13:41:52.850: INFO: (15) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname1/proxy/: foo (200; 5.011605ms) Mar 25 13:41:52.854: INFO: (16) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 3.223628ms) Mar 25 13:41:52.854: INFO: (16) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:1080/proxy/: test<... (200; 3.655738ms) Mar 25 13:41:52.854: INFO: (16) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:1080/proxy/: ... 
(200; 3.600458ms) Mar 25 13:41:52.854: INFO: (16) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 3.679709ms) Mar 25 13:41:52.854: INFO: (16) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 3.650848ms) Mar 25 13:41:52.854: INFO: (16) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 3.922559ms) Mar 25 13:41:52.854: INFO: (16) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 3.934438ms) Mar 25 13:41:52.854: INFO: (16) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 4.026622ms) Mar 25 13:41:52.855: INFO: (16) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: test (200; 3.108229ms) Mar 25 13:41:52.859: INFO: (17) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 3.133207ms) Mar 25 13:41:52.859: INFO: (17) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: test<... (200; 3.488309ms) Mar 25 13:41:52.860: INFO: (17) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 4.116816ms) Mar 25 13:41:52.860: INFO: (17) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:1080/proxy/: ... (200; 4.0654ms) Mar 25 13:41:52.860: INFO: (17) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname2/proxy/: bar (200; 4.097214ms) Mar 25 13:41:52.860: INFO: (17) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname1/proxy/: tls baz (200; 4.20325ms) Mar 25 13:41:52.860: INFO: (17) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname1/proxy/: foo (200; 4.148425ms) Mar 25 13:41:52.860: INFO: (17) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname1/proxy/: foo (200; 4.133671ms) Mar 25 13:41:52.860: INFO: (17) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname2/proxy/: tls qux (200; 4.136365ms) Mar 25 13:41:52.868: INFO: (18) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:1080/proxy/: test<... (200; 7.853026ms) Mar 25 13:41:52.868: INFO: (18) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 7.828002ms) Mar 25 13:41:52.868: INFO: (18) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: ... (200; 7.849449ms) Mar 25 13:41:52.868: INFO: (18) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:460/proxy/: tls baz (200; 7.881755ms) Mar 25 13:41:52.868: INFO: (18) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 7.820348ms) Mar 25 13:41:52.868: INFO: (18) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 7.89752ms) Mar 25 13:41:52.868: INFO: (18) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 7.839165ms) Mar 25 13:41:52.871: INFO: (19) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 2.641857ms) Mar 25 13:41:52.871: INFO: (19) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:1080/proxy/: test<... 
(200; 3.192874ms) Mar 25 13:41:52.871: INFO: (19) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 3.395592ms) Mar 25 13:41:52.871: INFO: (19) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:462/proxy/: tls qux (200; 3.413455ms) Mar 25 13:41:52.871: INFO: (19) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt:162/proxy/: bar (200; 3.496191ms) Mar 25 13:41:52.871: INFO: (19) /api/v1/namespaces/proxy-4930/pods/http:proxy-service-h9k9b-pfxkt:160/proxy/: foo (200; 3.471656ms) Mar 25 13:41:52.871: INFO: (19) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:443/proxy/: ... (200; 3.505944ms) Mar 25 13:41:52.871: INFO: (19) /api/v1/namespaces/proxy-4930/pods/https:proxy-service-h9k9b-pfxkt:460/proxy/: tls baz (200; 3.468567ms) Mar 25 13:41:52.871: INFO: (19) /api/v1/namespaces/proxy-4930/pods/proxy-service-h9k9b-pfxkt/proxy/: test (200; 3.464839ms) Mar 25 13:41:52.873: INFO: (19) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname2/proxy/: bar (200; 5.714563ms) Mar 25 13:41:52.873: INFO: (19) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname1/proxy/: foo (200; 5.707335ms) Mar 25 13:41:52.873: INFO: (19) /api/v1/namespaces/proxy-4930/services/http:proxy-service-h9k9b:portname1/proxy/: foo (200; 5.662409ms) Mar 25 13:41:52.873: INFO: (19) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname1/proxy/: tls baz (200; 5.694106ms) Mar 25 13:41:52.874: INFO: (19) /api/v1/namespaces/proxy-4930/services/proxy-service-h9k9b:portname2/proxy/: bar (200; 5.779336ms) Mar 25 13:41:52.874: INFO: (19) /api/v1/namespaces/proxy-4930/services/https:proxy-service-h9k9b:tlsportname2/proxy/: tls qux (200; 5.745226ms) STEP: deleting ReplicationController proxy-service-h9k9b in namespace proxy-4930, will wait for the garbage collector to delete the pods Mar 25 13:41:52.931: INFO: Deleting ReplicationController proxy-service-h9k9b took: 5.687885ms Mar 25 13:41:53.231: INFO: Terminating ReplicationController proxy-service-h9k9b pods took: 300.233049ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:42:02.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4930" for this suite. 
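The 320 attempts above all go through the apiserver's proxy subresource, whose path encodes an optional scheme and port into the pod (or service) name. A minimal sketch of how those paths are assembled — the namespace, pod name, and port are copied from the log; the helper name podProxyPath is invented for illustration:

package main

import "fmt"

// podProxyPath builds an apiserver pod-proxy path of the form
// /api/v1/namespaces/<ns>/pods/[scheme:]<pod>[:port]/proxy/
// matching the requests logged above.
func podProxyPath(ns, scheme, pod, port string) string {
	name := pod
	if port != "" {
		name += ":" + port
	}
	if scheme != "" {
		name = scheme + ":" + name
	}
	return fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/proxy/", ns, name)
}

func main() {
	// Reproduces one of the HTTPS attempts from the log.
	fmt.Println(podProxyPath("proxy-4930", "https", "proxy-service-h9k9b-pfxkt", "462"))
}

The same pattern with "services" in place of "pods" yields the service-proxy URLs (e.g. services/https:proxy-service-h9k9b:tlsportname2/proxy/) seen in the run.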
Mar 25 13:42:08.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:42:08.328: INFO: namespace proxy-4930 deletion completed in 6.092194327s • [SLOW TEST:19.804 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:42:08.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 25 13:42:08.383: INFO: Waiting up to 5m0s for pod "downward-api-63abbbbb-0dd2-42c9-b143-2a9d40786e6b" in namespace "downward-api-867" to be "success or failure" Mar 25 13:42:08.386: INFO: Pod "downward-api-63abbbbb-0dd2-42c9-b143-2a9d40786e6b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.015026ms Mar 25 13:42:10.390: INFO: Pod "downward-api-63abbbbb-0dd2-42c9-b143-2a9d40786e6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006766321s Mar 25 13:42:12.394: INFO: Pod "downward-api-63abbbbb-0dd2-42c9-b143-2a9d40786e6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010604335s STEP: Saw pod success Mar 25 13:42:12.394: INFO: Pod "downward-api-63abbbbb-0dd2-42c9-b143-2a9d40786e6b" satisfied condition "success or failure" Mar 25 13:42:12.397: INFO: Trying to get logs from node iruya-worker pod downward-api-63abbbbb-0dd2-42c9-b143-2a9d40786e6b container dapi-container: STEP: delete the pod Mar 25 13:42:12.428: INFO: Waiting for pod downward-api-63abbbbb-0dd2-42c9-b143-2a9d40786e6b to disappear Mar 25 13:42:12.446: INFO: Pod downward-api-63abbbbb-0dd2-42c9-b143-2a9d40786e6b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:42:12.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-867" for this suite. 
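The pod under test wires its own name, namespace, and IP into environment variables via the downward API's fieldRef. A minimal sketch of such a pod spec, assuming a recent k8s.io/api; only the container name dapi-container comes from the log, the variable names are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					// Each value is resolved from the pod object itself.
					{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
					{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
					{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
				},
			}},
		},
	}
	fmt.Println("pod spec built:", pod.Name)
}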
Mar 25 13:42:18.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:42:18.546: INFO: namespace downward-api-867 deletion completed in 6.096689506s • [SLOW TEST:10.217 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:42:18.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 13:42:18.611: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc72d71f-6f91-4883-84cf-08aa442cde11" in namespace "downward-api-9530" to be "success or failure" Mar 25 13:42:18.614: INFO: Pod "downwardapi-volume-dc72d71f-6f91-4883-84cf-08aa442cde11": Phase="Pending", Reason="", readiness=false. Elapsed: 3.590788ms Mar 25 13:42:20.621: INFO: Pod "downwardapi-volume-dc72d71f-6f91-4883-84cf-08aa442cde11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009899723s Mar 25 13:42:22.625: INFO: Pod "downwardapi-volume-dc72d71f-6f91-4883-84cf-08aa442cde11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014252229s STEP: Saw pod success Mar 25 13:42:22.625: INFO: Pod "downwardapi-volume-dc72d71f-6f91-4883-84cf-08aa442cde11" satisfied condition "success or failure" Mar 25 13:42:22.628: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-dc72d71f-6f91-4883-84cf-08aa442cde11 container client-container: STEP: delete the pod Mar 25 13:42:22.646: INFO: Waiting for pod downwardapi-volume-dc72d71f-6f91-4883-84cf-08aa442cde11 to disappear Mar 25 13:42:22.650: INFO: Pod downwardapi-volume-dc72d71f-6f91-4883-84cf-08aa442cde11 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:42:22.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9530" for this suite. 
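Here the downward API is mounted as a volume instead, and the test asserts that a per-item file mode sticks. A sketch of the relevant volume definition — the mode 0400 and the path "podname" are illustrative assumptions; only the container name client-container appears in the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func modePtr(m int32) *int32 { return &m }

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					// The property under test: the projected file
					// should appear in the volume with mode 0400.
					Mode: modePtr(0400),
				}},
			},
		},
	}
	fmt.Println("volume:", vol.Name)
}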
Mar 25 13:42:28.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:42:28.752: INFO: namespace downward-api-9530 deletion completed in 6.098323224s • [SLOW TEST:10.207 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:42:28.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 25 13:42:28.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3237' Mar 25 13:42:28.962: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 25 13:42:28.962: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Mar 25 13:42:28.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-3237' Mar 25 13:42:29.097: INFO: stderr: "" Mar 25 13:42:29.097: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:42:29.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3237" for this suite. 
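The deprecated --generator=job/v1 path boils down to creating a Job whose pod template restarts on failure (from the --restart=OnFailure flag above). A rough Go equivalent of the object that gets created, under the assumption that the generator emits a single-container template with the given image:

package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// The property this test verifies: OnFailure, not Never.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-job",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Println("job:", job.Name)
}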
Mar 25 13:42:35.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:42:35.197: INFO: namespace kubectl-3237 deletion completed in 6.096292136s • [SLOW TEST:6.444 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:42:35.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0325 13:43:15.599918 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 25 13:43:15.599: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:43:15.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8963" for this suite. 
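"Delete options say so" here means deleting the RC with an orphaning propagation policy, so the garbage collector deliberately leaves the pods alone during the 30-second window checked above. A sketch against a recent client-go — the RC name and the hard-coded kubeconfig path are assumptions for illustration:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Orphan: delete the controller but keep the pods it created.
	orphan := metav1.DeletePropagationOrphan
	err = client.CoreV1().ReplicationControllers("gc-8963").Delete(
		context.TODO(), "simpletest.rc", metav1.DeleteOptions{PropagationPolicy: &orphan})
	fmt.Println("delete err:", err)
}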
Mar 25 13:43:23.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:43:23.858: INFO: namespace gc-8963 deletion completed in 8.254880846s • [SLOW TEST:48.661 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:43:23.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 13:43:23.946: INFO: Creating ReplicaSet my-hostname-basic-3e7b21de-75e7-4f80-a6fd-1ecac3265af2 Mar 25 13:43:24.039: INFO: Pod name my-hostname-basic-3e7b21de-75e7-4f80-a6fd-1ecac3265af2: Found 0 pods out of 1 Mar 25 13:43:29.043: INFO: Pod name my-hostname-basic-3e7b21de-75e7-4f80-a6fd-1ecac3265af2: Found 1 pods out of 1 Mar 25 13:43:29.043: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-3e7b21de-75e7-4f80-a6fd-1ecac3265af2" is running Mar 25 13:43:29.046: INFO: Pod "my-hostname-basic-3e7b21de-75e7-4f80-a6fd-1ecac3265af2-hrdz5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-25 13:43:24 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-25 13:43:27 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-25 13:43:27 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-25 13:43:24 +0000 UTC Reason: Message:}]) Mar 25 13:43:29.046: INFO: Trying to dial the pod Mar 25 13:43:34.059: INFO: Controller my-hostname-basic-3e7b21de-75e7-4f80-a6fd-1ecac3265af2: Got expected result from replica 1 [my-hostname-basic-3e7b21de-75e7-4f80-a6fd-1ecac3265af2-hrdz5]: "my-hostname-basic-3e7b21de-75e7-4f80-a6fd-1ecac3265af2-hrdz5", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:43:34.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1629" for this suite. 
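The ReplicaSet under test runs one replica of a hostname-serving image and dials each pod until it echoes its own name back, which is the "Got expected result from replica 1" line above. A sketch of such a ReplicaSet — the serve-hostname image and its port are assumptions, as the log only shows the generated names:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "my-hostname-basic"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic", Labels: labels},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: "my-hostname-basic",
						// Assumed image: replies to HTTP with the pod's hostname.
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	fmt.Println("replicaset:", rs.Name)
}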
Mar 25 13:43:40.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:43:40.160: INFO: namespace replicaset-1629 deletion completed in 6.093494284s • [SLOW TEST:16.301 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:43:40.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-136/configmap-test-5a9f0ba2-951a-42d0-8a81-3a762875b56d STEP: Creating a pod to test consume configMaps Mar 25 13:43:40.285: INFO: Waiting up to 5m0s for pod "pod-configmaps-02c2b6f7-77ef-4357-aeda-1dd9706eb13d" in namespace "configmap-136" to be "success or failure" Mar 25 13:43:40.288: INFO: Pod "pod-configmaps-02c2b6f7-77ef-4357-aeda-1dd9706eb13d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.113612ms Mar 25 13:43:42.294: INFO: Pod "pod-configmaps-02c2b6f7-77ef-4357-aeda-1dd9706eb13d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008345101s Mar 25 13:43:44.298: INFO: Pod "pod-configmaps-02c2b6f7-77ef-4357-aeda-1dd9706eb13d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012742633s STEP: Saw pod success Mar 25 13:43:44.298: INFO: Pod "pod-configmaps-02c2b6f7-77ef-4357-aeda-1dd9706eb13d" satisfied condition "success or failure" Mar 25 13:43:44.301: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-02c2b6f7-77ef-4357-aeda-1dd9706eb13d container env-test: STEP: delete the pod Mar 25 13:43:44.498: INFO: Waiting for pod pod-configmaps-02c2b6f7-77ef-4357-aeda-1dd9706eb13d to disappear Mar 25 13:43:44.561: INFO: Pod pod-configmaps-02c2b6f7-77ef-4357-aeda-1dd9706eb13d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:43:44.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-136" for this suite. 
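"Consumable via environment variable" maps a single ConfigMap key into the container's environment; the env-test container then just prints its environment for the log check. A sketch of the wiring — the ConfigMap name is taken from the log, the key and variable names are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := corev1.EnvVar{
		Name: "CONFIG_DATA_1",
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-5a9f0ba2-951a-42d0-8a81-3a762875b56d",
				},
				Key: "data-1", // illustrative key
			},
		},
	}
	fmt.Println("env var:", env.Name)
}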
Mar 25 13:43:50.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:43:50.644: INFO: namespace configmap-136 deletion completed in 6.079238279s • [SLOW TEST:10.484 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:43:50.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 25 13:43:50.677: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:43:56.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5761" for this suite. 
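With RestartPolicy Never, a failing init container is terminal: the pod goes Failed and the app container must never start, which is exactly what this test watches for. A minimal pod shape for that scenario (names and commands are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			// Never => a failed init container fails the whole pod.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // always fails
			}},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"/bin/true"}, // must never run
			}},
		},
	}
	fmt.Println("pod:", pod.Name)
}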
Mar 25 13:44:02.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:44:02.509: INFO: namespace init-container-5761 deletion completed in 6.116011412s • [SLOW TEST:11.865 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:44:02.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-509fb019-8ba2-47f3-a517-c3db7c50fe05 STEP: Creating a pod to test consume configMaps Mar 25 13:44:02.582: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d4ead9d0-d186-4bb3-8aff-629240b90842" in namespace "projected-9619" to be "success or failure" Mar 25 13:44:02.598: INFO: Pod "pod-projected-configmaps-d4ead9d0-d186-4bb3-8aff-629240b90842": Phase="Pending", Reason="", readiness=false. Elapsed: 16.056214ms Mar 25 13:44:04.603: INFO: Pod "pod-projected-configmaps-d4ead9d0-d186-4bb3-8aff-629240b90842": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020708756s Mar 25 13:44:06.607: INFO: Pod "pod-projected-configmaps-d4ead9d0-d186-4bb3-8aff-629240b90842": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025026795s STEP: Saw pod success Mar 25 13:44:06.607: INFO: Pod "pod-projected-configmaps-d4ead9d0-d186-4bb3-8aff-629240b90842" satisfied condition "success or failure" Mar 25 13:44:06.610: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-d4ead9d0-d186-4bb3-8aff-629240b90842 container projected-configmap-volume-test: STEP: delete the pod Mar 25 13:44:06.643: INFO: Waiting for pod pod-projected-configmaps-d4ead9d0-d186-4bb3-8aff-629240b90842 to disappear Mar 25 13:44:06.653: INFO: Pod pod-projected-configmaps-d4ead9d0-d186-4bb3-8aff-629240b90842 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:44:06.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9619" for this suite. 
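The projected variant wraps the ConfigMap in a projected volume, where defaultMode applies to every projected file. A sketch of the volume — the ConfigMap name is from the log; 0400 and the volume name are assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func modePtr(m int32) *int32 { return &m }

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: modePtr(0400), // applied to all projected files
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume-509fb019-8ba2-47f3-a517-c3db7c50fe05",
						},
					},
				}},
			},
		},
	}
	fmt.Println("volume:", vol.Name)
}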
Mar 25 13:44:12.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:44:12.780: INFO: namespace projected-9619 deletion completed in 6.124286166s • [SLOW TEST:10.271 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:44:12.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-9934f760-62b7-429d-9f9e-3695e3ead631 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:44:16.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5592" for this suite. 
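A ConfigMap carries UTF-8 text in data and arbitrary bytes in binaryData; the test mounts both and waits for each to round-trip into the volume, per the two "Waiting for pod with ... data" steps above. A sketch of such an object (keys and byte values are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name: "configmap-test-upd-9934f760-62b7-429d-9f9e-3695e3ead631",
		},
		Data: map[string]string{
			"data": "value", // plain-text key
		},
		BinaryData: map[string][]byte{
			"dump.bin": {0xde, 0xca, 0xfe}, // non-UTF-8 payload
		},
	}
	fmt.Println("configmap:", cm.Name)
}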
Mar 25 13:44:38.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:44:39.008: INFO: namespace configmap-5592 deletion completed in 22.114963365s • [SLOW TEST:26.227 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:44:39.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-300887c3-2674-4e80-90ce-69e1582f3fb3 STEP: Creating configMap with name cm-test-opt-upd-ebe5a040-1b18-493b-945f-98210331d1b6 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-300887c3-2674-4e80-90ce-69e1582f3fb3 STEP: Updating configmap cm-test-opt-upd-ebe5a040-1b18-493b-945f-98210331d1b6 STEP: Creating configMap with name cm-test-opt-create-6ee31210-6683-40f2-8781-1a3c6da84210 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:46:13.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1450" for this suite. 
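The three ConfigMaps above (-del, -upd, -create) are projected with optional set, so the volume tolerates a referenced ConfigMap being deleted or not existing yet, and the kubelet folds updates in as they land — hence the long "waiting to observe update in volume" step. The shape of one optional projection source, as a sketch:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	src := corev1.VolumeProjection{
		ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{
				Name: "cm-test-opt-del-300887c3-2674-4e80-90ce-69e1582f3fb3",
			},
			// Optional: a missing ConfigMap is not a mount error.
			Optional: &optional,
		},
	}
	fmt.Println("projects configmap:", src.ConfigMap.Name)
}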
Mar 25 13:46:37.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:46:37.725: INFO: namespace projected-1450 deletion completed in 24.121087799s • [SLOW TEST:118.717 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:46:37.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 13:46:37.815: INFO: Waiting up to 5m0s for pod "downwardapi-volume-523c52c4-fa64-4ccc-b18b-8c85e04119c1" in namespace "projected-9892" to be "success or failure" Mar 25 13:46:37.817: INFO: Pod "downwardapi-volume-523c52c4-fa64-4ccc-b18b-8c85e04119c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252399ms Mar 25 13:46:39.822: INFO: Pod "downwardapi-volume-523c52c4-fa64-4ccc-b18b-8c85e04119c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006861439s Mar 25 13:46:41.825: INFO: Pod "downwardapi-volume-523c52c4-fa64-4ccc-b18b-8c85e04119c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010383534s STEP: Saw pod success Mar 25 13:46:41.825: INFO: Pod "downwardapi-volume-523c52c4-fa64-4ccc-b18b-8c85e04119c1" satisfied condition "success or failure" Mar 25 13:46:41.828: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-523c52c4-fa64-4ccc-b18b-8c85e04119c1 container client-container: STEP: delete the pod Mar 25 13:46:41.847: INFO: Waiting for pod downwardapi-volume-523c52c4-fa64-4ccc-b18b-8c85e04119c1 to disappear Mar 25 13:46:41.857: INFO: Pod downwardapi-volume-523c52c4-fa64-4ccc-b18b-8c85e04119c1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:46:41.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9892" for this suite. 
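The file under test projects limits.memory via a resourceFieldRef; because the container sets no memory limit, the value falls back to the node's allocatable memory, which is what the test expects to read back. A sketch of the projected item — the path is illustrative, the container name client-container comes from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	item := corev1.DownwardAPIVolumeFile{
		Path: "memory_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			// With no memory limit set on the container, this resolves
			// to the node's allocatable memory instead.
			Resource: "limits.memory",
		},
	}
	fmt.Println("projected file:", item.Path)
}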
Mar 25 13:46:47.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:46:47.956: INFO: namespace projected-9892 deletion completed in 6.095919639s
• [SLOW TEST:10.230 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:46:47.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 25 13:46:48.007: INFO: Creating deployment "nginx-deployment"
Mar 25 13:46:48.038: INFO: Waiting for observed generation 1
Mar 25 13:46:50.120: INFO: Waiting for all required pods to come up
Mar 25 13:46:50.124: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 25 13:46:58.133: INFO: Waiting for deployment "nginx-deployment" to complete
Mar 25 13:46:58.139: INFO: Updating deployment "nginx-deployment" with a non-existent image
Mar 25 13:46:58.147: INFO: Updating deployment nginx-deployment
Mar 25 13:46:58.147: INFO: Waiting for observed generation 2
Mar 25 13:47:00.222: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 25 13:47:00.224: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 25 13:47:00.226: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Mar 25 13:47:00.232: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 25 13:47:00.232: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 25 13:47:00.234: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Mar 25 13:47:00.238: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Mar 25 13:47:00.238: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Mar 25 13:47:00.244: INFO: Updating deployment nginx-deployment
Mar 25 13:47:00.244: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Mar 25 13:47:00.646: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 25 13:47:00.865: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Mar 25 13:47:03.271: INFO: Deployment
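The two final .spec.replicas checks (20 and 13) are the proportional-scaling arithmetic at work. At scale time the first (healthy) ReplicaSet holds 8 replicas and the second (nginx:404) ReplicaSet holds 5, 13 in total: with maxUnavailable=2 the first set had been scaled down to 10-2=8, and the new set got 10+3-8=5. Scaling the Deployment from 10 to 30 with maxSurge=3 allows 33 replicas, and each ReplicaSet keeps its proportional share of that allowance: 8*33/13 rounds to 20 and 5*33/13 rounds to 13. A toy sketch of that math (nearest-integer rounding only; the real controller in pkg/controller/deployment also reconciles rounding leftovers, which this function ignores):

    package main

    import (
        "fmt"
        "math"
    )

    // proportionalShares distributes (target + maxSurge) replicas across the
    // existing ReplicaSets in proportion to their current sizes, rounding to
    // the nearest integer. It mirrors the numbers in this log only; the
    // production controller has additional leftover-reconciliation logic.
    func proportionalShares(current []int, target, maxSurge int) []int {
        total := 0
        for _, n := range current {
            total += n
        }
        allowed := target + maxSurge
        shares := make([]int, len(current))
        for i, n := range current {
            shares[i] = int(math.Round(float64(n) * float64(allowed) / float64(total)))
        }
        return shares
    }

    func main() {
        // Old ReplicaSet at 8, new (broken-image) ReplicaSet at 5, scaling
        // 10 -> 30 with maxSurge=3, as in the test: prints [20 13], sum 33.
        fmt.Println(proportionalShares([]int{8, 5}, 30, 3))
    }

The Deployment dump that follows shows the same end state: Replicas:33 in the Deployment status, Replicas:*20 on the old ReplicaSet, and Replicas:*13 on the new one.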
"nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4873,SelfLink:/apis/apps/v1/namespaces/deployment-4873/deployments/nginx-deployment,UID:f0e48cfd-4e30-43bb-83ad-163ad5bb9fef,ResourceVersion:1782717,Generation:3,CreationTimestamp:2020-03-25 13:46:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-03-25 13:47:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-25 13:47:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 25 13:47:03.274: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4873,SelfLink:/apis/apps/v1/namespaces/deployment-4873/replicasets/nginx-deployment-55fb7cb77f,UID:07fe0728-b5f3-44f7-b7bf-929f50138db2,ResourceVersion:1782710,Generation:3,CreationTimestamp:2020-03-25 13:46:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f0e48cfd-4e30-43bb-83ad-163ad5bb9fef 0xc002d4d177 0xc002d4d178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 25 13:47:03.274: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 25 13:47:03.274: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4873,SelfLink:/apis/apps/v1/namespaces/deployment-4873/replicasets/nginx-deployment-7b8c6f4498,UID:2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3,ResourceVersion:1782706,Generation:3,CreationTimestamp:2020-03-25 13:46:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f0e48cfd-4e30-43bb-83ad-163ad5bb9fef 0xc002d4d247 0xc002d4d248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 25 13:47:03.280: INFO: Pod "nginx-deployment-55fb7cb77f-5crkd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5crkd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-55fb7cb77f-5crkd,UID:86166aed-a7d2-457b-96b2-0cd13c669d8d,ResourceVersion:1782762,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 07fe0728-b5f3-44f7-b7bf-929f50138db2 0xc002d4dbe7 0xc002d4dbe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d4dc60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d4dc80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-25 13:47:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.280: INFO: Pod "nginx-deployment-55fb7cb77f-7xmhm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7xmhm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-55fb7cb77f-7xmhm,UID:b2c484ee-6450-4f59-b63d-e706a22fa143,ResourceVersion:1782640,Generation:0,CreationTimestamp:2020-03-25 13:46:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 07fe0728-b5f3-44f7-b7bf-929f50138db2 0xc002d4dd50 0xc002d4dd51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d4ddf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d4de10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:46:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.280: INFO: Pod "nginx-deployment-55fb7cb77f-8hss2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8hss2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-55fb7cb77f-8hss2,UID:12c9ddd5-d25c-49a0-a376-d333761ed7c6,ResourceVersion:1782719,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 07fe0728-b5f3-44f7-b7bf-929f50138db2 0xc002d4dee0 0xc002d4dee1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d4df60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d4df80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:47:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.281: INFO: Pod "nginx-deployment-55fb7cb77f-gnzhb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gnzhb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-55fb7cb77f-gnzhb,UID:703bfb1a-d855-42cd-8b04-a52600c88077,ResourceVersion:1782773,Generation:0,CreationTimestamp:2020-03-25 13:46:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 07fe0728-b5f3-44f7-b7bf-929f50138db2 0xc0027be050 0xc0027be051}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027be0d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027be0f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.6,StartTime:2020-03-25 13:46:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.281: INFO: Pod "nginx-deployment-55fb7cb77f-hb6p6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hb6p6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-55fb7cb77f-hb6p6,UID:e1cedf38-7600-4e81-afae-2949243e8e3f,ResourceVersion:1782768,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 07fe0728-b5f3-44f7-b7bf-929f50138db2 0xc0027be1e0 0xc0027be1e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027be260} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027be280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-25 13:47:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.281: INFO: Pod "nginx-deployment-55fb7cb77f-m8cwt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m8cwt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-55fb7cb77f-m8cwt,UID:fcd4ead7-000d-433b-890d-6f5c708051de,ResourceVersion:1782776,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 07fe0728-b5f3-44f7-b7bf-929f50138db2 0xc0027be350 0xc0027be351}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027be3d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027be3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:47:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.281: INFO: Pod "nginx-deployment-55fb7cb77f-mpvct" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mpvct,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-55fb7cb77f-mpvct,UID:ff2014b6-d92a-43a3-92f9-cb70e1ed6b39,ResourceVersion:1782619,Generation:0,CreationTimestamp:2020-03-25 13:46:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 07fe0728-b5f3-44f7-b7bf-929f50138db2 0xc0027be4d0 0xc0027be4d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027be550} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027be570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:46:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.282: INFO: Pod "nginx-deployment-55fb7cb77f-mvdpw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mvdpw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-55fb7cb77f-mvdpw,UID:7260d2df-290d-4e96-ade5-5299ec4975c0,ResourceVersion:1782707,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 07fe0728-b5f3-44f7-b7bf-929f50138db2 0xc0027be640 0xc0027be641}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027be6c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027be6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.282: INFO: Pod "nginx-deployment-55fb7cb77f-nrst7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nrst7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-55fb7cb77f-nrst7,UID:2f71a141-02c1-481e-888b-06633ce41e45,ResourceVersion:1782641,Generation:0,CreationTimestamp:2020-03-25 13:46:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 07fe0728-b5f3-44f7-b7bf-929f50138db2 0xc0027be767 0xc0027be768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027be7e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027be800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-25 13:46:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.282: INFO: Pod "nginx-deployment-55fb7cb77f-pmj97" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pmj97,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-55fb7cb77f-pmj97,UID:d1d1b143-81b5-47f8-8d2f-93029a54a9d6,ResourceVersion:1782725,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 07fe0728-b5f3-44f7-b7bf-929f50138db2 0xc0027be8d0 0xc0027be8d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027be950} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027be970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-25 13:47:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.282: INFO: Pod "nginx-deployment-55fb7cb77f-vlgnd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vlgnd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-55fb7cb77f-vlgnd,UID:9c3aea29-59c5-4d40-a1ed-9da78e0c363c,ResourceVersion:1782638,Generation:0,CreationTimestamp:2020-03-25 13:46:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 07fe0728-b5f3-44f7-b7bf-929f50138db2 0xc0027bea40 0xc0027bea41}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027beac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027beae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-25 13:46:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.282: INFO: Pod "nginx-deployment-55fb7cb77f-z2p5k" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z2p5k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-55fb7cb77f-z2p5k,UID:c76736bd-0012-4708-86ef-b320fe71fc8f,ResourceVersion:1782771,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 07fe0728-b5f3-44f7-b7bf-929f50138db2 0xc0027bebb0 0xc0027bebb1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bec30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bec50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:47:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.282: INFO: Pod "nginx-deployment-55fb7cb77f-zb89l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zb89l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-55fb7cb77f-zb89l,UID:e1b508d2-e508-4048-876f-0b5f82b2de48,ResourceVersion:1782737,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 07fe0728-b5f3-44f7-b7bf-929f50138db2 0xc0027bed20 0xc0027bed21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027beda0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bedc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:47:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.282: INFO: Pod "nginx-deployment-7b8c6f4498-2w7sk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2w7sk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-2w7sk,UID:51d646a3-48e3-444c-9a6e-66a117337ccb,ResourceVersion:1782565,Generation:0,CreationTimestamp:2020-03-25 13:46:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0027beea0 0xc0027beea1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bef10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bef30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.4,StartTime:2020-03-25 13:46:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-25 13:46:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0bfbbfa64be2a6e4da66160e2b0ee61fb3420f5c2af1556fd233e6966c44cbff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.282: INFO: Pod "nginx-deployment-7b8c6f4498-4hnrl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4hnrl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-4hnrl,UID:8679bfbc-8b2e-46ca-aa7d-b609e5a40ab2,ResourceVersion:1782716,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0027bf017 0xc0027bf018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bf090} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bf0b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-25 13:47:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.283: INFO: Pod "nginx-deployment-7b8c6f4498-5wqmc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5wqmc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-5wqmc,UID:3a6ac24d-362a-41c7-9278-80d91d09bd7b,ResourceVersion:1782767,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0027bf177 0xc0027bf178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bf1f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bf210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:47:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.283: INFO: Pod "nginx-deployment-7b8c6f4498-69qgn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-69qgn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-69qgn,UID:5f0abbdd-a7df-4607-8eb9-e681a5e48cfa,ResourceVersion:1782705,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0027bf2d7 0xc0027bf2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bf350} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bf370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:47:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.283: INFO: Pod "nginx-deployment-7b8c6f4498-76g4t" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-76g4t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-76g4t,UID:e9c15c6b-f938-42f8-96b7-01cc71f08260,ResourceVersion:1782552,Generation:0,CreationTimestamp:2020-03-25 13:46:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0027bf437 0xc0027bf438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bf4b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bf4d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.252,StartTime:2020-03-25 13:46:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-25 13:46:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d494944d1cea36a887674454ddfb3e4e8877ff548dfaf1c822e9dbd55f53c32b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.283: INFO: Pod "nginx-deployment-7b8c6f4498-8w8mw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8w8mw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-8w8mw,UID:cfd54f2d-be5c-4af5-ae60-25c3a6af49a4,ResourceVersion:1782759,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0027bf5a7 0xc0027bf5a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bf620} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bf640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:47:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.283: INFO: Pod "nginx-deployment-7b8c6f4498-9t2zl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9t2zl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-9t2zl,UID:2f00add2-5a82-426c-ab8b-f06f7c3d84d0,ResourceVersion:1782560,Generation:0,CreationTimestamp:2020-03-25 13:46:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0027bf707 0xc0027bf708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bf780} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bf7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.254,StartTime:2020-03-25 13:46:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-25 13:46:55 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5ed8f764742f24ca60fa174846ad598a71b1ce998f74fe800b0810390ade32ee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.283: INFO: Pod "nginx-deployment-7b8c6f4498-gcktp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gcktp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-gcktp,UID:7b1d8698-9786-4fd5-b49e-a8f4ef296b3a,ResourceVersion:1782782,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0027bf877 0xc0027bf878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bf8f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bf910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:47:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.283: INFO: Pod "nginx-deployment-7b8c6f4498-h6lz6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h6lz6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-h6lz6,UID:b4b297ea-fc9d-4331-8702-a73dae37a30a,ResourceVersion:1782585,Generation:0,CreationTimestamp:2020-03-25 13:46:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0027bf9d7 0xc0027bf9d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bfa50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bfa70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.169,StartTime:2020-03-25 13:46:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-25 13:46:56 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ca6423530c7d82cb32524afb569bec87af129878ccb7f05baff3d9f9b1ccf3cf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.284: INFO: Pod "nginx-deployment-7b8c6f4498-h6nsd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h6nsd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-h6nsd,UID:5047bc96-e4b8-43f3-90a3-521753c7591a,ResourceVersion:1782527,Generation:0,CreationTimestamp:2020-03-25 13:46:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0027bfb47 0xc0027bfb48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bfbc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bfbe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.165,StartTime:2020-03-25 13:46:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-25 13:46:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ee3416a956b24f91430b37093afe0a7559cc94f2f5976f17d92d83fd09dd12c5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.284: INFO: Pod "nginx-deployment-7b8c6f4498-mdwws" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mdwws,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-mdwws,UID:9007878f-5cd0-49ac-b8a3-4011fb7520eb,ResourceVersion:1782736,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0027bfcb7 0xc0027bfcb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bfd30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bfd50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-25 13:47:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.284: INFO: Pod "nginx-deployment-7b8c6f4498-nr2st" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nr2st,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-nr2st,UID:9b83eaba-333a-4aa9-b890-31bf4b1468ec,ResourceVersion:1782727,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0027bfe17 0xc0027bfe18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bfe90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bfeb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:47:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.284: INFO: Pod "nginx-deployment-7b8c6f4498-nzz7k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nzz7k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-nzz7k,UID:5e671ed1-e313-4645-a506-8a89919e28f9,ResourceVersion:1782733,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0027bff77 0xc0027bff78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bfff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031b0010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:47:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.284: INFO: Pod "nginx-deployment-7b8c6f4498-p4cpb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p4cpb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-p4cpb,UID:83e5fa9e-5d97-4d51-88e6-a2b9ea4a0691,ResourceVersion:1782749,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0031b00d7 0xc0031b00d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031b0150} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031b0180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-25 13:47:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.284: INFO: Pod "nginx-deployment-7b8c6f4498-p82m2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p82m2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-p82m2,UID:1a1c7194-0266-4ca9-91f7-6c83daac6648,ResourceVersion:1782730,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0031b0247 0xc0031b0248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031b02c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031b02f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-25 13:47:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.285: INFO: Pod "nginx-deployment-7b8c6f4498-rrrbf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rrrbf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-rrrbf,UID:022ebdbd-8bfd-4a67-b9f4-3cfe32641570,ResourceVersion:1782568,Generation:0,CreationTimestamp:2020-03-25 13:46:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0031b03b7 0xc0031b03b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031b0430} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031b0450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.253,StartTime:2020-03-25 13:46:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-25 13:46:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b11e836ad050f69f81377123b4c10bf771b440d8a81f499cf270836079ec36b9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.285: INFO: Pod "nginx-deployment-7b8c6f4498-vsmrn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vsmrn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-vsmrn,UID:6b2eb820-924d-4d07-8e5a-54caba7a78c2,ResourceVersion:1782579,Generation:0,CreationTimestamp:2020-03-25 13:46:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0031b0527 0xc0031b0528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031b05a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031b05d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.5,StartTime:2020-03-25 13:46:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-25 13:46:56 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://26eccd351001656d61526e8da295edaf065b28c19ed4c5285b69a85b6c161507}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.285: INFO: Pod "nginx-deployment-7b8c6f4498-vtcjt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vtcjt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-vtcjt,UID:2fb4b68c-9452-4d9d-a2de-cb46e01e3870,ResourceVersion:1782743,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0031b06a7 0xc0031b06a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031b0720} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031b0740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-25 13:47:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.285: INFO: Pod "nginx-deployment-7b8c6f4498-vzmdd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vzmdd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-vzmdd,UID:f4d2bb5b-43f1-45c3-ac3c-e15a55b3e33d,ResourceVersion:1782742,Generation:0,CreationTimestamp:2020-03-25 13:47:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0031b0807 0xc0031b0808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031b0880} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031b08a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:47:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-25 13:47:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 13:47:03.285: INFO: Pod "nginx-deployment-7b8c6f4498-zxfsz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zxfsz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4873,SelfLink:/api/v1/namespaces/deployment-4873/pods/nginx-deployment-7b8c6f4498-zxfsz,UID:2d8fd04e-af19-48cb-9f55-cdaa73d38995,ResourceVersion:1782541,Generation:0,CreationTimestamp:2020-03-25 13:46:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2d34c0b5-c24f-45e9-86aa-5ae4d5f8c5b3 0xc0031b0967 0xc0031b0968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9ht6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9ht6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t9ht6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031b09e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031b0a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:46:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.166,StartTime:2020-03-25 13:46:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-25 13:46:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://eaa97fb2c78d1d4ef88493b4cd1e0848c9b6a0f7d4013b8ab4d2de8da9e8979e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:47:03.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4873" for this suite. 
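The pod dump above is the controller's view while "deployment should support proportional scaling" runs: the pods created at 13:46:48 are Running and reported "available", while the pods created at 13:47:00 by the mid-rollout scale-up are still Pending/ContainerCreating and "not available". As an editor's sketch (not the deployment controller's actual code), the idea being verified is that a scale delta is split across a Deployment's ReplicaSets in proportion to their current sizes; the rounding here is simplified, and the real controller also uses ReplicaSet annotations and distributes leftovers deterministically:

```go
// Sketch of proportional scaling: when a Deployment is scaled while a
// rollout is in progress, the replica delta is divided among its
// ReplicaSets in proportion to their sizes, so neither the old nor the
// new ReplicaSet is starved.
package main

import "fmt"

// proportion returns the share of delta that a ReplicaSet with
// rsReplicas replicas receives, out of totalReplicas across all
// ReplicaSets owned by the Deployment. Integer division truncates;
// the real controller hands out the remainder deterministically.
func proportion(rsReplicas, totalReplicas, delta int32) int32 {
	if totalReplicas == 0 {
		return 0
	}
	return delta * rsReplicas / totalReplicas
}

func main() {
	// Hypothetical numbers (not taken from this log): an old RS with 8
	// pods and a new RS with 5 pods, while the Deployment is scaled by +7.
	oldRS, newRS, delta := int32(8), int32(5), int32(7)
	fmt.Println("old RS gets:", proportion(oldRS, oldRS+newRS, delta)) // 4
	fmt.Println("new RS gets:", proportion(newRS, oldRS+newRS, delta)) // 2
	// The remaining +1 is assigned by the controller's leftover logic.
}
```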
Mar 25 13:47:25.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:47:25.421: INFO: namespace deployment-4873 deletion completed in 22.133038372s
• [SLOW TEST:37.464 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:47:25.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0325 13:47:35.543971 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 25 13:47:35.544: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:47:35.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-853" for this suite.
Mar 25 13:47:41.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:47:41.669: INFO: namespace gc-853 deletion completed in 6.121677766s
• [SLOW TEST:16.248 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
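The garbage collector test above deletes a ReplicationController "when not orphaning" and waits for its pods to disappear. A minimal sketch of the API-level knob this exercises, assuming only standard apimachinery types (the exact client-go Delete signature varies by version, so the call itself is only indicated in a comment):

```go
// Cascading deletion: deleting an owner with PropagationPolicy set to
// Background (or Foreground) lets the garbage collector remove dependents
// that carry a matching ownerReference; Orphan would leave the pods behind.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	policy := metav1.DeletePropagationBackground // or Foreground / Orphan
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	// With client-go these options would be passed to something like
	//   client.CoreV1().ReplicationControllers(ns).Delete(..., opts)
	// (exact signature depends on the client-go version in use).
	fmt.Println("propagationPolicy:", *opts.PropagationPolicy)
}
```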
Mar 25 13:47:41.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:47:41.669: INFO: namespace gc-853 deletion completed in 6.121677766s • [SLOW TEST:16.248 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:47:41.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-928eb82f-3a59-45a0-abb5-00d1cf570cd5 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:47:41.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1958" for this suite. Mar 25 13:47:47.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:47:47.822: INFO: namespace configmap-1958 deletion completed in 6.097681762s • [SLOW TEST:6.152 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:47:47.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 13:47:47.887: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59a80b4b-985d-4eca-94b9-dcd4cc0712b2" in namespace "downward-api-9378" to be "success or failure" Mar 25 13:47:47.890: INFO: Pod "downwardapi-volume-59a80b4b-985d-4eca-94b9-dcd4cc0712b2": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.01985ms Mar 25 13:47:49.894: INFO: Pod "downwardapi-volume-59a80b4b-985d-4eca-94b9-dcd4cc0712b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006939696s Mar 25 13:47:51.899: INFO: Pod "downwardapi-volume-59a80b4b-985d-4eca-94b9-dcd4cc0712b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011978751s STEP: Saw pod success Mar 25 13:47:51.899: INFO: Pod "downwardapi-volume-59a80b4b-985d-4eca-94b9-dcd4cc0712b2" satisfied condition "success or failure" Mar 25 13:47:51.902: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-59a80b4b-985d-4eca-94b9-dcd4cc0712b2 container client-container: STEP: delete the pod Mar 25 13:47:51.922: INFO: Waiting for pod downwardapi-volume-59a80b4b-985d-4eca-94b9-dcd4cc0712b2 to disappear Mar 25 13:47:51.926: INFO: Pod downwardapi-volume-59a80b4b-985d-4eca-94b9-dcd4cc0712b2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:47:51.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9378" for this suite. Mar 25 13:47:57.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:47:58.026: INFO: namespace downward-api-9378 deletion completed in 6.096351459s • [SLOW TEST:10.203 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:47:58.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-9fe2083b-0ea6-451c-955e-6c5634a7d476 in namespace container-probe-6132 Mar 25 13:48:02.120: INFO: Started pod liveness-9fe2083b-0ea6-451c-955e-6c5634a7d476 in namespace container-probe-6132 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 13:48:02.123: INFO: Initial restart count of pod liveness-9fe2083b-0ea6-451c-955e-6c5634a7d476 is 0 Mar 25 13:48:22.166: INFO: Restart count of pod container-probe-6132/liveness-9fe2083b-0ea6-451c-955e-6c5634a7d476 is now 1 (20.043201749s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:48:22.208: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "container-probe-6132" for this suite. Mar 25 13:48:28.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:48:28.353: INFO: namespace container-probe-6132 deletion completed in 6.09342502s • [SLOW TEST:30.327 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:48:28.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-hl7r STEP: Creating a pod to test atomic-volume-subpath Mar 25 13:48:28.444: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hl7r" in namespace "subpath-63" to be "success or failure" Mar 25 13:48:28.460: INFO: Pod "pod-subpath-test-configmap-hl7r": Phase="Pending", Reason="", readiness=false. Elapsed: 15.297298ms Mar 25 13:48:30.527: INFO: Pod "pod-subpath-test-configmap-hl7r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082677445s Mar 25 13:48:32.532: INFO: Pod "pod-subpath-test-configmap-hl7r": Phase="Running", Reason="", readiness=true. Elapsed: 4.08737104s Mar 25 13:48:34.536: INFO: Pod "pod-subpath-test-configmap-hl7r": Phase="Running", Reason="", readiness=true. Elapsed: 6.091835873s Mar 25 13:48:36.541: INFO: Pod "pod-subpath-test-configmap-hl7r": Phase="Running", Reason="", readiness=true. Elapsed: 8.096419292s Mar 25 13:48:38.545: INFO: Pod "pod-subpath-test-configmap-hl7r": Phase="Running", Reason="", readiness=true. Elapsed: 10.100700346s Mar 25 13:48:40.549: INFO: Pod "pod-subpath-test-configmap-hl7r": Phase="Running", Reason="", readiness=true. Elapsed: 12.104599567s Mar 25 13:48:42.554: INFO: Pod "pod-subpath-test-configmap-hl7r": Phase="Running", Reason="", readiness=true. Elapsed: 14.109177938s Mar 25 13:48:44.558: INFO: Pod "pod-subpath-test-configmap-hl7r": Phase="Running", Reason="", readiness=true. Elapsed: 16.113071024s Mar 25 13:48:46.562: INFO: Pod "pod-subpath-test-configmap-hl7r": Phase="Running", Reason="", readiness=true. Elapsed: 18.117228724s Mar 25 13:48:48.566: INFO: Pod "pod-subpath-test-configmap-hl7r": Phase="Running", Reason="", readiness=true. Elapsed: 20.121832619s Mar 25 13:48:50.571: INFO: Pod "pod-subpath-test-configmap-hl7r": Phase="Running", Reason="", readiness=true. Elapsed: 22.126411809s Mar 25 13:48:52.575: INFO: Pod "pod-subpath-test-configmap-hl7r": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.130910607s STEP: Saw pod success Mar 25 13:48:52.575: INFO: Pod "pod-subpath-test-configmap-hl7r" satisfied condition "success or failure" Mar 25 13:48:52.579: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-hl7r container test-container-subpath-configmap-hl7r: STEP: delete the pod Mar 25 13:48:52.638: INFO: Waiting for pod pod-subpath-test-configmap-hl7r to disappear Mar 25 13:48:52.642: INFO: Pod pod-subpath-test-configmap-hl7r no longer exists STEP: Deleting pod pod-subpath-test-configmap-hl7r Mar 25 13:48:52.642: INFO: Deleting pod "pod-subpath-test-configmap-hl7r" in namespace "subpath-63" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:48:52.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-63" for this suite. Mar 25 13:48:58.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:48:58.738: INFO: namespace subpath-63 deletion completed in 6.090406376s • [SLOW TEST:30.384 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:48:58.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 25 13:49:03.384: INFO: Successfully updated pod "annotationupdatea95c0766-71aa-4090-9c42-7c3fd2346f8b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:49:05.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6610" for this suite. 
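[editor's note] The projected-downwardAPI spec that just finished ("should update annotations on modification") exercises the kubelet's refresh of downward-API files: the pod's annotations are changed through the API and the mounted file is expected to follow, hence the single "Successfully updated pod" line above. A minimal sketch of the volume wiring, with illustrative names; note a fieldRef to metadata.annotations is only valid in downward-API volumes, not in env vars:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // AnnotationsVolume projects the pod's own annotations into a file;
    // the kubelet rewrites the file when the annotations change, which is
    // the behaviour the spec asserts after updating the pod.
    func AnnotationsVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "annotations",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                            }},
                        },
                    }},
                },
            },
        }
    }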
Mar 25 13:49:27.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:49:27.538: INFO: namespace projected-6610 deletion completed in 22.110096032s • [SLOW TEST:28.800 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:49:27.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 13:49:27.612: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81add1aa-2684-461d-8586-16bce4754515" in namespace "projected-1998" to be "success or failure" Mar 25 13:49:27.626: INFO: Pod "downwardapi-volume-81add1aa-2684-461d-8586-16bce4754515": Phase="Pending", Reason="", readiness=false. Elapsed: 14.099388ms Mar 25 13:49:29.635: INFO: Pod "downwardapi-volume-81add1aa-2684-461d-8586-16bce4754515": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022821543s Mar 25 13:49:31.639: INFO: Pod "downwardapi-volume-81add1aa-2684-461d-8586-16bce4754515": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026525004s STEP: Saw pod success Mar 25 13:49:31.639: INFO: Pod "downwardapi-volume-81add1aa-2684-461d-8586-16bce4754515" satisfied condition "success or failure" Mar 25 13:49:31.641: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-81add1aa-2684-461d-8586-16bce4754515 container client-container: STEP: delete the pod Mar 25 13:49:31.678: INFO: Waiting for pod downwardapi-volume-81add1aa-2684-461d-8586-16bce4754515 to disappear Mar 25 13:49:31.682: INFO: Pod downwardapi-volume-81add1aa-2684-461d-8586-16bce4754515 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:49:31.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1998" for this suite. 
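[editor's note] Both this spec and the plain downward-API variant earlier (namespace downward-api-9378) read a container's memory request back out of a mounted file. A minimal sketch of the resourceFieldRef item, with illustrative values; the container must actually declare the request, and Divisor (defaulting to 1) controls the unit the value is rendered in:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // MemoryRequestFile exposes requests.memory of "client-container" (the
    // container name seen in the log) as a downward-API volume file; the
    // test then reads the file and compares it to the declared request.
    func MemoryRequestFile() (corev1.DownwardAPIVolumeFile, corev1.ResourceRequirements) {
        file := corev1.DownwardAPIVolumeFile{
            Path: "memory_request",
            ResourceFieldRef: &corev1.ResourceFieldSelector{
                ContainerName: "client-container",
                Resource:      "requests.memory",
            },
        }
        reqs := corev1.ResourceRequirements{
            Requests: corev1.ResourceList{
                corev1.ResourceMemory: resource.MustParse("64Mi"), // illustrative value
            },
        }
        return file, reqs
    }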
Mar 25 13:49:37.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:49:37.773: INFO: namespace projected-1998 deletion completed in 6.087212676s • [SLOW TEST:10.234 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:49:37.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Mar 25 13:49:37.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7017' Mar 25 13:49:40.678: INFO: stderr: "" Mar 25 13:49:40.678: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Mar 25 13:49:41.696: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:49:41.696: INFO: Found 0 / 1 Mar 25 13:49:42.683: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:49:42.683: INFO: Found 0 / 1 Mar 25 13:49:43.685: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:49:43.685: INFO: Found 1 / 1 Mar 25 13:49:43.685: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 25 13:49:43.688: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:49:43.688: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for matching strings Mar 25 13:49:43.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xbthq redis-master --namespace=kubectl-7017' Mar 25 13:49:43.799: INFO: stderr: "" Mar 25 13:49:43.799: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 25 Mar 13:49:43.177 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Mar 13:49:43.177 # Server started, Redis version 3.2.12\n1:M 25 Mar 13:49:43.177 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Mar 13:49:43.178 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 25 13:49:43.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xbthq redis-master --namespace=kubectl-7017 --tail=1' Mar 25 13:49:43.905: INFO: stderr: "" Mar 25 13:49:43.905: INFO: stdout: "1:M 25 Mar 13:49:43.178 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 25 13:49:43.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xbthq redis-master --namespace=kubectl-7017 --limit-bytes=1' Mar 25 13:49:44.010: INFO: stderr: "" Mar 25 13:49:44.010: INFO: stdout: " " STEP: exposing timestamps Mar 25 13:49:44.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xbthq redis-master --namespace=kubectl-7017 --tail=1 --timestamps' Mar 25 13:49:44.123: INFO: stderr: "" Mar 25 13:49:44.123: INFO: stdout: "2020-03-25T13:49:43.178172685Z 1:M 25 Mar 13:49:43.178 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 25 13:49:46.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xbthq redis-master --namespace=kubectl-7017 --since=1s' Mar 25 13:49:46.743: INFO: stderr: "" Mar 25 13:49:46.743: INFO: stdout: "" Mar 25 13:49:46.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xbthq redis-master --namespace=kubectl-7017 --since=24h' Mar 25 13:49:46.852: INFO: stderr: "" Mar 25 13:49:46.852: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 25 Mar 13:49:43.177 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Mar 13:49:43.177 # Server started, Redis version 3.2.12\n1:M 25 Mar 13:49:43.177 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Mar 13:49:43.178 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Mar 25 13:49:46.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7017' Mar 25 13:49:46.970: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 13:49:46.970: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 25 13:49:46.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7017' Mar 25 13:49:47.077: INFO: stderr: "No resources found.\n" Mar 25 13:49:47.077: INFO: stdout: "" Mar 25 13:49:47.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7017 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 25 13:49:47.192: INFO: stderr: "" Mar 25 13:49:47.192: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:49:47.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7017" for this suite. 
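[editor's note] The four kubectl invocations above map one-to-one onto PodLogOptions fields, which is handy when the same filtering is needed programmatically. A minimal client-go sketch using the pod and namespace from this spec; DoRaw() is the era-appropriate call (newer client-go takes a context):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        opts := &corev1.PodLogOptions{
            Container:  "redis-master",
            TailLines:  int64Ptr(1), // kubectl logs --tail=1
            Timestamps: true,        // --timestamps
            // LimitBytes:   int64Ptr(1), // --limit-bytes=1
            // SinceSeconds: int64Ptr(1), // --since=1s
        }
        raw, err := client.CoreV1().Pods("kubectl-7017").
            GetLogs("redis-master-xbthq", opts).DoRaw()
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", raw)
    }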
Mar 25 13:49:53.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:49:53.289: INFO: namespace kubectl-7017 deletion completed in 6.086950453s • [SLOW TEST:15.515 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:49:53.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4737 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 13:49:53.355: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 25 13:50:13.473: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.21 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4737 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 13:50:13.473: INFO: >>> kubeConfig: /root/.kube/config I0325 13:50:13.507549 6 log.go:172] (0xc0005653f0) (0xc001622960) Create stream I0325 13:50:13.507581 6 log.go:172] (0xc0005653f0) (0xc001622960) Stream added, broadcasting: 1 I0325 13:50:13.510601 6 log.go:172] (0xc0005653f0) Reply frame received for 1 I0325 13:50:13.510651 6 log.go:172] (0xc0005653f0) (0xc0016c60a0) Create stream I0325 13:50:13.510673 6 log.go:172] (0xc0005653f0) (0xc0016c60a0) Stream added, broadcasting: 3 I0325 13:50:13.511740 6 log.go:172] (0xc0005653f0) Reply frame received for 3 I0325 13:50:13.511776 6 log.go:172] (0xc0005653f0) (0xc0016c6140) Create stream I0325 13:50:13.511789 6 log.go:172] (0xc0005653f0) (0xc0016c6140) Stream added, broadcasting: 5 I0325 13:50:13.512681 6 log.go:172] (0xc0005653f0) Reply frame received for 5 I0325 13:50:14.596069 6 log.go:172] (0xc0005653f0) Data frame received for 3 I0325 13:50:14.596099 6 log.go:172] (0xc0016c60a0) (3) Data frame handling I0325 13:50:14.596121 6 log.go:172] (0xc0016c60a0) (3) Data frame sent I0325 13:50:14.596135 6 log.go:172] (0xc0005653f0) Data frame received for 3 I0325 13:50:14.596151 6 log.go:172] (0xc0016c60a0) (3) Data frame handling I0325 13:50:14.596211 6 log.go:172] (0xc0005653f0) Data frame received for 5 I0325 13:50:14.596232 6 log.go:172] (0xc0016c6140) (5) Data frame handling I0325 13:50:14.598459 6 log.go:172] (0xc0005653f0) Data frame received for 1 I0325 13:50:14.598481 6 log.go:172] (0xc001622960) (1) Data frame handling I0325 
13:50:14.598498 6 log.go:172] (0xc001622960) (1) Data frame sent I0325 13:50:14.598512 6 log.go:172] (0xc0005653f0) (0xc001622960) Stream removed, broadcasting: 1 I0325 13:50:14.598530 6 log.go:172] (0xc0005653f0) Go away received I0325 13:50:14.598632 6 log.go:172] (0xc0005653f0) (0xc001622960) Stream removed, broadcasting: 1 I0325 13:50:14.598648 6 log.go:172] (0xc0005653f0) (0xc0016c60a0) Stream removed, broadcasting: 3 I0325 13:50:14.598655 6 log.go:172] (0xc0005653f0) (0xc0016c6140) Stream removed, broadcasting: 5 Mar 25 13:50:14.598: INFO: Found all expected endpoints: [netserver-0] Mar 25 13:50:14.601: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.188 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4737 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 13:50:14.601: INFO: >>> kubeConfig: /root/.kube/config I0325 13:50:14.635486 6 log.go:172] (0xc00106a420) (0xc002d246e0) Create stream I0325 13:50:14.635511 6 log.go:172] (0xc00106a420) (0xc002d246e0) Stream added, broadcasting: 1 I0325 13:50:14.637780 6 log.go:172] (0xc00106a420) Reply frame received for 1 I0325 13:50:14.637836 6 log.go:172] (0xc00106a420) (0xc003126000) Create stream I0325 13:50:14.637861 6 log.go:172] (0xc00106a420) (0xc003126000) Stream added, broadcasting: 3 I0325 13:50:14.638999 6 log.go:172] (0xc00106a420) Reply frame received for 3 I0325 13:50:14.639027 6 log.go:172] (0xc00106a420) (0xc001622aa0) Create stream I0325 13:50:14.639036 6 log.go:172] (0xc00106a420) (0xc001622aa0) Stream added, broadcasting: 5 I0325 13:50:14.639951 6 log.go:172] (0xc00106a420) Reply frame received for 5 I0325 13:50:15.723899 6 log.go:172] (0xc00106a420) Data frame received for 3 I0325 13:50:15.723941 6 log.go:172] (0xc003126000) (3) Data frame handling I0325 13:50:15.723968 6 log.go:172] (0xc003126000) (3) Data frame sent I0325 13:50:15.724152 6 log.go:172] (0xc00106a420) Data frame received for 5 I0325 13:50:15.724205 6 log.go:172] (0xc001622aa0) (5) Data frame handling I0325 13:50:15.724243 6 log.go:172] (0xc00106a420) Data frame received for 3 I0325 13:50:15.724274 6 log.go:172] (0xc003126000) (3) Data frame handling I0325 13:50:15.727061 6 log.go:172] (0xc00106a420) Data frame received for 1 I0325 13:50:15.727101 6 log.go:172] (0xc002d246e0) (1) Data frame handling I0325 13:50:15.727137 6 log.go:172] (0xc002d246e0) (1) Data frame sent I0325 13:50:15.727161 6 log.go:172] (0xc00106a420) (0xc002d246e0) Stream removed, broadcasting: 1 I0325 13:50:15.727205 6 log.go:172] (0xc00106a420) Go away received I0325 13:50:15.727385 6 log.go:172] (0xc00106a420) (0xc002d246e0) Stream removed, broadcasting: 1 I0325 13:50:15.727420 6 log.go:172] (0xc00106a420) (0xc003126000) Stream removed, broadcasting: 3 I0325 13:50:15.727433 6 log.go:172] (0xc00106a420) (0xc001622aa0) Stream removed, broadcasting: 5 Mar 25 13:50:15.727: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:50:15.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4737" for this suite. 
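[editor's note] The two ExecWithOptions blocks above run `echo hostName | nc -w 1 -u <podIP> 8081` from a host-network test pod and treat any non-blank reply as success. The same probe in plain Go, for anyone checking pod UDP reachability by hand; it must run from somewhere that can route to the pod network (as the host-exec pod does), and the address is the first netserver IP from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.Dial("udp", "10.244.1.21:8081")
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        fmt.Fprintln(conn, "hostName")                        // same payload nc sends
        conn.SetReadDeadline(time.Now().Add(1 * time.Second)) // mirrors nc -w 1
        buf := make([]byte, 1024)
        n, err := conn.Read(buf)
        if err != nil {
            panic(err) // a timeout here is the failure mode the e2e check guards against
        }
        fmt.Printf("reply: %q\n", buf[:n]) // the netserver answers with its own name
    }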
Mar 25 13:50:37.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:50:37.868: INFO: namespace pod-network-test-4737 deletion completed in 22.135804799s • [SLOW TEST:44.579 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:50:37.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the busybox-main-container Mar 25 13:50:41.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-1b7eed5f-ecc1-48c4-99ed-ff66526659b0 -c busybox-main-container --namespace=emptydir-8329 -- cat /usr/share/volumeshare/shareddata.txt' Mar 25 13:50:42.157: INFO: stderr: "I0325 13:50:42.088938 1261 log.go:172] (0xc000a34630) (0xc0006c2be0) Create stream\nI0325 13:50:42.089034 1261 log.go:172] (0xc000a34630) (0xc0006c2be0) Stream added, broadcasting: 1\nI0325 13:50:42.092058 1261 log.go:172] (0xc000a34630) Reply frame received for 1\nI0325 13:50:42.092207 1261 log.go:172] (0xc000a34630) (0xc0008d6000) Create stream\nI0325 13:50:42.092294 1261 log.go:172] (0xc000a34630) (0xc0008d6000) Stream added, broadcasting: 3\nI0325 13:50:42.094033 1261 log.go:172] (0xc000a34630) Reply frame received for 3\nI0325 13:50:42.094065 1261 log.go:172] (0xc000a34630) (0xc0008d60a0) Create stream\nI0325 13:50:42.094076 1261 log.go:172] (0xc000a34630) (0xc0008d60a0) Stream added, broadcasting: 5\nI0325 13:50:42.095187 1261 log.go:172] (0xc000a34630) Reply frame received for 5\nI0325 13:50:42.150153 1261 log.go:172] (0xc000a34630) Data frame received for 5\nI0325 13:50:42.150216 1261 log.go:172] (0xc0008d60a0) (5) Data frame handling\nI0325 13:50:42.150256 1261 log.go:172] (0xc000a34630) Data frame received for 3\nI0325 13:50:42.150279 1261 log.go:172] (0xc0008d6000) (3) Data frame handling\nI0325 13:50:42.150298 1261 log.go:172] (0xc0008d6000) (3) Data frame sent\nI0325 13:50:42.150318 1261 log.go:172] (0xc000a34630) Data frame received for 3\nI0325 13:50:42.150336 1261 log.go:172] (0xc0008d6000) (3) Data frame handling\nI0325 13:50:42.152073 1261 log.go:172] (0xc000a34630) Data frame received for 1\nI0325 13:50:42.152113 1261 log.go:172] (0xc0006c2be0) (1) Data frame handling\nI0325 13:50:42.152144 1261 log.go:172] (0xc0006c2be0) (1) Data frame sent\nI0325 13:50:42.152187 1261 log.go:172] (0xc000a34630) (0xc0006c2be0) 
Stream removed, broadcasting: 1\nI0325 13:50:42.152224 1261 log.go:172] (0xc000a34630) Go away received\nI0325 13:50:42.152612 1261 log.go:172] (0xc000a34630) (0xc0006c2be0) Stream removed, broadcasting: 1\nI0325 13:50:42.152650 1261 log.go:172] (0xc000a34630) (0xc0008d6000) Stream removed, broadcasting: 3\nI0325 13:50:42.152670 1261 log.go:172] (0xc000a34630) (0xc0008d60a0) Stream removed, broadcasting: 5\n" Mar 25 13:50:42.157: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:50:42.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8329" for this suite. Mar 25 13:50:48.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:50:48.247: INFO: namespace emptydir-8329 deletion completed in 6.084720306s • [SLOW TEST:10.378 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:50:48.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4290 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 13:50:48.328: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 25 13:51:14.408: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.22:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4290 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 13:51:14.408: INFO: >>> kubeConfig: /root/.kube/config I0325 13:51:14.435384 6 log.go:172] (0xc001390840) (0xc0023b63c0) Create stream I0325 13:51:14.435414 6 log.go:172] (0xc001390840) (0xc0023b63c0) Stream added, broadcasting: 1 I0325 13:51:14.438101 6 log.go:172] (0xc001390840) Reply frame received for 1 I0325 13:51:14.438142 6 log.go:172] (0xc001390840) (0xc000678be0) Create stream I0325 13:51:14.438154 6 log.go:172] (0xc001390840) (0xc000678be0) Stream added, broadcasting: 3 I0325 13:51:14.439055 6 log.go:172] (0xc001390840) Reply frame received for 3 I0325 13:51:14.439090 6 log.go:172] (0xc001390840) (0xc0023b6460) Create stream I0325 13:51:14.439106 6 log.go:172] (0xc001390840) (0xc0023b6460) Stream added, broadcasting: 5 I0325 13:51:14.440287 6 log.go:172] (0xc001390840) 
Reply frame received for 5 I0325 13:51:14.523292 6 log.go:172] (0xc001390840) Data frame received for 5 I0325 13:51:14.523327 6 log.go:172] (0xc0023b6460) (5) Data frame handling I0325 13:51:14.523360 6 log.go:172] (0xc001390840) Data frame received for 3 I0325 13:51:14.523397 6 log.go:172] (0xc000678be0) (3) Data frame handling I0325 13:51:14.523422 6 log.go:172] (0xc000678be0) (3) Data frame sent I0325 13:51:14.523433 6 log.go:172] (0xc001390840) Data frame received for 3 I0325 13:51:14.523449 6 log.go:172] (0xc000678be0) (3) Data frame handling I0325 13:51:14.524967 6 log.go:172] (0xc001390840) Data frame received for 1 I0325 13:51:14.524992 6 log.go:172] (0xc0023b63c0) (1) Data frame handling I0325 13:51:14.525005 6 log.go:172] (0xc0023b63c0) (1) Data frame sent I0325 13:51:14.525023 6 log.go:172] (0xc001390840) (0xc0023b63c0) Stream removed, broadcasting: 1 I0325 13:51:14.525074 6 log.go:172] (0xc001390840) Go away received I0325 13:51:14.525301 6 log.go:172] (0xc001390840) (0xc0023b63c0) Stream removed, broadcasting: 1 I0325 13:51:14.525353 6 log.go:172] (0xc001390840) (0xc000678be0) Stream removed, broadcasting: 3 I0325 13:51:14.525370 6 log.go:172] (0xc001390840) (0xc0023b6460) Stream removed, broadcasting: 5 Mar 25 13:51:14.525: INFO: Found all expected endpoints: [netserver-0] Mar 25 13:51:14.528: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.191:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4290 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 13:51:14.529: INFO: >>> kubeConfig: /root/.kube/config I0325 13:51:14.561634 6 log.go:172] (0xc0018ce6e0) (0xc003158000) Create stream I0325 13:51:14.561663 6 log.go:172] (0xc0018ce6e0) (0xc003158000) Stream added, broadcasting: 1 I0325 13:51:14.563903 6 log.go:172] (0xc0018ce6e0) Reply frame received for 1 I0325 13:51:14.563947 6 log.go:172] (0xc0018ce6e0) (0xc000678dc0) Create stream I0325 13:51:14.563963 6 log.go:172] (0xc0018ce6e0) (0xc000678dc0) Stream added, broadcasting: 3 I0325 13:51:14.564877 6 log.go:172] (0xc0018ce6e0) Reply frame received for 3 I0325 13:51:14.564929 6 log.go:172] (0xc0018ce6e0) (0xc0031580a0) Create stream I0325 13:51:14.564945 6 log.go:172] (0xc0018ce6e0) (0xc0031580a0) Stream added, broadcasting: 5 I0325 13:51:14.566158 6 log.go:172] (0xc0018ce6e0) Reply frame received for 5 I0325 13:51:14.628809 6 log.go:172] (0xc0018ce6e0) Data frame received for 5 I0325 13:51:14.628840 6 log.go:172] (0xc0031580a0) (5) Data frame handling I0325 13:51:14.628868 6 log.go:172] (0xc0018ce6e0) Data frame received for 3 I0325 13:51:14.628917 6 log.go:172] (0xc000678dc0) (3) Data frame handling I0325 13:51:14.628938 6 log.go:172] (0xc000678dc0) (3) Data frame sent I0325 13:51:14.628956 6 log.go:172] (0xc0018ce6e0) Data frame received for 3 I0325 13:51:14.628965 6 log.go:172] (0xc000678dc0) (3) Data frame handling I0325 13:51:14.630299 6 log.go:172] (0xc0018ce6e0) Data frame received for 1 I0325 13:51:14.630317 6 log.go:172] (0xc003158000) (1) Data frame handling I0325 13:51:14.630328 6 log.go:172] (0xc003158000) (1) Data frame sent I0325 13:51:14.630339 6 log.go:172] (0xc0018ce6e0) (0xc003158000) Stream removed, broadcasting: 1 I0325 13:51:14.630352 6 log.go:172] (0xc0018ce6e0) Go away received I0325 13:51:14.630488 6 log.go:172] (0xc0018ce6e0) (0xc003158000) Stream removed, broadcasting: 1 I0325 13:51:14.630515 6 log.go:172] (0xc0018ce6e0) (0xc000678dc0) Stream removed, 
broadcasting: 3 I0325 13:51:14.630523 6 log.go:172] (0xc0018ce6e0) (0xc0031580a0) Stream removed, broadcasting: 5 Mar 25 13:51:14.630: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:51:14.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4290" for this suite. Mar 25 13:51:36.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:51:36.758: INFO: namespace pod-network-test-4290 deletion completed in 22.124455198s • [SLOW TEST:48.511 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:51:36.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-07d0cdfe-8cb5-4665-9646-bfa8a56a4fa7 STEP: Creating a pod to test consume configMaps Mar 25 13:51:36.833: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd0bc605-af72-44b6-b146-776c97112488" in namespace "configmap-4314" to be "success or failure" Mar 25 13:51:36.878: INFO: Pod "pod-configmaps-dd0bc605-af72-44b6-b146-776c97112488": Phase="Pending", Reason="", readiness=false. Elapsed: 45.390393ms Mar 25 13:51:38.882: INFO: Pod "pod-configmaps-dd0bc605-af72-44b6-b146-776c97112488": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049286655s Mar 25 13:51:40.886: INFO: Pod "pod-configmaps-dd0bc605-af72-44b6-b146-776c97112488": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.053171689s STEP: Saw pod success Mar 25 13:51:40.886: INFO: Pod "pod-configmaps-dd0bc605-af72-44b6-b146-776c97112488" satisfied condition "success or failure" Mar 25 13:51:40.889: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-dd0bc605-af72-44b6-b146-776c97112488 container configmap-volume-test: STEP: delete the pod Mar 25 13:51:40.907: INFO: Waiting for pod pod-configmaps-dd0bc605-af72-44b6-b146-776c97112488 to disappear Mar 25 13:51:40.911: INFO: Pod pod-configmaps-dd0bc605-af72-44b6-b146-776c97112488 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:51:40.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4314" for this suite. Mar 25 13:51:46.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:51:47.018: INFO: namespace configmap-4314 deletion completed in 6.104474664s • [SLOW TEST:10.260 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:51:47.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 25 13:51:51.620: INFO: Successfully updated pod "labelsupdatefccf3257-1014-4171-b84a-76c0fc6cbfbc" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:51:53.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8302" for this suite. 
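[editor's note] This labels-on-modification spec is the twin of the annotations spec above (namespace projected-6610); the only change in the volume wiring is the field path. Shown as a complete item for clarity:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // LabelsFile mirrors the annotations example with metadata.labels;
    // the kubelet refreshes the mounted file after the pod's labels change.
    var LabelsFile = corev1.DownwardAPIVolumeFile{
        Path:     "labels",
        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
    }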
Mar 25 13:52:15.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:52:15.754: INFO: namespace projected-8302 deletion completed in 22.101705032s • [SLOW TEST:28.735 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:52:15.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 25 13:52:15.881: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:15.886: INFO: Number of nodes with available pods: 0 Mar 25 13:52:15.886: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:52:16.952: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:16.955: INFO: Number of nodes with available pods: 0 Mar 25 13:52:16.955: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:52:17.891: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:17.894: INFO: Number of nodes with available pods: 0 Mar 25 13:52:17.894: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:52:18.891: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:18.915: INFO: Number of nodes with available pods: 0 Mar 25 13:52:18.915: INFO: Node iruya-worker is running more than one daemon pod Mar 25 13:52:19.891: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:19.893: INFO: Number of nodes with available pods: 2 Mar 25 13:52:19.893: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
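[editor's note] The repeated "DaemonSet pods can't tolerate node iruya-control-plane" lines above, and in the revive-polling that resumes below, are expected: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint and the test DaemonSet declares no matching toleration, so the framework excludes that node from its counts. A DaemonSet that should also land on such nodes would declare the following, sketched with the taint key and effect taken from the log:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // MasterToleration lets a DaemonSet pod schedule onto nodes carrying
    // the node-role.kubernetes.io/master:NoSchedule taint shown in the log.
    var MasterToleration = corev1.Toleration{
        Key:      "node-role.kubernetes.io/master",
        Operator: corev1.TolerationOpExists,
        Effect:   corev1.TaintEffectNoSchedule,
    }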
Mar 25 13:52:19.940: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:19.942: INFO: Number of nodes with available pods: 1 Mar 25 13:52:19.942: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:20.949: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:20.952: INFO: Number of nodes with available pods: 1 Mar 25 13:52:20.952: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:21.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:21.950: INFO: Number of nodes with available pods: 1 Mar 25 13:52:21.950: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:22.948: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:22.951: INFO: Number of nodes with available pods: 1 Mar 25 13:52:22.951: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:23.946: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:23.949: INFO: Number of nodes with available pods: 1 Mar 25 13:52:23.949: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:24.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:24.950: INFO: Number of nodes with available pods: 1 Mar 25 13:52:24.950: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:25.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:25.950: INFO: Number of nodes with available pods: 1 Mar 25 13:52:25.950: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:26.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:26.951: INFO: Number of nodes with available pods: 1 Mar 25 13:52:26.951: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:27.948: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:27.951: INFO: Number of nodes with available pods: 1 Mar 25 13:52:27.952: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:28.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:28.950: INFO: Number of nodes with available pods: 1 Mar 25 13:52:28.950: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:29.946: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:29.949: INFO: Number of nodes with available pods: 1 Mar 25 13:52:29.949: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:30.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:30.951: INFO: Number of nodes with available pods: 1 Mar 25 13:52:30.951: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:32.007: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:32.011: INFO: Number of nodes with available pods: 1 Mar 25 13:52:32.011: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:32.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:32.950: INFO: Number of nodes with available pods: 1 Mar 25 13:52:32.950: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:33.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:33.949: INFO: Number of nodes with available pods: 1 Mar 25 13:52:33.949: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:34.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:34.951: INFO: Number of nodes with available pods: 1 Mar 25 13:52:34.951: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 13:52:35.946: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 13:52:35.949: INFO: Number of nodes with available pods: 2 Mar 25 13:52:35.949: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3248, will wait for the garbage collector to delete the pods Mar 25 13:52:36.010: INFO: Deleting DaemonSet.extensions daemon-set took: 7.202869ms Mar 25 13:52:36.310: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.31048ms Mar 25 13:52:42.220: INFO: Number of nodes with available pods: 0 Mar 25 13:52:42.220: INFO: Number of running nodes: 0, number of available pods: 0 Mar 25 13:52:42.222: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3248/daemonsets","resourceVersion":"1784226"},"items":null} Mar 25 13:52:42.224: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3248/pods","resourceVersion":"1784226"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:52:42.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3248" for this 
suite. Mar 25 13:52:48.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:52:48.361: INFO: namespace daemonsets-3248 deletion completed in 6.126677941s • [SLOW TEST:32.605 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:52:48.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:53:48.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5097" for this suite. 
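The long run of "DaemonSet pods can't tolerate node iruya-control-plane" messages above is expected rather than an error: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint, the test DaemonSet's pods declare no matching toleration, and so the framework excludes that node when counting available daemon pods. A minimal sketch of the toleration a DaemonSet pod spec would need in order to also schedule onto that node; the pod spec around it is illustrative, not the test's own:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Matches the taint logged above:
	// {Key:node-role.kubernetes.io/master Value: Effect:NoSchedule}.
	// Operator Exists tolerates the taint regardless of its value.
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	podSpec := corev1.PodSpec{
		Tolerations: []corev1.Toleration{tol},
		Containers: []corev1.Container{
			{Name: "app", Image: "docker.io/library/nginx:1.14-alpine"},
		},
	}
	fmt.Printf("tolerations: %+v\n", podSpec.Tolerations)
}

Without such a toleration, the "skip checking this node" lines are correct behavior on this cluster: the daemon is only expected on iruya-worker and iruya-worker2.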
Mar 25 13:54:10.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:54:10.543: INFO: namespace container-probe-5097 deletion completed in 22.094867873s • [SLOW TEST:82.182 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:54:10.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Mar 25 13:54:10.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5818' Mar 25 13:54:10.863: INFO: stderr: "" Mar 25 13:54:10.863: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 25 13:54:11.868: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:54:11.868: INFO: Found 0 / 1 Mar 25 13:54:12.870: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:54:12.870: INFO: Found 0 / 1 Mar 25 13:54:13.868: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:54:13.868: INFO: Found 0 / 1 Mar 25 13:54:14.868: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:54:14.868: INFO: Found 1 / 1 Mar 25 13:54:14.868: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 25 13:54:14.872: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:54:14.872: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 25 13:54:14.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-bc8q2 --namespace=kubectl-5818 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 25 13:54:14.972: INFO: stderr: "" Mar 25 13:54:14.972: INFO: stdout: "pod/redis-master-bc8q2 patched\n" STEP: checking annotations Mar 25 13:54:14.974: INFO: Selector matched 1 pods for map[app:redis] Mar 25 13:54:14.974: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:54:14.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5818" for this suite. 
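The patch step above is a strategic merge patch that adds the annotation x=y to the running pod without touching the rest of its metadata. A sketch of the same call through client-go instead of the kubectl binary, written against the pre-1.18 Patch signature matching this cluster's vintage (later client-go releases add a context.Context and metav1.PatchOptions); the pod and namespace names are taken from the log above:

package main

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the test run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Equivalent of:
	//   kubectl patch pod redis-master-bc8q2 -p '{"metadata":{"annotations":{"x":"y"}}}'
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	_, err = clientset.CoreV1().Pods("kubectl-5818").
		Patch("redis-master-bc8q2", types.StrategicMergePatchType, patch)
	if err != nil {
		panic(err)
	}
}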
Mar 25 13:54:37.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:54:37.082: INFO: namespace kubectl-5818 deletion completed in 22.10540742s • [SLOW TEST:26.539 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:54:37.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Mar 25 13:54:37.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 25 13:54:37.269: INFO: stderr: "" Mar 25 13:54:37.269: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:54:37.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4778" for this suite. 
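The \x1b[0;32m...\x1b[0m bytes in the captured cluster-info stdout are ANSI color codes, preserved verbatim because the framework quotes raw output. The "server preferred namespaced resources" wait that follows every namespace teardown is an API discovery call; a sketch of issuing the same query with client-go's discovery client, in the same pre-1.18 style and with the clientset construction assumed as elsewhere:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// The same discovery query the framework retries for up to 30s
	// while namespaces are being torn down.
	lists, err := clientset.Discovery().ServerPreferredNamespacedResources()
	if err != nil {
		panic(err)
	}
	for _, l := range lists {
		fmt.Println(l.GroupVersion, len(l.APIResources))
	}
}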
Mar 25 13:54:43.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:54:43.364: INFO: namespace kubectl-4778 deletion completed in 6.091524437s • [SLOW TEST:6.282 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:54:43.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Mar 25 13:54:43.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2408' Mar 25 13:54:43.701: INFO: stderr: "" Mar 25 13:54:43.701: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 25 13:54:43.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2408' Mar 25 13:54:43.851: INFO: stderr: "" Mar 25 13:54:43.851: INFO: stdout: "update-demo-nautilus-r4hb5 update-demo-nautilus-t9z4n " Mar 25 13:54:43.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r4hb5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2408' Mar 25 13:54:43.942: INFO: stderr: "" Mar 25 13:54:43.942: INFO: stdout: "" Mar 25 13:54:43.942: INFO: update-demo-nautilus-r4hb5 is created but not running Mar 25 13:54:48.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2408' Mar 25 13:54:49.064: INFO: stderr: "" Mar 25 13:54:49.064: INFO: stdout: "update-demo-nautilus-r4hb5 update-demo-nautilus-t9z4n " Mar 25 13:54:49.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r4hb5 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2408' Mar 25 13:54:49.153: INFO: stderr: "" Mar 25 13:54:49.153: INFO: stdout: "true" Mar 25 13:54:49.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r4hb5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2408' Mar 25 13:54:49.244: INFO: stderr: "" Mar 25 13:54:49.244: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 25 13:54:49.244: INFO: validating pod update-demo-nautilus-r4hb5 Mar 25 13:54:49.249: INFO: got data: { "image": "nautilus.jpg" } Mar 25 13:54:49.249: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 13:54:49.249: INFO: update-demo-nautilus-r4hb5 is verified up and running Mar 25 13:54:49.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t9z4n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2408' Mar 25 13:54:49.341: INFO: stderr: "" Mar 25 13:54:49.341: INFO: stdout: "true" Mar 25 13:54:49.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t9z4n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2408' Mar 25 13:54:49.436: INFO: stderr: "" Mar 25 13:54:49.436: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 25 13:54:49.436: INFO: validating pod update-demo-nautilus-t9z4n Mar 25 13:54:49.440: INFO: got data: { "image": "nautilus.jpg" } Mar 25 13:54:49.440: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 13:54:49.440: INFO: update-demo-nautilus-t9z4n is verified up and running STEP: rolling-update to new replication controller Mar 25 13:54:49.443: INFO: scanned /root for discovery docs: Mar 25 13:54:49.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2408' Mar 25 13:55:11.997: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 25 13:55:11.997: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 25 13:55:11.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2408' Mar 25 13:55:12.090: INFO: stderr: "" Mar 25 13:55:12.090: INFO: stdout: "update-demo-kitten-jwnlt update-demo-kitten-wpcfd update-demo-nautilus-t9z4n " STEP: Replicas for name=update-demo: expected=2 actual=3 Mar 25 13:55:17.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2408' Mar 25 13:55:17.190: INFO: stderr: "" Mar 25 13:55:17.190: INFO: stdout: "update-demo-kitten-jwnlt update-demo-kitten-wpcfd " Mar 25 13:55:17.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jwnlt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2408' Mar 25 13:55:17.277: INFO: stderr: "" Mar 25 13:55:17.277: INFO: stdout: "true" Mar 25 13:55:17.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jwnlt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2408' Mar 25 13:55:17.375: INFO: stderr: "" Mar 25 13:55:17.375: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 25 13:55:17.375: INFO: validating pod update-demo-kitten-jwnlt Mar 25 13:55:17.379: INFO: got data: { "image": "kitten.jpg" } Mar 25 13:55:17.379: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 25 13:55:17.379: INFO: update-demo-kitten-jwnlt is verified up and running Mar 25 13:55:17.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wpcfd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2408' Mar 25 13:55:17.468: INFO: stderr: "" Mar 25 13:55:17.468: INFO: stdout: "true" Mar 25 13:55:17.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wpcfd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2408' Mar 25 13:55:17.550: INFO: stderr: "" Mar 25 13:55:17.550: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 25 13:55:17.550: INFO: validating pod update-demo-kitten-wpcfd Mar 25 13:55:17.555: INFO: got data: { "image": "kitten.jpg" } Mar 25 13:55:17.555: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 25 13:55:17.555: INFO: update-demo-kitten-wpcfd is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:55:17.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2408" for this suite. 
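As the stderr above notes, kubectl rolling-update is deprecated: it drives the rollout client-side by scaling the new replication controller up and the old one down, exactly the sequence the stdout transcript shows. The replacement is a Deployment, where changing the pod template triggers the same staged rollout server-side. A rough sketch under that substitution; the Deployment name update-demo is hypothetical, since the test itself works with plain replication controllers:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Rough equivalent of `kubectl set image`: update the pod template
	// and let the Deployment controller do the scale-up/scale-down dance
	// that rolling-update performed client-side.
	deploy, err := clientset.AppsV1().Deployments("kubectl-2408").
		Get("update-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	deploy.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
	if _, err := clientset.AppsV1().Deployments("kubectl-2408").Update(deploy); err != nil {
		panic(err)
	}
}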
Mar 25 13:55:39.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:55:39.652: INFO: namespace kubectl-2408 deletion completed in 22.094391941s • [SLOW TEST:56.287 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:55:39.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 25 13:55:39.693: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:55:46.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7374" for this suite. 
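The "PodSpec: initContainers in spec.initContainers" line above refers to init containers: on a RestartAlways pod they run one at a time, in order, and each must exit successfully before any regular container starts. A minimal sketch of such a spec (images and commands illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyAlways,
		// Init containers run sequentially; each must exit 0 before
		// the main containers below are started.
		InitContainers: []corev1.Container{
			{Name: "init-1", Image: "busybox", Command: []string{"true"}},
			{Name: "init-2", Image: "busybox", Command: []string{"true"}},
		},
		Containers: []corev1.Container{
			{Name: "run-1", Image: "busybox", Command: []string{"sleep", "3600"}},
		},
	}
	fmt.Println(len(pod.InitContainers), "init containers gate", pod.Containers[0].Name)
}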
Mar 25 13:56:08.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:56:08.535: INFO: namespace init-container-7374 deletion completed in 22.105148002s • [SLOW TEST:28.882 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:56:08.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 13:56:08.581: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 25 13:56:08.598: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 25 13:56:13.602: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 25 13:56:13.603: INFO: Creating deployment "test-rolling-update-deployment" Mar 25 13:56:13.610: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 25 13:56:13.618: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 25 13:56:15.652: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 25 13:56:15.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741373, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741373, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741373, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741373, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 13:56:17.663: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 25 13:56:17.671: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-7034,SelfLink:/apis/apps/v1/namespaces/deployment-7034/deployments/test-rolling-update-deployment,UID:b3baeb9f-7f5e-44f9-bd6d-7c0be637f6c7,ResourceVersion:1784953,Generation:1,CreationTimestamp:2020-03-25 13:56:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-25 13:56:13 +0000 UTC 2020-03-25 13:56:13 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-25 13:56:16 +0000 UTC 2020-03-25 13:56:13 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 25 13:56:17.674: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-7034,SelfLink:/apis/apps/v1/namespaces/deployment-7034/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:3227abc2-c2f8-4c19-b8fb-4b250a64177f,ResourceVersion:1784942,Generation:1,CreationTimestamp:2020-03-25 13:56:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b3baeb9f-7f5e-44f9-bd6d-7c0be637f6c7 0xc0032642e7 0xc0032642e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 25 13:56:17.674: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 25 13:56:17.674: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-7034,SelfLink:/apis/apps/v1/namespaces/deployment-7034/replicasets/test-rolling-update-controller,UID:8aa7463d-1c2c-4d21-b652-d11965c13eee,ResourceVersion:1784951,Generation:2,CreationTimestamp:2020-03-25 13:56:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 
2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b3baeb9f-7f5e-44f9-bd6d-7c0be637f6c7 0xc003264217 0xc003264218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 25 13:56:17.677: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-xdp8j" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-xdp8j,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-7034,SelfLink:/api/v1/namespaces/deployment-7034/pods/test-rolling-update-deployment-79f6b9d75c-xdp8j,UID:51530e0a-f802-4ace-9118-fbbf3df55c48,ResourceVersion:1784941,Generation:0,CreationTimestamp:2020-03-25 13:56:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 3227abc2-c2f8-4c19-b8fb-4b250a64177f 0xc003264bd7 0xc003264bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-fbrz5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fbrz5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-fbrz5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003264c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc003264c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:56:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:56:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:56:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:56:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.31,StartTime:2020-03-25 13:56:13 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-25 13:56:16 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://edd5fbbacaa883051b3ee3558c9f245bba5f46709d8fd15455db42b8c74016ae}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:56:17.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7034" for this suite. 
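The deployment dump above reports Strategy Type RollingUpdate with MaxUnavailable and MaxSurge both 25%, the API defaults: during the rollout at most a quarter of the desired pods may be unavailable, and at most a quarter extra may exist. The same fragment built explicitly, as a sketch rather than the test's own code:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	strategy := appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			// Bounds on the rollout: how far below the desired replica
			// count, and how far above it, the controller may go.
			MaxUnavailable: &maxUnavailable,
			MaxSurge:       &maxSurge,
		},
	}
	fmt.Printf("%+v\n", strategy)
}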
Mar 25 13:56:23.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:56:23.815: INFO: namespace deployment-7034 deletion completed in 6.134913576s • [SLOW TEST:15.280 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:56:23.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-b9cff6a1-3a06-4501-9d6e-0a9953c798b9 STEP: Creating a pod to test consume configMaps Mar 25 13:56:23.918: INFO: Waiting up to 5m0s for pod "pod-configmaps-8e9efd65-6712-4d32-a303-6a20cd725516" in namespace "configmap-2694" to be "success or failure" Mar 25 13:56:23.922: INFO: Pod "pod-configmaps-8e9efd65-6712-4d32-a303-6a20cd725516": Phase="Pending", Reason="", readiness=false. Elapsed: 3.952858ms Mar 25 13:56:25.926: INFO: Pod "pod-configmaps-8e9efd65-6712-4d32-a303-6a20cd725516": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00816572s Mar 25 13:56:27.930: INFO: Pod "pod-configmaps-8e9efd65-6712-4d32-a303-6a20cd725516": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012516698s STEP: Saw pod success Mar 25 13:56:27.930: INFO: Pod "pod-configmaps-8e9efd65-6712-4d32-a303-6a20cd725516" satisfied condition "success or failure" Mar 25 13:56:27.934: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-8e9efd65-6712-4d32-a303-6a20cd725516 container configmap-volume-test: STEP: delete the pod Mar 25 13:56:27.953: INFO: Waiting for pod pod-configmaps-8e9efd65-6712-4d32-a303-6a20cd725516 to disappear Mar 25 13:56:27.978: INFO: Pod pod-configmaps-8e9efd65-6712-4d32-a303-6a20cd725516 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:56:27.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2694" for this suite. 
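In the "with mappings" variant above, the ConfigMap is mounted through a volume with an explicit items list, so a chosen key is projected to a chosen file path instead of the default file named after the key at the mount root. A sketch of such a volume; the ConfigMap name and key are illustrative stand-ins for the generated ones in the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-volume-map", // hypothetical name
				},
				// Map key "data-2" to the file "path/to/data-2" inside
				// the mount, rather than to a root-level file "data-2".
				Items: []corev1.KeyToPath{
					{Key: "data-2", Path: "path/to/data-2"},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}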
Mar 25 13:56:33.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:56:34.079: INFO: namespace configmap-2694 deletion completed in 6.098264495s • [SLOW TEST:10.264 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:56:34.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Mar 25 13:56:34.148: INFO: Waiting up to 5m0s for pod "var-expansion-c65676f5-67a6-4056-a4a4-d29a89ce63d9" in namespace "var-expansion-6354" to be "success or failure" Mar 25 13:56:34.162: INFO: Pod "var-expansion-c65676f5-67a6-4056-a4a4-d29a89ce63d9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.344135ms Mar 25 13:56:36.176: INFO: Pod "var-expansion-c65676f5-67a6-4056-a4a4-d29a89ce63d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027694407s Mar 25 13:56:38.180: INFO: Pod "var-expansion-c65676f5-67a6-4056-a4a4-d29a89ce63d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031445347s STEP: Saw pod success Mar 25 13:56:38.180: INFO: Pod "var-expansion-c65676f5-67a6-4056-a4a4-d29a89ce63d9" satisfied condition "success or failure" Mar 25 13:56:38.183: INFO: Trying to get logs from node iruya-worker pod var-expansion-c65676f5-67a6-4056-a4a4-d29a89ce63d9 container dapi-container: STEP: delete the pod Mar 25 13:56:38.224: INFO: Waiting for pod var-expansion-c65676f5-67a6-4056-a4a4-d29a89ce63d9 to disappear Mar 25 13:56:38.242: INFO: Pod var-expansion-c65676f5-67a6-4056-a4a4-d29a89ce63d9 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:56:38.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6354" for this suite. 
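The substitution test above exercises Kubernetes variable expansion: $(VAR) references in a container's command and args are replaced from the container's own environment by the kubelet before the process starts, with no shell involved. A sketch, with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "dapi-container",
		Image: "busybox",
		Env: []corev1.EnvVar{
			{Name: "MESSAGE", Value: "hello from the environment"},
		},
		// $(MESSAGE) is expanded from Env above before exec; a literal
		// $(...) can be preserved by escaping it as $$(...).
		Command: []string{"/bin/echo", "$(MESSAGE)"},
	}
	fmt.Println(c.Command)
}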
Mar 25 13:56:44.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:56:44.350: INFO: namespace var-expansion-6354 deletion completed in 6.104817076s • [SLOW TEST:10.270 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:56:44.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-c4c830da-e21f-43c0-9c83-80b7c182ce85 STEP: Creating a pod to test consume secrets Mar 25 13:56:44.439: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7defa50a-9eef-4e18-a0bc-697ce792b968" in namespace "projected-1808" to be "success or failure" Mar 25 13:56:44.443: INFO: Pod "pod-projected-secrets-7defa50a-9eef-4e18-a0bc-697ce792b968": Phase="Pending", Reason="", readiness=false. Elapsed: 3.61155ms Mar 25 13:56:46.447: INFO: Pod "pod-projected-secrets-7defa50a-9eef-4e18-a0bc-697ce792b968": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007518666s Mar 25 13:56:48.451: INFO: Pod "pod-projected-secrets-7defa50a-9eef-4e18-a0bc-697ce792b968": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011893747s STEP: Saw pod success Mar 25 13:56:48.451: INFO: Pod "pod-projected-secrets-7defa50a-9eef-4e18-a0bc-697ce792b968" satisfied condition "success or failure" Mar 25 13:56:48.454: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-7defa50a-9eef-4e18-a0bc-697ce792b968 container projected-secret-volume-test: STEP: delete the pod Mar 25 13:56:48.475: INFO: Waiting for pod pod-projected-secrets-7defa50a-9eef-4e18-a0bc-697ce792b968 to disappear Mar 25 13:56:48.499: INFO: Pod pod-projected-secrets-7defa50a-9eef-4e18-a0bc-697ce792b968 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:56:48.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1808" for this suite. 
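A projected volume mounts one or more sources (secrets, configMaps, downward API) under a single directory; the test above projects a single secret and reads it back as files. A sketch of that volume, with an illustrative secret name:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Restrict files to owner-read; the API default is 0644, which
	// appears as DefaultMode:*420 in the pod dumps earlier in this log.
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{
					{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "projected-secret-test", // hypothetical
							},
						},
					},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}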
Mar 25 13:56:54.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:56:54.594: INFO: namespace projected-1808 deletion completed in 6.092235511s • [SLOW TEST:10.244 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:56:54.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-bff72514-aa72-403d-8c85-0e6c194fa6f8 STEP: Creating a pod to test consume secrets Mar 25 13:56:54.667: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eb423f24-cdd9-4eec-b980-0277df914272" in namespace "projected-7650" to be "success or failure" Mar 25 13:56:54.691: INFO: Pod "pod-projected-secrets-eb423f24-cdd9-4eec-b980-0277df914272": Phase="Pending", Reason="", readiness=false. Elapsed: 23.952813ms Mar 25 13:56:56.707: INFO: Pod "pod-projected-secrets-eb423f24-cdd9-4eec-b980-0277df914272": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039440711s Mar 25 13:56:58.712: INFO: Pod "pod-projected-secrets-eb423f24-cdd9-4eec-b980-0277df914272": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044435076s STEP: Saw pod success Mar 25 13:56:58.712: INFO: Pod "pod-projected-secrets-eb423f24-cdd9-4eec-b980-0277df914272" satisfied condition "success or failure" Mar 25 13:56:58.715: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-eb423f24-cdd9-4eec-b980-0277df914272 container projected-secret-volume-test: STEP: delete the pod Mar 25 13:56:58.731: INFO: Waiting for pod pod-projected-secrets-eb423f24-cdd9-4eec-b980-0277df914272 to disappear Mar 25 13:56:58.737: INFO: Pod pod-projected-secrets-eb423f24-cdd9-4eec-b980-0277df914272 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:56:58.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7650" for this suite. 
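The "with mappings" variant above differs only in the projection itself: an items list maps a secret key to a custom path and per-file mode. Only that fragment changes relative to the sketch after the previous test (names again illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	fileMode := int32(0400)
	proj := corev1.SecretProjection{
		LocalObjectReference: corev1.LocalObjectReference{
			Name: "projected-secret-test-map", // hypothetical
		},
		// Project only key "data-1", under a new file name and mode.
		Items: []corev1.KeyToPath{
			{Key: "data-1", Path: "new-path-data-1", Mode: &fileMode},
		},
	}
	fmt.Printf("%+v\n", proj)
}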
Mar 25 13:57:04.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:57:04.845: INFO: namespace projected-7650 deletion completed in 6.10461026s • [SLOW TEST:10.250 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:57:04.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 13:57:04.953: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 25 13:57:09.957: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 25 13:57:09.957: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 25 13:57:11.985: INFO: Creating deployment "test-rollover-deployment" Mar 25 13:57:11.995: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 25 13:57:14.004: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 25 13:57:14.011: INFO: Ensure that both replica sets have 1 created replica Mar 25 13:57:14.018: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 25 13:57:14.025: INFO: Updating deployment test-rollover-deployment Mar 25 13:57:14.025: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 25 13:57:16.057: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 25 13:57:16.063: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 25 13:57:16.069: INFO: all replica sets need to contain the pod-template-hash label Mar 25 13:57:16.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741434, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 13:57:18.077: INFO: all replica sets need to contain the pod-template-hash label Mar 25 13:57:18.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741437, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 13:57:20.077: INFO: all replica sets need to contain the pod-template-hash label Mar 25 13:57:20.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741437, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 13:57:22.076: INFO: all replica sets need to contain the pod-template-hash label Mar 25 13:57:22.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741437, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 13:57:24.077: INFO: all replica sets need to contain the pod-template-hash label Mar 25 13:57:24.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741437, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 13:57:26.078: INFO: all replica sets need to contain the pod-template-hash label Mar 25 13:57:26.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741437, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720741432, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 13:57:28.079: INFO: Mar 25 13:57:28.079: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 25 13:57:28.086: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3091,SelfLink:/apis/apps/v1/namespaces/deployment-3091/deployments/test-rollover-deployment,UID:2ccd1253-0853-4668-ae41-dc29ab601f9c,ResourceVersion:1785292,Generation:2,CreationTimestamp:2020-03-25 13:57:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-25 13:57:12 +0000 UTC 2020-03-25 13:57:12 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-25 13:57:27 +0000 UTC 2020-03-25 13:57:12 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 25 13:57:28.090: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3091,SelfLink:/apis/apps/v1/namespaces/deployment-3091/replicasets/test-rollover-deployment-854595fc44,UID:2d27ae70-8c9d-4b90-8699-26282ffcf917,ResourceVersion:1785281,Generation:2,CreationTimestamp:2020-03-25 13:57:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2ccd1253-0853-4668-ae41-dc29ab601f9c 0xc002ccc597 0xc002ccc598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 25 13:57:28.090: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 25 13:57:28.090: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3091,SelfLink:/apis/apps/v1/namespaces/deployment-3091/replicasets/test-rollover-controller,UID:40a6e28e-f5de-4d55-8a30-589bb905d760,ResourceVersion:1785290,Generation:2,CreationTimestamp:2020-03-25 13:57:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2ccd1253-0853-4668-ae41-dc29ab601f9c 0xc002ccc4af 0xc002ccc4c0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 25 13:57:28.090: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3091,SelfLink:/apis/apps/v1/namespaces/deployment-3091/replicasets/test-rollover-deployment-9b8b997cf,UID:015ee400-4738-4332-b2e7-7abe1880d8fe,ResourceVersion:1785245,Generation:2,CreationTimestamp:2020-03-25 13:57:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2ccd1253-0853-4668-ae41-dc29ab601f9c 0xc002ccc660 0xc002ccc661}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 25 13:57:28.093: INFO: Pod "test-rollover-deployment-854595fc44-m4rzt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-m4rzt,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3091,SelfLink:/api/v1/namespaces/deployment-3091/pods/test-rollover-deployment-854595fc44-m4rzt,UID:ae94d75d-08cf-4f9a-bdd4-cebcde224048,ResourceVersion:1785258,Generation:0,CreationTimestamp:2020-03-25 13:57:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 2d27ae70-8c9d-4b90-8699-26282ffcf917 0xc002ccd257 0xc002ccd258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jmj8d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jmj8d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-jmj8d true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ccd2d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ccd2f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:57:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:57:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 2020-03-25 13:57:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:57:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.202,StartTime:2020-03-25 13:57:14 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-25 13:57:16 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://59bb2d3d1ea04e2d03ae8209fad4b4b228a9f921371a4503a895078d3e10eec9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:57:28.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3091" for this suite. Mar 25 13:57:36.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:57:36.239: INFO: namespace deployment-3091 deletion completed in 8.142030587s • [SLOW TEST:31.394 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:57:36.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 25 13:57:36.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5937' Mar 25 13:57:36.384: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 25 13:57:36.384: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Mar 25 13:57:36.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5937' Mar 25 13:57:36.527: INFO: stderr: "" Mar 25 13:57:36.527: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:57:36.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5937" for this suite. Mar 25 13:57:58.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:57:58.650: INFO: namespace kubectl-5937 deletion completed in 22.115700663s • [SLOW TEST:22.411 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:57:58.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3948.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3948.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 13:58:04.764: INFO: DNS probes using dns-3948/dns-test-c4d58e28-8dfe-421c-86b7-10d85a50bcdf succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:58:04.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3948" for this suite. Mar 25 13:58:10.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:58:10.957: INFO: namespace dns-3948 deletion completed in 6.120023742s • [SLOW TEST:12.306 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:58:10.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-87222b4f-86ee-401f-9e66-10681041d099 STEP: Creating a pod to test consume configMaps Mar 25 13:58:11.033: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-22fedff1-52d0-4fc3-8397-9e98b803c9f0" in namespace "projected-9999" to be "success or failure" Mar 25 13:58:11.050: INFO: Pod "pod-projected-configmaps-22fedff1-52d0-4fc3-8397-9e98b803c9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.675907ms Mar 25 13:58:13.055: INFO: Pod "pod-projected-configmaps-22fedff1-52d0-4fc3-8397-9e98b803c9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021186139s Mar 25 13:58:15.059: INFO: Pod "pod-projected-configmaps-22fedff1-52d0-4fc3-8397-9e98b803c9f0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025610091s STEP: Saw pod success Mar 25 13:58:15.059: INFO: Pod "pod-projected-configmaps-22fedff1-52d0-4fc3-8397-9e98b803c9f0" satisfied condition "success or failure" Mar 25 13:58:15.062: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-22fedff1-52d0-4fc3-8397-9e98b803c9f0 container projected-configmap-volume-test: STEP: delete the pod Mar 25 13:58:15.112: INFO: Waiting for pod pod-projected-configmaps-22fedff1-52d0-4fc3-8397-9e98b803c9f0 to disappear Mar 25 13:58:15.117: INFO: Pod pod-projected-configmaps-22fedff1-52d0-4fc3-8397-9e98b803c9f0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:58:15.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9999" for this suite. Mar 25 13:58:21.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:58:21.244: INFO: namespace projected-9999 deletion completed in 6.107665279s • [SLOW TEST:10.287 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:58:21.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Mar 25 13:58:21.315: INFO: Waiting up to 5m0s for pod "client-containers-7841593b-c760-4817-aed1-3e88adce4b48" in namespace "containers-8536" to be "success or failure" Mar 25 13:58:21.369: INFO: Pod "client-containers-7841593b-c760-4817-aed1-3e88adce4b48": Phase="Pending", Reason="", readiness=false. Elapsed: 54.34006ms Mar 25 13:58:23.381: INFO: Pod "client-containers-7841593b-c760-4817-aed1-3e88adce4b48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066391001s Mar 25 13:58:25.385: INFO: Pod "client-containers-7841593b-c760-4817-aed1-3e88adce4b48": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.070568681s STEP: Saw pod success Mar 25 13:58:25.385: INFO: Pod "client-containers-7841593b-c760-4817-aed1-3e88adce4b48" satisfied condition "success or failure" Mar 25 13:58:25.388: INFO: Trying to get logs from node iruya-worker pod client-containers-7841593b-c760-4817-aed1-3e88adce4b48 container test-container: STEP: delete the pod Mar 25 13:58:25.419: INFO: Waiting for pod client-containers-7841593b-c760-4817-aed1-3e88adce4b48 to disappear Mar 25 13:58:25.447: INFO: Pod client-containers-7841593b-c760-4817-aed1-3e88adce4b48 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:58:25.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8536" for this suite. Mar 25 13:58:31.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:58:31.544: INFO: namespace containers-8536 deletion completed in 6.093190482s • [SLOW TEST:10.300 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:58:31.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 25 13:58:31.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6217' Mar 25 13:58:31.686: INFO: stderr: "" Mar 25 13:58:31.686: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Mar 25 13:58:31.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6217' Mar 25 13:58:42.163: INFO: stderr: "" Mar 25 13:58:42.164: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:58:42.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "kubectl-6217" for this suite. Mar 25 13:58:48.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:58:48.289: INFO: namespace kubectl-6217 deletion completed in 6.122679494s • [SLOW TEST:16.744 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:58:48.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 25 13:58:48.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1305' Mar 25 13:58:48.478: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 25 13:58:48.478: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Mar 25 13:58:48.517: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 25 13:58:48.524: INFO: scanned /root for discovery docs: Mar 25 13:58:48.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1305' Mar 25 13:59:04.391: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 25 13:59:04.392: INFO: stdout: "Created e2e-test-nginx-rc-fb7ec2dd0b2fd3a1b044d44d7cac8d9f\nScaling up e2e-test-nginx-rc-fb7ec2dd0b2fd3a1b044d44d7cac8d9f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-fb7ec2dd0b2fd3a1b044d44d7cac8d9f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-fb7ec2dd0b2fd3a1b044d44d7cac8d9f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Mar 25 13:59:04.392: INFO: stdout: "Created e2e-test-nginx-rc-fb7ec2dd0b2fd3a1b044d44d7cac8d9f\nScaling up e2e-test-nginx-rc-fb7ec2dd0b2fd3a1b044d44d7cac8d9f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-fb7ec2dd0b2fd3a1b044d44d7cac8d9f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-fb7ec2dd0b2fd3a1b044d44d7cac8d9f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Mar 25 13:59:04.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1305' Mar 25 13:59:04.475: INFO: stderr: "" Mar 25 13:59:04.475: INFO: stdout: "e2e-test-nginx-rc-fb7ec2dd0b2fd3a1b044d44d7cac8d9f-mkbzh " Mar 25 13:59:04.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-fb7ec2dd0b2fd3a1b044d44d7cac8d9f-mkbzh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1305' Mar 25 13:59:04.572: INFO: stderr: "" Mar 25 13:59:04.573: INFO: stdout: "true" Mar 25 13:59:04.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-fb7ec2dd0b2fd3a1b044d44d7cac8d9f-mkbzh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1305' Mar 25 13:59:04.664: INFO: stderr: "" Mar 25 13:59:04.664: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Mar 25 13:59:04.664: INFO: e2e-test-nginx-rc-fb7ec2dd0b2fd3a1b044d44d7cac8d9f-mkbzh is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Mar 25 13:59:04.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1305' Mar 25 13:59:04.776: INFO: stderr: "" Mar 25 13:59:04.776: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:59:04.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1305" for this suite. 
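For reference, the flow this rolling-update test drives can be reproduced by hand. A minimal shell sketch, assuming kubectl v1.15 against a reachable cluster; the name "my-rc" is a placeholder, not a value from this run:

# Create a v1 ReplicationController the same way the test does.
kubectl run my-rc --generator=run/v1 --image=docker.io/library/nginx:1.14-alpine
# rolling-update clones the RC under a hashed name, scales the clone up and
# the original down one pod at a time, then deletes the original and renames
# the clone back, which is the sequence printed in the stdout above.
kubectl rolling-update my-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
# Clean up.
kubectl delete rc my-rc

As the stderr above notes, rolling-update was already deprecated at this point in favour of Deployments and "kubectl rollout".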
Mar 25 13:59:26.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 13:59:26.924: INFO: namespace kubectl-1305 deletion completed in 22.115889141s
• [SLOW TEST:38.634 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 13:59:26.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Mar 25 13:59:26.988: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix045740507/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 13:59:27.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9963" for this suite.
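The proxy check above can be repeated outside the suite. A minimal sketch; the socket path below is a placeholder rather than the temporary one the test generated:

# Serve the API over a unix socket instead of a TCP port.
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
# curl speaks HTTP over unix sockets; the hostname in the URL is ignored.
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1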
Mar 25 13:59:33.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 13:59:33.151: INFO: namespace kubectl-9963 deletion completed in 6.080953817s • [SLOW TEST:6.227 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 13:59:33.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 25 13:59:37.252: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-6cc7d9ff-970e-4a53-9fdd-cb76c92d8647,GenerateName:,Namespace:events-1564,SelfLink:/api/v1/namespaces/events-1564/pods/send-events-6cc7d9ff-970e-4a53-9fdd-cb76c92d8647,UID:7649520e-7977-4c55-b60e-a5b97f04d965,ResourceVersion:1785819,Generation:0,CreationTimestamp:2020-03-25 13:59:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 202183075,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mwk5m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mwk5m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-mwk5m true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001dd5500} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001dd5520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:59:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:59:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:59:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 13:59:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.206,StartTime:2020-03-25 13:59:33 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-25 13:59:35 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://f4ed53b3b89fab9c4ade061231a54e903ed3646128f9d153914528f90eebe25a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 25 13:59:39.257: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 25 13:59:41.261: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 13:59:41.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1564" for this suite. Mar 25 14:00:23.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:00:23.398: INFO: namespace events-1564 deletion completed in 42.106916838s • [SLOW TEST:50.247 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:00:23.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 14:00:23.465: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:00:27.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-646" for this suite. 
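The websocket log retrieval above goes through the pod "log" subresource, the same endpoint that plain kubectl logs uses. A minimal sketch, with pod and namespace names as placeholders:

# Stream logs the ordinary way (kubectl negotiates the transport itself).
kubectl logs mypod --namespace=mynamespace --follow
# The underlying REST path, reachable over plain HTTP via kubectl proxy:
kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/namespaces/mynamespace/pods/mypod/log?follow=true"
kill %1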
Mar 25 14:01:17.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:01:17.643: INFO: namespace pods-646 deletion completed in 50.121866414s • [SLOW TEST:54.245 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:01:17.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 25 14:01:17.730: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9539,SelfLink:/api/v1/namespaces/watch-9539/configmaps/e2e-watch-test-watch-closed,UID:9bc74c44-0bce-457a-9cf0-d0df7aab0a46,ResourceVersion:1786064,Generation:0,CreationTimestamp:2020-03-25 14:01:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 25 14:01:17.731: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9539,SelfLink:/api/v1/namespaces/watch-9539/configmaps/e2e-watch-test-watch-closed,UID:9bc74c44-0bce-457a-9cf0-d0df7aab0a46,ResourceVersion:1786065,Generation:0,CreationTimestamp:2020-03-25 14:01:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 25 14:01:17.774: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9539,SelfLink:/api/v1/namespaces/watch-9539/configmaps/e2e-watch-test-watch-closed,UID:9bc74c44-0bce-457a-9cf0-d0df7aab0a46,ResourceVersion:1786066,Generation:0,CreationTimestamp:2020-03-25 14:01:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 25 14:01:17.775: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9539,SelfLink:/api/v1/namespaces/watch-9539/configmaps/e2e-watch-test-watch-closed,UID:9bc74c44-0bce-457a-9cf0-d0df7aab0a46,ResourceVersion:1786067,Generation:0,CreationTimestamp:2020-03-25 14:01:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:01:17.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9539" for this suite. Mar 25 14:01:23.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:01:23.869: INFO: namespace watch-9539 deletion completed in 6.090497037s • [SLOW TEST:6.225 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:01:23.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 25 14:01:23.955: INFO: Waiting up to 5m0s for pod "downward-api-177182b9-a01d-4fe5-8ce9-4556a2418c3f" in namespace "downward-api-9639" to be "success or failure" Mar 25 14:01:23.964: INFO: Pod "downward-api-177182b9-a01d-4fe5-8ce9-4556a2418c3f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.566874ms Mar 25 14:01:25.968: INFO: Pod "downward-api-177182b9-a01d-4fe5-8ce9-4556a2418c3f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012778726s
Mar 25 14:01:27.972: INFO: Pod "downward-api-177182b9-a01d-4fe5-8ce9-4556a2418c3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016676934s
STEP: Saw pod success
Mar 25 14:01:27.972: INFO: Pod "downward-api-177182b9-a01d-4fe5-8ce9-4556a2418c3f" satisfied condition "success or failure"
Mar 25 14:01:27.975: INFO: Trying to get logs from node iruya-worker pod downward-api-177182b9-a01d-4fe5-8ce9-4556a2418c3f container dapi-container:
STEP: delete the pod
Mar 25 14:01:27.995: INFO: Waiting for pod downward-api-177182b9-a01d-4fe5-8ce9-4556a2418c3f to disappear
Mar 25 14:01:28.054: INFO: Pod downward-api-177182b9-a01d-4fe5-8ce9-4556a2418c3f no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 14:01:28.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9639" for this suite.
Mar 25 14:01:34.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 14:01:34.150: INFO: namespace downward-api-9639 deletion completed in 6.092161901s
• [SLOW TEST:10.280 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 14:01:34.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 14:01:38.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6793" for this suite.
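The hostAliases behaviour verified above can be reproduced with a hand-written pod. A minimal sketch; the pod name, image, and alias entries are placeholders, not the ones the test used:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:            # kubelet appends these entries to the pod's /etc/hosts
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
# Once the pod completes, the injected entries show up in its output:
kubectl logs hostaliases-demo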
Mar 25 14:02:24.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 14:02:24.405: INFO: namespace kubelet-test-6793 deletion completed in 46.112975233s
• [SLOW TEST:50.254 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 14:02:24.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 25 14:02:28.554: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 14:02:28.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2520" for this suite.
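The file-based termination message flow checked above, as a minimal sketch; the pod name is a placeholder and busybox stands in for the test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Write "OK" to the message file and exit 0; because the file is non-empty,
    # FallbackToLogsOnError reports it directly (logs are only the fallback).
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# The message surfaces in the terminated container status:
kubectl get pod termmsg-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'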
Mar 25 14:02:34.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 25 14:02:34.689: INFO: namespace container-runtime-2520 deletion completed in 6.098961911s
• [SLOW TEST:10.283 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 25 14:02:34.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0325 14:03:05.299409 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 25 14:03:05.299: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 25 14:03:05.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7780" for this suite.
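The orphaning semantics asserted above map onto a simple kubectl flow. A sketch with placeholder names; in kubectl v1.15, --cascade=false sends the delete with PropagationPolicy=Orphan:

# Create a deployment, which in turn creates a ReplicaSet.
kubectl create deployment demo --image=docker.io/library/nginx:1.14-alpine
# Delete only the deployment, leaving its dependents behind.
kubectl delete deployment demo --cascade=false
# The orphaned ReplicaSet (and its pods) should still exist:
kubectl get rs -l app=demo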
Mar 25 14:03:11.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:03:11.430: INFO: namespace gc-7780 deletion completed in 6.126337952s • [SLOW TEST:36.740 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:03:11.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 14:03:11.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 25 14:03:11.595: INFO: stderr: "" Mar 25 14:03:11.595: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.10\", GitCommit:\"1bea6c00a7055edef03f1d4bb58b773fa8917f11\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:12:55Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:03:11.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7427" for this suite. 
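The check above simply runs kubectl version and asserts that both the client and server halves are printed; the skew visible here (v1.15.10 client against a v1.15.7 API server) is within the supported window. For scripting, the same data is available in structured form (assuming a jq binary is on the PATH):

kubectl version -o json | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'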
Mar 25 14:03:17.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:03:17.728: INFO: namespace kubectl-7427 deletion completed in 6.127875063s • [SLOW TEST:6.297 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:03:17.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 25 14:03:17.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2108' Mar 25 14:03:20.455: INFO: stderr: "" Mar 25 14:03:20.455: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 25 14:03:20.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2108' Mar 25 14:03:20.564: INFO: stderr: "" Mar 25 14:03:20.564: INFO: stdout: "update-demo-nautilus-bw4nf update-demo-nautilus-dhd4z " Mar 25 14:03:20.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bw4nf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2108' Mar 25 14:03:20.655: INFO: stderr: "" Mar 25 14:03:20.655: INFO: stdout: "" Mar 25 14:03:20.655: INFO: update-demo-nautilus-bw4nf is created but not running Mar 25 14:03:25.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2108' Mar 25 14:03:25.754: INFO: stderr: "" Mar 25 14:03:25.754: INFO: stdout: "update-demo-nautilus-bw4nf update-demo-nautilus-dhd4z " Mar 25 14:03:25.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bw4nf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2108' Mar 25 14:03:25.847: INFO: stderr: "" Mar 25 14:03:25.847: INFO: stdout: "true" Mar 25 14:03:25.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bw4nf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2108' Mar 25 14:03:25.936: INFO: stderr: "" Mar 25 14:03:25.936: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 25 14:03:25.936: INFO: validating pod update-demo-nautilus-bw4nf Mar 25 14:03:25.940: INFO: got data: { "image": "nautilus.jpg" } Mar 25 14:03:25.940: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 14:03:25.940: INFO: update-demo-nautilus-bw4nf is verified up and running Mar 25 14:03:25.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dhd4z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2108' Mar 25 14:03:26.029: INFO: stderr: "" Mar 25 14:03:26.029: INFO: stdout: "true" Mar 25 14:03:26.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dhd4z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2108' Mar 25 14:03:26.115: INFO: stderr: "" Mar 25 14:03:26.115: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 25 14:03:26.115: INFO: validating pod update-demo-nautilus-dhd4z Mar 25 14:03:26.118: INFO: got data: { "image": "nautilus.jpg" } Mar 25 14:03:26.118: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 14:03:26.118: INFO: update-demo-nautilus-dhd4z is verified up and running STEP: using delete to clean up resources Mar 25 14:03:26.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2108' Mar 25 14:03:26.220: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 14:03:26.220: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 25 14:03:26.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2108' Mar 25 14:03:26.320: INFO: stderr: "No resources found.\n" Mar 25 14:03:26.320: INFO: stdout: "" Mar 25 14:03:26.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2108 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 25 14:03:26.626: INFO: stderr: "" Mar 25 14:03:26.626: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:03:26.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2108" for this suite. 
Mar 25 14:03:48.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:03:48.759: INFO: namespace kubectl-2108 deletion completed in 22.129219151s • [SLOW TEST:31.031 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:03:48.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 25 14:03:52.879: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:03:52.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1519" for this suite. 
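This variant is the actual "fallback" case: the container fails and writes nothing to the termination message file, so the kubelet copies the tail of the container log ("DONE") into the termination message instead. A sketch under the same assumptions as before (hypothetical pod name, busybox image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-from-logs    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # emit to stdout and fail without touching /dev/termination-log
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF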
Mar 25 14:03:58.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:03:58.998: INFO: namespace container-runtime-1519 deletion completed in 6.082676311s • [SLOW TEST:10.239 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:03:58.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 14:04:21.112: INFO: Container started at 2020-03-25 14:04:01 +0000 UTC, pod became ready at 2020-03-25 14:04:20 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:04:21.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2160" for this suite. 
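The timestamps above carry the whole assertion: the container started at 14:04:01 but the pod only became ready about 19 seconds later, and the test also requires that the restart count never moves. That is the expected shape for a readiness probe with an initial delay, roughly like this (all names and values are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay    # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/true"]   # always passes once probed
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
# watch READY flip from 0/1 to 1/1 only after the initial delay
kubectl get pod readiness-delay -w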
Mar 25 14:04:43.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:04:43.236: INFO: namespace container-probe-2160 deletion completed in 22.120565149s • [SLOW TEST:44.238 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:04:43.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 25 14:04:47.363: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:04:47.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9323" for this suite. 
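The interesting part here is the combination: the message file lives at a non-default path and the writer is a non-root user, so the kubelet must have created the file with permissions that UID can write. A hedged equivalent (name, UID, path and image are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-nonroot    # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # any non-root UID
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom"]
    terminationMessagePath: /dev/termination-custom   # non-default path
EOF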
Mar 25 14:04:53.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:04:53.469: INFO: namespace container-runtime-9323 deletion completed in 6.085508468s • [SLOW TEST:10.232 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:04:53.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 25 14:04:58.114: INFO: Successfully updated pod "labelsupdate1dc61593-eb02-4555-83f1-b9d763b617da" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:05:00.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8535" for this suite. 
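"Successfully updated pod" above refers to a live relabel: the downward API volume projects metadata.labels into a file, and the kubelet rewrites that file when the labels change, without restarting the container. A minimal sketch (the names and the polling command are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo    # hypothetical name
  labels:
    stage: initial
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
# mutate the label on the running pod; the projected file follows shortly
kubectl label pod labels-demo stage=updated --overwrite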
Mar 25 14:05:22.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:05:22.291: INFO: namespace downward-api-8535 deletion completed in 22.125244037s • [SLOW TEST:28.822 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:05:22.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 25 14:05:30.385: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 14:05:30.388: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 14:05:32.388: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 14:05:32.393: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 14:05:34.388: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 14:05:34.392: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 14:05:36.388: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 14:05:36.392: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 14:05:38.388: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 14:05:38.392: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 14:05:40.388: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 14:05:40.393: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 14:05:42.388: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 14:05:42.392: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 14:05:44.388: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 14:05:44.400: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 14:05:46.388: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 14:05:46.392: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 14:05:48.388: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 14:05:48.406: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 14:05:50.388: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 14:05:50.392: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 
14:05:50.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-188" for this suite. Mar 25 14:06:12.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:06:12.523: INFO: namespace container-lifecycle-hook-188 deletion completed in 22.117939718s • [SLOW TEST:50.232 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:06:12.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-a7d2d5a1-5e86-4b41-bc0d-a28fa75cddee STEP: Creating a pod to test consume configMaps Mar 25 14:06:12.585: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a0dab88-36f4-4495-8434-663c02d6565b" in namespace "configmap-2220" to be "success or failure" Mar 25 14:06:12.616: INFO: Pod "pod-configmaps-5a0dab88-36f4-4495-8434-663c02d6565b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.283226ms Mar 25 14:06:14.619: INFO: Pod "pod-configmaps-5a0dab88-36f4-4495-8434-663c02d6565b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033965229s Mar 25 14:06:16.623: INFO: Pod "pod-configmaps-5a0dab88-36f4-4495-8434-663c02d6565b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03818915s STEP: Saw pod success Mar 25 14:06:16.624: INFO: Pod "pod-configmaps-5a0dab88-36f4-4495-8434-663c02d6565b" satisfied condition "success or failure" Mar 25 14:06:16.626: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-5a0dab88-36f4-4495-8434-663c02d6565b container configmap-volume-test: STEP: delete the pod Mar 25 14:06:16.645: INFO: Waiting for pod pod-configmaps-5a0dab88-36f4-4495-8434-663c02d6565b to disappear Mar 25 14:06:16.650: INFO: Pod pod-configmaps-5a0dab88-36f4-4495-8434-663c02d6565b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:06:16.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2220" for this suite. 
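The "mappings and Item mode" wording in the ConfigMap test above means the ConfigMap key is remapped to a different file name and given an explicit per-item file mode, which the test's pod then reads back. A sketch (the ConfigMap name, key, path and mode are illustrative):

kubectl create configmap demo-config --from-literal=data-1=value-1    # hypothetical ConfigMap
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapped    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/config && cat /etc/config/renamed"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
      items:
      - key: data-1
        path: renamed    # the mapping: key data-1 shows up as file "renamed"
        mode: 0400       # the per-item mode under test
EOF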
Mar 25 14:06:22.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:06:22.736: INFO: namespace configmap-2220 deletion completed in 6.082920434s • [SLOW TEST:10.211 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:06:22.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-c90fe18e-d5ca-4bea-a398-8db40c058b3b STEP: Creating a pod to test consume secrets Mar 25 14:06:22.827: INFO: Waiting up to 5m0s for pod "pod-secrets-0361cc08-5a79-4f1c-b734-8925478c90d1" in namespace "secrets-4348" to be "success or failure" Mar 25 14:06:22.849: INFO: Pod "pod-secrets-0361cc08-5a79-4f1c-b734-8925478c90d1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.140607ms Mar 25 14:06:24.853: INFO: Pod "pod-secrets-0361cc08-5a79-4f1c-b734-8925478c90d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026602084s Mar 25 14:06:26.858: INFO: Pod "pod-secrets-0361cc08-5a79-4f1c-b734-8925478c90d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030967882s STEP: Saw pod success Mar 25 14:06:26.858: INFO: Pod "pod-secrets-0361cc08-5a79-4f1c-b734-8925478c90d1" satisfied condition "success or failure" Mar 25 14:06:26.861: INFO: Trying to get logs from node iruya-worker pod pod-secrets-0361cc08-5a79-4f1c-b734-8925478c90d1 container secret-volume-test: STEP: delete the pod Mar 25 14:06:26.895: INFO: Waiting for pod pod-secrets-0361cc08-5a79-4f1c-b734-8925478c90d1 to disappear Mar 25 14:06:26.904: INFO: Pod pod-secrets-0361cc08-5a79-4f1c-b734-8925478c90d1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:06:26.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4348" for this suite. 
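Same shape as the ConfigMap case, but backed by a Secret: the item mapping renames the key inside the mounted volume. Sketch (the secret name, key and paths are assumptions):

kubectl create secret generic demo-secret --from-literal=data-1=value-1    # hypothetical Secret
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapped    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1    # mapped file name inside the volume
EOF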
Mar 25 14:06:32.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:06:33.077: INFO: namespace secrets-4348 deletion completed in 6.169676035s • [SLOW TEST:10.340 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:06:33.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:06:33.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3990" for this suite. Mar 25 14:06:39.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:06:39.222: INFO: namespace services-3990 deletion completed in 6.080417722s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.144 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:06:39.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 25 14:06:39.296: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:06:46.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "init-container-2676" for this suite. Mar 25 14:06:52.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:06:52.733: INFO: namespace init-container-2676 deletion completed in 6.089355625s • [SLOW TEST:13.512 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:06:52.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 14:06:52.803: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a2ea9bd-c5ab-4a87-b3bc-9820bd0289a5" in namespace "downward-api-7020" to be "success or failure" Mar 25 14:06:52.821: INFO: Pod "downwardapi-volume-7a2ea9bd-c5ab-4a87-b3bc-9820bd0289a5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.605569ms Mar 25 14:06:54.825: INFO: Pod "downwardapi-volume-7a2ea9bd-c5ab-4a87-b3bc-9820bd0289a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021592644s Mar 25 14:06:56.830: INFO: Pod "downwardapi-volume-7a2ea9bd-c5ab-4a87-b3bc-9820bd0289a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026273557s STEP: Saw pod success Mar 25 14:06:56.830: INFO: Pod "downwardapi-volume-7a2ea9bd-c5ab-4a87-b3bc-9820bd0289a5" satisfied condition "success or failure" Mar 25 14:06:56.833: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7a2ea9bd-c5ab-4a87-b3bc-9820bd0289a5 container client-container: STEP: delete the pod Mar 25 14:06:56.871: INFO: Waiting for pod downwardapi-volume-7a2ea9bd-c5ab-4a87-b3bc-9820bd0289a5 to disappear Mar 25 14:06:56.878: INFO: Pod downwardapi-volume-7a2ea9bd-c5ab-4a87-b3bc-9820bd0289a5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:06:56.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7020" for this suite. 
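The downward API test above relies on a defaulting rule: when the container declares no CPU limit, a resourceFieldRef for limits.cpu resolves to the node's allocatable CPU, so the projected file is non-empty either way. Sketch (pod and file names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    # deliberately no resources.limits.cpu: the value falls back to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF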
Mar 25 14:07:02.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:07:02.978: INFO: namespace downward-api-7020 deletion completed in 6.0977075s • [SLOW TEST:10.244 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:07:02.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 14:07:03.055: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 25 14:07:08.059: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 25 14:07:08.059: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 25 14:07:08.080: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4156,SelfLink:/apis/apps/v1/namespaces/deployment-4156/deployments/test-cleanup-deployment,UID:6818c407-f403-49c8-96cd-4c43e24a1b18,ResourceVersion:1787219,Generation:1,CreationTimestamp:2020-03-25 14:07:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} 
false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Mar 25 14:07:08.086: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4156,SelfLink:/apis/apps/v1/namespaces/deployment-4156/replicasets/test-cleanup-deployment-55bbcbc84c,UID:4802e216-1e10-42dc-ba64-fdb99eb68e1c,ResourceVersion:1787221,Generation:1,CreationTimestamp:2020-03-25 14:07:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 6818c407-f403-49c8-96cd-4c43e24a1b18 0xc001f78587 0xc001f78588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 25 14:07:08.086: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 25 14:07:08.086: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-4156,SelfLink:/apis/apps/v1/namespaces/deployment-4156/replicasets/test-cleanup-controller,UID:3f628102-44c3-4c1d-9b91-433770d821ce,ResourceVersion:1787220,Generation:1,CreationTimestamp:2020-03-25 14:07:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 6818c407-f403-49c8-96cd-4c43e24a1b18 0xc001f784b7 0xc001f784b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 25 14:07:08.144: INFO: Pod "test-cleanup-controller-p2nq2" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-p2nq2,GenerateName:test-cleanup-controller-,Namespace:deployment-4156,SelfLink:/api/v1/namespaces/deployment-4156/pods/test-cleanup-controller-p2nq2,UID:3cf1ffeb-9ddb-4b46-a37e-bb43000bfbc4,ResourceVersion:1787212,Generation:0,CreationTimestamp:2020-03-25 14:07:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 3f628102-44c3-4c1d-9b91-433770d821ce 0xc002fe7d1f 0xc002fe7d30}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqlfc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqlfc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hqlfc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fe7da0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fe7dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 14:07:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 14:07:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 14:07:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 14:07:03 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.45,StartTime:2020-03-25 14:07:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-25 14:07:05 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a5fc66f3533576a626daf852519fc43b3b6b97aac78eeca70f171d906dcafa14}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 25 14:07:08.144: INFO: Pod "test-cleanup-deployment-55bbcbc84c-vsvqh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-vsvqh,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4156,SelfLink:/api/v1/namespaces/deployment-4156/pods/test-cleanup-deployment-55bbcbc84c-vsvqh,UID:81d27ee1-32e6-43e7-b609-f913fc65f246,ResourceVersion:1787225,Generation:0,CreationTimestamp:2020-03-25 14:07:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 4802e216-1e10-42dc-ba64-fdb99eb68e1c 0xc002fe7ec7 0xc002fe7ec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqlfc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqlfc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-hqlfc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fe7f40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fe7f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-25 14:07:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:07:08.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4156" for this suite. 
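Buried in the dump above is RevisionHistoryLimit:*0 on test-cleanup-deployment: with a history limit of zero, the old ReplicaSet (test-cleanup-controller) is garbage-collected as soon as the new one takes over, which is exactly what "should delete old replica sets" waits for. The knob in isolation (deployment name and labels are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo    # hypothetical name
spec:
  replicas: 1
  revisionHistoryLimit: 0    # keep no superseded ReplicaSets around
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# after a rollout, only the current ReplicaSet should be listed
kubectl get rs -l app=cleanup-demo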
Mar 25 14:07:14.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:07:14.323: INFO: namespace deployment-4156 deletion completed in 6.175055676s • [SLOW TEST:11.345 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:07:14.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:07:18.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6517" for this suite. Mar 25 14:08:04.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:08:04.516: INFO: namespace kubelet-test-6517 deletion completed in 46.085738104s • [SLOW TEST:50.193 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:08:04.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-62f35e2f-c526-4a47-aaa3-aecefddbc31c in namespace container-probe-6426 Mar 25 14:08:08.624: INFO: Started pod 
liveness-62f35e2f-c526-4a47-aaa3-aecefddbc31c in namespace container-probe-6426 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 14:08:08.627: INFO: Initial restart count of pod liveness-62f35e2f-c526-4a47-aaa3-aecefddbc31c is 0 Mar 25 14:08:24.666: INFO: Restart count of pod container-probe-6426/liveness-62f35e2f-c526-4a47-aaa3-aecefddbc31c is now 1 (16.038734174s elapsed) Mar 25 14:08:44.704: INFO: Restart count of pod container-probe-6426/liveness-62f35e2f-c526-4a47-aaa3-aecefddbc31c is now 2 (36.077024587s elapsed) Mar 25 14:09:04.874: INFO: Restart count of pod container-probe-6426/liveness-62f35e2f-c526-4a47-aaa3-aecefddbc31c is now 3 (56.246765488s elapsed) Mar 25 14:09:24.916: INFO: Restart count of pod container-probe-6426/liveness-62f35e2f-c526-4a47-aaa3-aecefddbc31c is now 4 (1m16.28905955s elapsed) Mar 25 14:10:37.108: INFO: Restart count of pod container-probe-6426/liveness-62f35e2f-c526-4a47-aaa3-aecefddbc31c is now 5 (2m28.481352549s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:10:37.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6426" for this suite. Mar 25 14:10:43.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:10:43.224: INFO: namespace container-probe-6426 deletion completed in 6.097896057s • [SLOW TEST:158.707 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:10:43.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1155.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-1155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1155.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1155.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.97.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.97.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.97.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.97.195_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1155.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1155.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1155.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.97.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.97.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.97.100.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.100.97.195_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 14:10:49.377: INFO: Unable to read wheezy_udp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:49.380: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:49.384: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:49.387: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:49.411: INFO: Unable to read jessie_udp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:49.415: INFO: Unable to read jessie_tcp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:49.420: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:49.423: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:49.438: INFO: Lookups using dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45 failed for: [wheezy_udp@dns-test-service.dns-1155.svc.cluster.local wheezy_tcp@dns-test-service.dns-1155.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local jessie_udp@dns-test-service.dns-1155.svc.cluster.local jessie_tcp@dns-test-service.dns-1155.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local] Mar 25 14:10:54.443: INFO: Unable to read wheezy_udp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:54.446: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods 
dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:54.450: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:54.453: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:54.476: INFO: Unable to read jessie_udp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:54.480: INFO: Unable to read jessie_tcp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:54.483: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:54.487: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:54.504: INFO: Lookups using dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45 failed for: [wheezy_udp@dns-test-service.dns-1155.svc.cluster.local wheezy_tcp@dns-test-service.dns-1155.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local jessie_udp@dns-test-service.dns-1155.svc.cluster.local jessie_tcp@dns-test-service.dns-1155.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local] Mar 25 14:10:59.443: INFO: Unable to read wheezy_udp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:59.447: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:59.451: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:59.454: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:59.477: INFO: Unable to read jessie_udp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the 
server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:59.480: INFO: Unable to read jessie_tcp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:59.483: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:59.486: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:10:59.505: INFO: Lookups using dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45 failed for: [wheezy_udp@dns-test-service.dns-1155.svc.cluster.local wheezy_tcp@dns-test-service.dns-1155.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local jessie_udp@dns-test-service.dns-1155.svc.cluster.local jessie_tcp@dns-test-service.dns-1155.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local] Mar 25 14:11:04.443: INFO: Unable to read wheezy_udp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:04.447: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:04.451: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:04.455: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:04.477: INFO: Unable to read jessie_udp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:04.480: INFO: Unable to read jessie_tcp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:04.484: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:04.487: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod 
dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:04.507: INFO: Lookups using dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45 failed for: [wheezy_udp@dns-test-service.dns-1155.svc.cluster.local wheezy_tcp@dns-test-service.dns-1155.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local jessie_udp@dns-test-service.dns-1155.svc.cluster.local jessie_tcp@dns-test-service.dns-1155.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local] Mar 25 14:11:09.443: INFO: Unable to read wheezy_udp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:09.446: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:09.449: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:09.453: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:09.473: INFO: Unable to read jessie_udp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:09.476: INFO: Unable to read jessie_tcp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:09.479: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:09.482: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:09.503: INFO: Lookups using dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45 failed for: [wheezy_udp@dns-test-service.dns-1155.svc.cluster.local wheezy_tcp@dns-test-service.dns-1155.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local jessie_udp@dns-test-service.dns-1155.svc.cluster.local jessie_tcp@dns-test-service.dns-1155.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local] Mar 25 
14:11:14.443: INFO: Unable to read wheezy_udp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:14.447: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:14.450: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:14.453: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:14.474: INFO: Unable to read jessie_udp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:14.477: INFO: Unable to read jessie_tcp@dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:14.480: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:14.483: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local from pod dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45: the server could not find the requested resource (get pods dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45) Mar 25 14:11:14.501: INFO: Lookups using dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45 failed for: [wheezy_udp@dns-test-service.dns-1155.svc.cluster.local wheezy_tcp@dns-test-service.dns-1155.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local jessie_udp@dns-test-service.dns-1155.svc.cluster.local jessie_tcp@dns-test-service.dns-1155.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1155.svc.cluster.local] Mar 25 14:11:19.500: INFO: DNS probes using dns-1155/dns-test-e09a08e2-50f4-4d2b-be2d-a89eaa078f45 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:11:19.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1155" for this suite. 
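What this DNS test exercises: for every Service, cluster DNS should publish an A record at <service>.<namespace>.svc.cluster.local and, for each named port, an SRV record at _<port>._<protocol>.<service>.<namespace>.svc.cluster.local; the prober pod runs the dig loops shown above and writes an OK marker file per successful lookup, so the repeated "Unable to read ..." entries are just the test re-polling until the records resolve and the marker files appear. The PTR probes reverse the Service's cluster IP (10.100.97.195 becomes 195.97.100.10.in-addr.arpa.). A minimal sketch of the two Services being probed follows; the names come from the log, but which of the two is the headless one, and the selector and port values, are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service            # A record: dns-test-service.dns-1155.svc.cluster.local
spec:
  clusterIP: None                   # assumed headless, per the "Creating a test headless service" step
  selector:
    dns-test: "true"                # assumed selector
  ports:
  - name: http                      # SRV record: _http._tcp.dns-test-service.dns-1155.svc.cluster.local
    protocol: TCP
    port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-service-2              # the second service probed above
spec:
  selector:
    dns-test: "true"
  ports:
  - name: http
    protocol: TCP
    port: 80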
Mar 25 14:11:26.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:11:26.160: INFO: namespace dns-1155 deletion completed in 6.152642904s • [SLOW TEST:42.936 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:11:26.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 25 14:11:26.223: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 25 14:11:31.227: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:11:32.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2847" for this suite. 
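The ReplicationController test above hinges on selector matching: a pod whose labels stop matching the RC's .spec.selector is "released" (its controller ownerReference is dropped) and the RC creates a replacement to restore the desired replica count. A minimal sketch of such an RC, assuming an illustrative image and label key (only the name pod-release comes from the log):

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release               # pods matching this selector are owned by the RC
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: nginx                # illustrative image, not necessarily the one this run used
# Overwriting the label on a managed pod, e.g.
#   kubectl label pod <pod-name> name=released --overwrite
# takes it out of the selector: the RC releases that pod and starts a new one.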
Mar 25 14:11:38.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:11:38.451: INFO: namespace replication-controller-2847 deletion completed in 6.200865816s • [SLOW TEST:12.289 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:11:38.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Mar 25 14:11:38.518: INFO: Waiting up to 5m0s for pod "client-containers-a5d31d81-5d8c-48dd-a438-5736c84f1081" in namespace "containers-548" to be "success or failure" Mar 25 14:11:38.522: INFO: Pod "client-containers-a5d31d81-5d8c-48dd-a438-5736c84f1081": Phase="Pending", Reason="", readiness=false. Elapsed: 3.187484ms Mar 25 14:11:40.525: INFO: Pod "client-containers-a5d31d81-5d8c-48dd-a438-5736c84f1081": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006795818s Mar 25 14:11:42.529: INFO: Pod "client-containers-a5d31d81-5d8c-48dd-a438-5736c84f1081": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010756349s STEP: Saw pod success Mar 25 14:11:42.529: INFO: Pod "client-containers-a5d31d81-5d8c-48dd-a438-5736c84f1081" satisfied condition "success or failure" Mar 25 14:11:42.532: INFO: Trying to get logs from node iruya-worker2 pod client-containers-a5d31d81-5d8c-48dd-a438-5736c84f1081 container test-container: STEP: delete the pod Mar 25 14:11:42.553: INFO: Waiting for pod client-containers-a5d31d81-5d8c-48dd-a438-5736c84f1081 to disappear Mar 25 14:11:42.557: INFO: Pod client-containers-a5d31d81-5d8c-48dd-a438-5736c84f1081 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:11:42.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-548" for this suite. 
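The "override all" pod in the Docker Containers test sets both command and args, which replace the image's ENTRYPOINT and CMD respectively; the pod then runs to completion and the test verifies its log output, which is why it waits for the "success or failure" condition. A minimal sketch of a pod overriding both (the pod name, image, and echoed strings here are illustrative, not the run's actual values):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never              # lets the pod reach a terminal phase the test can check
  containers:
  - name: test-container
    image: busybox:1.29             # illustrative image
    command: ["/bin/echo"]          # replaces the image's ENTRYPOINT
    args: ["override", "arguments"] # replaces the image's CMD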
Mar 25 14:11:48.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:11:48.654: INFO: namespace containers-548 deletion completed in 6.09284282s • [SLOW TEST:10.202 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:11:48.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Mar 25 14:11:48.707: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Mar 25 14:11:48.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4359' Mar 25 14:11:49.009: INFO: stderr: "" Mar 25 14:11:49.009: INFO: stdout: "service/redis-slave created\n" Mar 25 14:11:49.009: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Mar 25 14:11:49.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4359' Mar 25 14:11:49.303: INFO: stderr: "" Mar 25 14:11:49.303: INFO: stdout: "service/redis-master created\n" Mar 25 14:11:49.303: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 25 14:11:49.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4359' Mar 25 14:11:49.586: INFO: stderr: "" Mar 25 14:11:49.586: INFO: stdout: "service/frontend created\n" Mar 25 14:11:49.586: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Mar 25 14:11:49.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4359' Mar 25 14:11:49.826: INFO: stderr: "" Mar 25 14:11:49.826: INFO: stdout: "deployment.apps/frontend created\n" Mar 25 14:11:49.826: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 25 14:11:49.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4359' Mar 25 14:11:50.204: INFO: stderr: "" Mar 25 14:11:50.204: INFO: stdout: "deployment.apps/redis-master created\n" Mar 25 14:11:50.205: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Mar 25 14:11:50.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4359' Mar 25 14:11:50.534: INFO: stderr: "" Mar 25 14:11:50.534: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Mar 25 14:11:50.534: INFO: Waiting for all frontend pods to be Running. Mar 25 14:12:00.585: INFO: Waiting for frontend to serve content. Mar 25 14:12:00.610: INFO: Trying to add a new entry to the guestbook. Mar 25 14:12:00.627: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 25 14:12:00.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4359' Mar 25 14:12:00.768: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 25 14:12:00.768: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Mar 25 14:12:00.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4359' Mar 25 14:12:00.893: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 14:12:00.893: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 25 14:12:00.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4359' Mar 25 14:12:01.014: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 14:12:01.014: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 25 14:12:01.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4359' Mar 25 14:12:01.102: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 14:12:01.102: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 25 14:12:01.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4359' Mar 25 14:12:01.210: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 14:12:01.210: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 25 14:12:01.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4359' Mar 25 14:12:01.332: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 14:12:01.332: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:12:01.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4359" for this suite. 
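The guestbook manifests above are captured on single lines by the logger, and each resource is removed with kubectl delete --grace-period=0 --force, which is why every deletion prints the "Immediate deletion does not wait ..." warning. For readability, the frontend Service from this run, re-indented (content verbatim from the log):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend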
Mar 25 14:12:43.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:12:43.448: INFO: namespace kubectl-4359 deletion completed in 42.108770463s • [SLOW TEST:54.794 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:12:43.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 25 14:12:50.697: INFO: 2 pods remaining Mar 25 14:12:50.697: INFO: 0 pods has nil DeletionTimestamp Mar 25 14:12:50.697: INFO: Mar 25 14:12:51.155: INFO: 0 pods remaining Mar 25 14:12:51.155: INFO: 0 pods has nil DeletionTimestamp Mar 25 14:12:51.155: INFO: STEP: Gathering metrics W0325 14:12:51.828305 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 25 14:12:51.828: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:12:51.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9642" for this suite. 
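"Keep the rc around until all its pods are deleted if the deleteOptions says so" is foreground cascading deletion: the delete request carries propagationPolicy: Foreground, the garbage collector places a foregroundDeletion finalizer on the RC, and the RC object only disappears once its dependents are gone, which is the countdown the "2 pods remaining / 0 pods has nil DeletionTimestamp" entries show. A sketch of the options body such a delete sends (the target RC's name is not shown in the log):

# DeleteOptions body attached to the DELETE request:
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground       # alternatives: Background or Orphan
# Newer kubectl releases expose the same choice as:
#   kubectl delete rc <name> --cascade=foreground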
Mar 25 14:12:57.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:12:57.951: INFO: namespace gc-9642 deletion completed in 6.119523543s • [SLOW TEST:14.503 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:12:57.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 25 14:12:58.050: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-734,SelfLink:/api/v1/namespaces/watch-734/configmaps/e2e-watch-test-label-changed,UID:14a95417-15e6-4913-af06-81eadac17643,ResourceVersion:1788500,Generation:0,CreationTimestamp:2020-03-25 14:12:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 25 14:12:58.050: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-734,SelfLink:/api/v1/namespaces/watch-734/configmaps/e2e-watch-test-label-changed,UID:14a95417-15e6-4913-af06-81eadac17643,ResourceVersion:1788501,Generation:0,CreationTimestamp:2020-03-25 14:12:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 25 14:12:58.050: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-734,SelfLink:/api/v1/namespaces/watch-734/configmaps/e2e-watch-test-label-changed,UID:14a95417-15e6-4913-af06-81eadac17643,ResourceVersion:1788502,Generation:0,CreationTimestamp:2020-03-25 14:12:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 25 14:13:08.078: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-734,SelfLink:/api/v1/namespaces/watch-734/configmaps/e2e-watch-test-label-changed,UID:14a95417-15e6-4913-af06-81eadac17643,ResourceVersion:1788523,Generation:0,CreationTimestamp:2020-03-25 14:12:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 25 14:13:08.078: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-734,SelfLink:/api/v1/namespaces/watch-734/configmaps/e2e-watch-test-label-changed,UID:14a95417-15e6-4913-af06-81eadac17643,ResourceVersion:1788524,Generation:0,CreationTimestamp:2020-03-25 14:12:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 25 14:13:08.078: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-734,SelfLink:/api/v1/namespaces/watch-734/configmaps/e2e-watch-test-label-changed,UID:14a95417-15e6-4913-af06-81eadac17643,ResourceVersion:1788525,Generation:0,CreationTimestamp:2020-03-25 14:12:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:13:08.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-734" for this suite. 
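The watch test relies on selector-scoped watches behaving like membership changes: when the configmap's watch-this-configmap label is changed away from the selected value, the watcher receives a DELETED event; when the label is restored, it receives ADDED carrying whatever mutations happened in between (note Data jumping from mutation: 1 to mutation: 2 across the gap above). A sketch of the watched object, with field values taken from the log; the kubectl invocation is an assumption about an equivalent client-side watch:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: watch-734
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"
# A watch restricted to the label, e.g.
#   kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch --namespace=watch-734
# only sees ADDED/MODIFIED/DELETED events while the object matches the selector.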
Mar 25 14:13:14.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:13:14.180: INFO: namespace watch-734 deletion completed in 6.096829034s • [SLOW TEST:16.229 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:13:14.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 25 14:13:14.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1377' Mar 25 14:13:14.530: INFO: stderr: "" Mar 25 14:13:14.530: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 25 14:13:14.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1377' Mar 25 14:13:14.620: INFO: stderr: "" Mar 25 14:13:14.620: INFO: stdout: "update-demo-nautilus-hp98w update-demo-nautilus-xqt2f " Mar 25 14:13:14.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hp98w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1377' Mar 25 14:13:14.715: INFO: stderr: "" Mar 25 14:13:14.715: INFO: stdout: "" Mar 25 14:13:14.715: INFO: update-demo-nautilus-hp98w is created but not running Mar 25 14:13:19.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1377' Mar 25 14:13:22.230: INFO: stderr: "" Mar 25 14:13:22.230: INFO: stdout: "update-demo-nautilus-hp98w update-demo-nautilus-xqt2f " Mar 25 14:13:22.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hp98w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1377' Mar 25 14:13:22.328: INFO: stderr: "" Mar 25 14:13:22.328: INFO: stdout: "true" Mar 25 14:13:22.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hp98w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1377' Mar 25 14:13:22.424: INFO: stderr: "" Mar 25 14:13:22.424: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 25 14:13:22.424: INFO: validating pod update-demo-nautilus-hp98w Mar 25 14:13:22.428: INFO: got data: { "image": "nautilus.jpg" } Mar 25 14:13:22.428: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 14:13:22.428: INFO: update-demo-nautilus-hp98w is verified up and running Mar 25 14:13:22.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xqt2f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1377' Mar 25 14:13:22.532: INFO: stderr: "" Mar 25 14:13:22.532: INFO: stdout: "true" Mar 25 14:13:22.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xqt2f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1377' Mar 25 14:13:22.629: INFO: stderr: "" Mar 25 14:13:22.630: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 25 14:13:22.630: INFO: validating pod update-demo-nautilus-xqt2f Mar 25 14:13:22.634: INFO: got data: { "image": "nautilus.jpg" } Mar 25 14:13:22.634: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 14:13:22.634: INFO: update-demo-nautilus-xqt2f is verified up and running STEP: scaling down the replication controller Mar 25 14:13:22.636: INFO: scanned /root for discovery docs: Mar 25 14:13:22.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1377' Mar 25 14:13:23.775: INFO: stderr: "" Mar 25 14:13:23.775: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 25 14:13:23.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1377' Mar 25 14:13:23.896: INFO: stderr: "" Mar 25 14:13:23.896: INFO: stdout: "update-demo-nautilus-hp98w update-demo-nautilus-xqt2f " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 25 14:13:28.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1377' Mar 25 14:13:29.001: INFO: stderr: "" Mar 25 14:13:29.001: INFO: stdout: "update-demo-nautilus-xqt2f " Mar 25 14:13:29.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xqt2f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1377' Mar 25 14:13:29.105: INFO: stderr: "" Mar 25 14:13:29.105: INFO: stdout: "true" Mar 25 14:13:29.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xqt2f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1377' Mar 25 14:13:29.196: INFO: stderr: "" Mar 25 14:13:29.196: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 25 14:13:29.196: INFO: validating pod update-demo-nautilus-xqt2f Mar 25 14:13:29.199: INFO: got data: { "image": "nautilus.jpg" } Mar 25 14:13:29.199: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 14:13:29.199: INFO: update-demo-nautilus-xqt2f is verified up and running STEP: scaling up the replication controller Mar 25 14:13:29.201: INFO: scanned /root for discovery docs: Mar 25 14:13:29.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1377' Mar 25 14:13:30.321: INFO: stderr: "" Mar 25 14:13:30.321: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 25 14:13:30.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1377' Mar 25 14:13:30.417: INFO: stderr: "" Mar 25 14:13:30.418: INFO: stdout: "update-demo-nautilus-nf4qh update-demo-nautilus-xqt2f " Mar 25 14:13:30.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf4qh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1377' Mar 25 14:13:30.505: INFO: stderr: "" Mar 25 14:13:30.505: INFO: stdout: "" Mar 25 14:13:30.505: INFO: update-demo-nautilus-nf4qh is created but not running Mar 25 14:13:35.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1377' Mar 25 14:13:35.598: INFO: stderr: "" Mar 25 14:13:35.598: INFO: stdout: "update-demo-nautilus-nf4qh update-demo-nautilus-xqt2f " Mar 25 14:13:35.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf4qh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1377' Mar 25 14:13:35.755: INFO: stderr: "" Mar 25 14:13:35.755: INFO: stdout: "true" Mar 25 14:13:35.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf4qh -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1377' Mar 25 14:13:35.848: INFO: stderr: "" Mar 25 14:13:35.848: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 25 14:13:35.848: INFO: validating pod update-demo-nautilus-nf4qh Mar 25 14:13:35.852: INFO: got data: { "image": "nautilus.jpg" } Mar 25 14:13:35.852: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 14:13:35.852: INFO: update-demo-nautilus-nf4qh is verified up and running Mar 25 14:13:35.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xqt2f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1377' Mar 25 14:13:35.944: INFO: stderr: "" Mar 25 14:13:35.944: INFO: stdout: "true" Mar 25 14:13:35.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xqt2f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1377' Mar 25 14:13:36.033: INFO: stderr: "" Mar 25 14:13:36.033: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 25 14:13:36.033: INFO: validating pod update-demo-nautilus-xqt2f Mar 25 14:13:36.036: INFO: got data: { "image": "nautilus.jpg" } Mar 25 14:13:36.036: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 14:13:36.036: INFO: update-demo-nautilus-xqt2f is verified up and running STEP: using delete to clean up resources Mar 25 14:13:36.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1377' Mar 25 14:13:36.214: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 25 14:13:36.214: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 25 14:13:36.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1377' Mar 25 14:13:36.308: INFO: stderr: "No resources found.\n" Mar 25 14:13:36.308: INFO: stdout: "" Mar 25 14:13:36.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1377 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 25 14:13:36.403: INFO: stderr: "" Mar 25 14:13:36.403: INFO: stdout: "update-demo-nautilus-nf4qh\nupdate-demo-nautilus-xqt2f\n" Mar 25 14:13:36.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1377' Mar 25 14:13:36.999: INFO: stderr: "No resources found.\n" Mar 25 14:13:36.999: INFO: stdout: "" Mar 25 14:13:36.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1377 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 25 14:13:37.084: INFO: stderr: "" Mar 25 14:13:37.084: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:13:37.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1377" for this suite. Mar 25 14:13:43.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:13:43.189: INFO: namespace kubectl-1377 deletion completed in 6.101584175s • [SLOW TEST:29.008 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:13:43.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-ac107e42-b1e1-4f32-885a-15185e74021c STEP: Creating a pod to test consume secrets Mar 25 14:13:43.271: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4d6dc37d-949b-4f4f-90f2-38a77a181569" in namespace "projected-7217" to be "success or failure" Mar 25 14:13:43.287: INFO: Pod 
"pod-projected-secrets-4d6dc37d-949b-4f4f-90f2-38a77a181569": Phase="Pending", Reason="", readiness=false. Elapsed: 15.582699ms Mar 25 14:13:45.291: INFO: Pod "pod-projected-secrets-4d6dc37d-949b-4f4f-90f2-38a77a181569": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020144473s Mar 25 14:13:47.296: INFO: Pod "pod-projected-secrets-4d6dc37d-949b-4f4f-90f2-38a77a181569": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024683043s STEP: Saw pod success Mar 25 14:13:47.296: INFO: Pod "pod-projected-secrets-4d6dc37d-949b-4f4f-90f2-38a77a181569" satisfied condition "success or failure" Mar 25 14:13:47.299: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-4d6dc37d-949b-4f4f-90f2-38a77a181569 container secret-volume-test: STEP: delete the pod Mar 25 14:13:47.318: INFO: Waiting for pod pod-projected-secrets-4d6dc37d-949b-4f4f-90f2-38a77a181569 to disappear Mar 25 14:13:47.352: INFO: Pod pod-projected-secrets-4d6dc37d-949b-4f4f-90f2-38a77a181569 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:13:47.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7217" for this suite. Mar 25 14:13:53.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:13:53.448: INFO: namespace projected-7217 deletion completed in 6.092831218s • [SLOW TEST:10.258 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:13:53.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 14:13:57.539: INFO: Waiting up to 5m0s for pod "client-envvars-f4dc9625-735f-4912-b480-ab00d4de047d" in namespace "pods-9477" to be "success or failure" Mar 25 14:13:57.559: INFO: Pod "client-envvars-f4dc9625-735f-4912-b480-ab00d4de047d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.550876ms Mar 25 14:13:59.562: INFO: Pod "client-envvars-f4dc9625-735f-4912-b480-ab00d4de047d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023332182s Mar 25 14:14:01.570: INFO: Pod "client-envvars-f4dc9625-735f-4912-b480-ab00d4de047d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030490628s STEP: Saw pod success Mar 25 14:14:01.570: INFO: Pod "client-envvars-f4dc9625-735f-4912-b480-ab00d4de047d" satisfied condition "success or failure" Mar 25 14:14:01.572: INFO: Trying to get logs from node iruya-worker pod client-envvars-f4dc9625-735f-4912-b480-ab00d4de047d container env3cont: STEP: delete the pod Mar 25 14:14:01.587: INFO: Waiting for pod client-envvars-f4dc9625-735f-4912-b480-ab00d4de047d to disappear Mar 25 14:14:01.592: INFO: Pod client-envvars-f4dc9625-735f-4912-b480-ab00d4de047d no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:14:01.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9477" for this suite. Mar 25 14:14:43.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:14:43.682: INFO: namespace pods-9477 deletion completed in 42.086418919s • [SLOW TEST:50.233 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:14:43.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 14:14:43.799: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16998b19-3307-42de-885f-f768312f0787" in namespace "projected-6801" to be "success or failure" Mar 25 14:14:43.802: INFO: Pod "downwardapi-volume-16998b19-3307-42de-885f-f768312f0787": Phase="Pending", Reason="", readiness=false. Elapsed: 3.356397ms Mar 25 14:14:45.806: INFO: Pod "downwardapi-volume-16998b19-3307-42de-885f-f768312f0787": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007351317s Mar 25 14:14:47.811: INFO: Pod "downwardapi-volume-16998b19-3307-42de-885f-f768312f0787": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011678853s STEP: Saw pod success Mar 25 14:14:47.811: INFO: Pod "downwardapi-volume-16998b19-3307-42de-885f-f768312f0787" satisfied condition "success or failure" Mar 25 14:14:47.814: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-16998b19-3307-42de-885f-f768312f0787 container client-container: STEP: delete the pod Mar 25 14:14:47.843: INFO: Waiting for pod downwardapi-volume-16998b19-3307-42de-885f-f768312f0787 to disappear Mar 25 14:14:47.857: INFO: Pod downwardapi-volume-16998b19-3307-42de-885f-f768312f0787 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:14:47.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6801" for this suite. Mar 25 14:14:53.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:14:53.965: INFO: namespace projected-6801 deletion completed in 6.104389264s • [SLOW TEST:10.283 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:14:53.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 14:14:54.058: INFO: Create a RollingUpdate DaemonSet Mar 25 14:14:54.062: INFO: Check that daemon pods launch on every node of the cluster Mar 25 14:14:54.067: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 14:14:54.069: INFO: Number of nodes with available pods: 0 Mar 25 14:14:54.069: INFO: Node iruya-worker is running more than one daemon pod Mar 25 14:14:55.075: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 14:14:55.079: INFO: Number of nodes with available pods: 0 Mar 25 14:14:55.079: INFO: Node iruya-worker is running more than one daemon pod Mar 25 14:14:56.075: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 14:14:56.079: INFO: Number of nodes with available pods: 0 Mar 25 14:14:56.079: INFO: Node iruya-worker is running more than one daemon pod Mar 25 14:14:57.074: INFO: DaemonSet pods can't tolerate node 
iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 14:14:57.077: INFO: Number of nodes with available pods: 1 Mar 25 14:14:57.077: INFO: Node iruya-worker2 is running more than one daemon pod Mar 25 14:14:58.074: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 14:14:58.078: INFO: Number of nodes with available pods: 2 Mar 25 14:14:58.078: INFO: Number of running nodes: 2, number of available pods: 2 Mar 25 14:14:58.078: INFO: Update the DaemonSet to trigger a rollout Mar 25 14:14:58.086: INFO: Updating DaemonSet daemon-set Mar 25 14:15:12.104: INFO: Roll back the DaemonSet before rollout is complete Mar 25 14:15:12.111: INFO: Updating DaemonSet daemon-set Mar 25 14:15:12.111: INFO: Make sure DaemonSet rollback is complete Mar 25 14:15:12.162: INFO: Wrong image for pod: daemon-set-vz7sw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Mar 25 14:15:12.162: INFO: Pod daemon-set-vz7sw is not available Mar 25 14:15:12.166: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 14:15:13.170: INFO: Wrong image for pod: daemon-set-vz7sw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Mar 25 14:15:13.170: INFO: Pod daemon-set-vz7sw is not available Mar 25 14:15:13.174: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 14:15:14.170: INFO: Wrong image for pod: daemon-set-vz7sw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
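(The poll records here show the rollback converging: only the pod still running the bad image foo:non-existent is replaced, while pods already on nginx:1.14-alpine are left alone, which is what "should rollback without unnecessary restarts" asserts. A hand-run sketch of the same update-then-rollback flow; the DaemonSet name and labels below are illustrative, not taken from this run:

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-rollback-demo                # illustrative name
spec:
  selector:
    matchLabels: {app: ds-rollback-demo}
  updateStrategy:
    type: RollingUpdate                 # same strategy the test creates
  template:
    metadata:
      labels: {app: ds-rollback-demo}
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Trigger a rollout with an unpullable image, then roll back before it completes.
kubectl set image daemonset/ds-rollback-demo app=foo:non-existent
kubectl rollout undo daemonset/ds-rollback-demo
kubectl rollout status daemonset/ds-rollback-demo
)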
Mar 25 14:15:14.170: INFO: Pod daemon-set-vz7sw is not available Mar 25 14:15:14.174: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 14:15:15.170: INFO: Pod daemon-set-9cwdh is not available Mar 25 14:15:15.174: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8520, will wait for the garbage collector to delete the pods Mar 25 14:15:15.242: INFO: Deleting DaemonSet.extensions daemon-set took: 7.373159ms Mar 25 14:15:15.542: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.272538ms Mar 25 14:15:22.245: INFO: Number of nodes with available pods: 0 Mar 25 14:15:22.245: INFO: Number of running nodes: 0, number of available pods: 0 Mar 25 14:15:22.248: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8520/daemonsets","resourceVersion":"1789040"},"items":null} Mar 25 14:15:22.251: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8520/pods","resourceVersion":"1789040"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:15:22.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8520" for this suite. Mar 25 14:15:28.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:15:28.361: INFO: namespace daemonsets-8520 deletion completed in 6.095536444s • [SLOW TEST:34.395 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:15:28.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 14:15:28.453: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.09221ms)
Mar 25 14:15:28.456: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.965778ms)
Mar 25 14:15:28.459: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.816904ms)
Mar 25 14:15:28.461: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.369212ms)
Mar 25 14:15:28.464: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.535744ms)
Mar 25 14:15:28.467: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.910334ms)
Mar 25 14:15:28.470: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.792082ms)
Mar 25 14:15:28.472: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.795568ms)
Mar 25 14:15:28.475: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.952362ms)
Mar 25 14:15:28.479: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.560738ms)
Mar 25 14:15:28.482: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.186141ms)
Mar 25 14:15:28.486: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.251745ms)
Mar 25 14:15:28.489: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.261319ms)
Mar 25 14:15:28.492: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.950326ms)
Mar 25 14:15:28.495: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.106215ms)
Mar 25 14:15:28.499: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.740791ms)
Mar 25 14:15:28.502: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.363281ms)
Mar 25 14:15:28.506: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.530781ms)
Mar 25 14:15:28.509: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.273186ms)
Mar 25 14:15:28.512: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.235128ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:15:28.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2050" for this suite. Mar 25 14:15:34.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:15:34.619: INFO: namespace proxy-2050 deletion completed in 6.103352662s • [SLOW TEST:6.258 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:15:34.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Mar 25 14:15:35.211: INFO: created pod pod-service-account-defaultsa Mar 25 14:15:35.211: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 25 14:15:35.218: INFO: created pod pod-service-account-mountsa Mar 25 14:15:35.218: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 25 14:15:35.243: INFO: created pod pod-service-account-nomountsa Mar 25 14:15:35.243: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 25 14:15:35.260: INFO: created pod pod-service-account-defaultsa-mountspec Mar 25 14:15:35.261: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 25 14:15:35.331: INFO: created pod pod-service-account-mountsa-mountspec Mar 25 14:15:35.331: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 25 14:15:35.350: INFO: created pod pod-service-account-nomountsa-mountspec Mar 25 14:15:35.350: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 25 14:15:35.395: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 25 14:15:35.395: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 25 14:15:35.468: INFO: created pod pod-service-account-mountsa-nomountspec Mar 25 14:15:35.468: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 25 14:15:35.474: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 25 14:15:35.474: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:15:35.475: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1466" for this suite. Mar 25 14:16:03.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:16:03.671: INFO: namespace svcaccounts-1466 deletion completed in 28.135434213s • [SLOW TEST:29.052 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:16:03.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 25 14:16:13.802: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4318 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 14:16:13.802: INFO: >>> kubeConfig: /root/.kube/config I0325 14:16:13.840669 6 log.go:172] (0xc0009aa580) (0xc003361ae0) Create stream I0325 14:16:13.840702 6 log.go:172] (0xc0009aa580) (0xc003361ae0) Stream added, broadcasting: 1 I0325 14:16:13.843273 6 log.go:172] (0xc0009aa580) Reply frame received for 1 I0325 14:16:13.843324 6 log.go:172] (0xc0009aa580) (0xc003361b80) Create stream I0325 14:16:13.843339 6 log.go:172] (0xc0009aa580) (0xc003361b80) Stream added, broadcasting: 3 I0325 14:16:13.844556 6 log.go:172] (0xc0009aa580) Reply frame received for 3 I0325 14:16:13.844599 6 log.go:172] (0xc0009aa580) (0xc002a5c0a0) Create stream I0325 14:16:13.844613 6 log.go:172] (0xc0009aa580) (0xc002a5c0a0) Stream added, broadcasting: 5 I0325 14:16:13.845734 6 log.go:172] (0xc0009aa580) Reply frame received for 5 I0325 14:16:13.925672 6 log.go:172] (0xc0009aa580) Data frame received for 5 I0325 14:16:13.925712 6 log.go:172] (0xc002a5c0a0) (5) Data frame handling I0325 14:16:13.927027 6 log.go:172] (0xc0009aa580) Data frame received for 3 I0325 14:16:13.927064 6 log.go:172] (0xc003361b80) (3) Data frame handling I0325 14:16:13.927104 6 log.go:172] (0xc003361b80) (3) Data frame sent I0325 14:16:13.927124 6 log.go:172] (0xc0009aa580) Data frame received for 3 I0325 14:16:13.927142 6 log.go:172] (0xc003361b80) (3) Data frame handling I0325 14:16:13.929010 6 log.go:172] (0xc0009aa580) Data frame received for 1 I0325 14:16:13.929046 6 log.go:172] (0xc003361ae0) (1) Data frame handling I0325 14:16:13.929073 6 log.go:172] (0xc003361ae0) (1) Data frame sent I0325 14:16:13.929096 6 log.go:172] (0xc0009aa580) 
(0xc003361ae0) Stream removed, broadcasting: 1 I0325 14:16:13.929286 6 log.go:172] (0xc0009aa580) Go away received I0325 14:16:13.929380 6 log.go:172] (0xc0009aa580) (0xc003361ae0) Stream removed, broadcasting: 1 I0325 14:16:13.929409 6 log.go:172] (0xc0009aa580) (0xc003361b80) Stream removed, broadcasting: 3 I0325 14:16:13.929430 6 log.go:172] (0xc0009aa580) (0xc002a5c0a0) Stream removed, broadcasting: 5 Mar 25 14:16:13.929: INFO: Exec stderr: "" Mar 25 14:16:13.929: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4318 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 14:16:13.929: INFO: >>> kubeConfig: /root/.kube/config I0325 14:16:13.963013 6 log.go:172] (0xc0010aa840) (0xc000679c20) Create stream I0325 14:16:13.963035 6 log.go:172] (0xc0010aa840) (0xc000679c20) Stream added, broadcasting: 1 I0325 14:16:13.964969 6 log.go:172] (0xc0010aa840) Reply frame received for 1 I0325 14:16:13.965010 6 log.go:172] (0xc0010aa840) (0xc0014ba1e0) Create stream I0325 14:16:13.965029 6 log.go:172] (0xc0010aa840) (0xc0014ba1e0) Stream added, broadcasting: 3 I0325 14:16:13.966223 6 log.go:172] (0xc0010aa840) Reply frame received for 3 I0325 14:16:13.966279 6 log.go:172] (0xc0010aa840) (0xc0014ba280) Create stream I0325 14:16:13.966295 6 log.go:172] (0xc0010aa840) (0xc0014ba280) Stream added, broadcasting: 5 I0325 14:16:13.967349 6 log.go:172] (0xc0010aa840) Reply frame received for 5 I0325 14:16:14.033437 6 log.go:172] (0xc0010aa840) Data frame received for 5 I0325 14:16:14.033475 6 log.go:172] (0xc0014ba280) (5) Data frame handling I0325 14:16:14.033506 6 log.go:172] (0xc0010aa840) Data frame received for 3 I0325 14:16:14.033521 6 log.go:172] (0xc0014ba1e0) (3) Data frame handling I0325 14:16:14.033538 6 log.go:172] (0xc0014ba1e0) (3) Data frame sent I0325 14:16:14.033551 6 log.go:172] (0xc0010aa840) Data frame received for 3 I0325 14:16:14.033561 6 log.go:172] (0xc0014ba1e0) (3) Data frame handling I0325 14:16:14.035319 6 log.go:172] (0xc0010aa840) Data frame received for 1 I0325 14:16:14.035334 6 log.go:172] (0xc000679c20) (1) Data frame handling I0325 14:16:14.035343 6 log.go:172] (0xc000679c20) (1) Data frame sent I0325 14:16:14.035361 6 log.go:172] (0xc0010aa840) (0xc000679c20) Stream removed, broadcasting: 1 I0325 14:16:14.035415 6 log.go:172] (0xc0010aa840) Go away received I0325 14:16:14.035480 6 log.go:172] (0xc0010aa840) (0xc000679c20) Stream removed, broadcasting: 1 I0325 14:16:14.035497 6 log.go:172] (0xc0010aa840) (0xc0014ba1e0) Stream removed, broadcasting: 3 I0325 14:16:14.035506 6 log.go:172] (0xc0010aa840) (0xc0014ba280) Stream removed, broadcasting: 5 Mar 25 14:16:14.035: INFO: Exec stderr: "" Mar 25 14:16:14.035: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4318 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 14:16:14.035: INFO: >>> kubeConfig: /root/.kube/config I0325 14:16:14.071929 6 log.go:172] (0xc001032210) (0xc00124d220) Create stream I0325 14:16:14.071956 6 log.go:172] (0xc001032210) (0xc00124d220) Stream added, broadcasting: 1 I0325 14:16:14.074496 6 log.go:172] (0xc001032210) Reply frame received for 1 I0325 14:16:14.074551 6 log.go:172] (0xc001032210) (0xc000679cc0) Create stream I0325 14:16:14.074573 6 log.go:172] (0xc001032210) (0xc000679cc0) Stream added, broadcasting: 3 I0325 14:16:14.075375 6 log.go:172] (0xc001032210) Reply frame received for 3 I0325 
14:16:14.075396 6 log.go:172] (0xc001032210) (0xc00124d680) Create stream I0325 14:16:14.075410 6 log.go:172] (0xc001032210) (0xc00124d680) Stream added, broadcasting: 5 I0325 14:16:14.076213 6 log.go:172] (0xc001032210) Reply frame received for 5 I0325 14:16:14.132531 6 log.go:172] (0xc001032210) Data frame received for 5 I0325 14:16:14.132570 6 log.go:172] (0xc00124d680) (5) Data frame handling I0325 14:16:14.132597 6 log.go:172] (0xc001032210) Data frame received for 3 I0325 14:16:14.132621 6 log.go:172] (0xc000679cc0) (3) Data frame handling I0325 14:16:14.132645 6 log.go:172] (0xc000679cc0) (3) Data frame sent I0325 14:16:14.132672 6 log.go:172] (0xc001032210) Data frame received for 3 I0325 14:16:14.132688 6 log.go:172] (0xc000679cc0) (3) Data frame handling I0325 14:16:14.134488 6 log.go:172] (0xc001032210) Data frame received for 1 I0325 14:16:14.134514 6 log.go:172] (0xc00124d220) (1) Data frame handling I0325 14:16:14.134546 6 log.go:172] (0xc00124d220) (1) Data frame sent I0325 14:16:14.134576 6 log.go:172] (0xc001032210) (0xc00124d220) Stream removed, broadcasting: 1 I0325 14:16:14.134608 6 log.go:172] (0xc001032210) Go away received I0325 14:16:14.134738 6 log.go:172] (0xc001032210) (0xc00124d220) Stream removed, broadcasting: 1 I0325 14:16:14.134769 6 log.go:172] (0xc001032210) (0xc000679cc0) Stream removed, broadcasting: 3 I0325 14:16:14.134792 6 log.go:172] (0xc001032210) (0xc00124d680) Stream removed, broadcasting: 5 Mar 25 14:16:14.134: INFO: Exec stderr: "" Mar 25 14:16:14.134: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4318 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 14:16:14.134: INFO: >>> kubeConfig: /root/.kube/config I0325 14:16:14.161981 6 log.go:172] (0xc000febad0) (0xc002a5c640) Create stream I0325 14:16:14.162003 6 log.go:172] (0xc000febad0) (0xc002a5c640) Stream added, broadcasting: 1 I0325 14:16:14.164024 6 log.go:172] (0xc000febad0) Reply frame received for 1 I0325 14:16:14.164060 6 log.go:172] (0xc000febad0) (0xc00124d860) Create stream I0325 14:16:14.164073 6 log.go:172] (0xc000febad0) (0xc00124d860) Stream added, broadcasting: 3 I0325 14:16:14.165347 6 log.go:172] (0xc000febad0) Reply frame received for 3 I0325 14:16:14.165427 6 log.go:172] (0xc000febad0) (0xc000dcc0a0) Create stream I0325 14:16:14.165456 6 log.go:172] (0xc000febad0) (0xc000dcc0a0) Stream added, broadcasting: 5 I0325 14:16:14.166454 6 log.go:172] (0xc000febad0) Reply frame received for 5 I0325 14:16:14.231617 6 log.go:172] (0xc000febad0) Data frame received for 5 I0325 14:16:14.231672 6 log.go:172] (0xc000dcc0a0) (5) Data frame handling I0325 14:16:14.231697 6 log.go:172] (0xc000febad0) Data frame received for 3 I0325 14:16:14.231711 6 log.go:172] (0xc00124d860) (3) Data frame handling I0325 14:16:14.231734 6 log.go:172] (0xc00124d860) (3) Data frame sent I0325 14:16:14.231750 6 log.go:172] (0xc000febad0) Data frame received for 3 I0325 14:16:14.231761 6 log.go:172] (0xc00124d860) (3) Data frame handling I0325 14:16:14.233562 6 log.go:172] (0xc000febad0) Data frame received for 1 I0325 14:16:14.233586 6 log.go:172] (0xc002a5c640) (1) Data frame handling I0325 14:16:14.233600 6 log.go:172] (0xc002a5c640) (1) Data frame sent I0325 14:16:14.233627 6 log.go:172] (0xc000febad0) (0xc002a5c640) Stream removed, broadcasting: 1 I0325 14:16:14.233655 6 log.go:172] (0xc000febad0) Go away received I0325 14:16:14.233825 6 log.go:172] (0xc000febad0) (0xc002a5c640) Stream removed, 
broadcasting: 1 I0325 14:16:14.233863 6 log.go:172] (0xc000febad0) (0xc00124d860) Stream removed, broadcasting: 3 I0325 14:16:14.233875 6 log.go:172] (0xc000febad0) (0xc000dcc0a0) Stream removed, broadcasting: 5 Mar 25 14:16:14.233: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 25 14:16:14.233: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4318 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 14:16:14.233: INFO: >>> kubeConfig: /root/.kube/config I0325 14:16:14.271972 6 log.go:172] (0xc001cc4420) (0xc002a5c960) Create stream I0325 14:16:14.271999 6 log.go:172] (0xc001cc4420) (0xc002a5c960) Stream added, broadcasting: 1 I0325 14:16:14.277637 6 log.go:172] (0xc001cc4420) Reply frame received for 1 I0325 14:16:14.277732 6 log.go:172] (0xc001cc4420) (0xc00124db80) Create stream I0325 14:16:14.277759 6 log.go:172] (0xc001cc4420) (0xc00124db80) Stream added, broadcasting: 3 I0325 14:16:14.279134 6 log.go:172] (0xc001cc4420) Reply frame received for 3 I0325 14:16:14.279175 6 log.go:172] (0xc001cc4420) (0xc00124df40) Create stream I0325 14:16:14.279187 6 log.go:172] (0xc001cc4420) (0xc00124df40) Stream added, broadcasting: 5 I0325 14:16:14.280741 6 log.go:172] (0xc001cc4420) Reply frame received for 5 I0325 14:16:14.340969 6 log.go:172] (0xc001cc4420) Data frame received for 5 I0325 14:16:14.341006 6 log.go:172] (0xc00124df40) (5) Data frame handling I0325 14:16:14.341047 6 log.go:172] (0xc001cc4420) Data frame received for 3 I0325 14:16:14.341098 6 log.go:172] (0xc00124db80) (3) Data frame handling I0325 14:16:14.341267 6 log.go:172] (0xc00124db80) (3) Data frame sent I0325 14:16:14.341294 6 log.go:172] (0xc001cc4420) Data frame received for 3 I0325 14:16:14.341304 6 log.go:172] (0xc00124db80) (3) Data frame handling I0325 14:16:14.342705 6 log.go:172] (0xc001cc4420) Data frame received for 1 I0325 14:16:14.342721 6 log.go:172] (0xc002a5c960) (1) Data frame handling I0325 14:16:14.342728 6 log.go:172] (0xc002a5c960) (1) Data frame sent I0325 14:16:14.342736 6 log.go:172] (0xc001cc4420) (0xc002a5c960) Stream removed, broadcasting: 1 I0325 14:16:14.342742 6 log.go:172] (0xc001cc4420) Go away received I0325 14:16:14.342922 6 log.go:172] (0xc001cc4420) (0xc002a5c960) Stream removed, broadcasting: 1 I0325 14:16:14.342971 6 log.go:172] (0xc001cc4420) (0xc00124db80) Stream removed, broadcasting: 3 I0325 14:16:14.342993 6 log.go:172] (0xc001cc4420) (0xc00124df40) Stream removed, broadcasting: 5 Mar 25 14:16:14.343: INFO: Exec stderr: "" Mar 25 14:16:14.343: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4318 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 14:16:14.343: INFO: >>> kubeConfig: /root/.kube/config I0325 14:16:14.371993 6 log.go:172] (0xc000ac2c60) (0xc0014baa00) Create stream I0325 14:16:14.372019 6 log.go:172] (0xc000ac2c60) (0xc0014baa00) Stream added, broadcasting: 1 I0325 14:16:14.374410 6 log.go:172] (0xc000ac2c60) Reply frame received for 1 I0325 14:16:14.374447 6 log.go:172] (0xc000ac2c60) (0xc001b781e0) Create stream I0325 14:16:14.374460 6 log.go:172] (0xc000ac2c60) (0xc001b781e0) Stream added, broadcasting: 3 I0325 14:16:14.375379 6 log.go:172] (0xc000ac2c60) Reply frame received for 3 I0325 14:16:14.375413 6 log.go:172] (0xc000ac2c60) (0xc0014babe0) Create stream I0325 14:16:14.375424 
6 log.go:172] (0xc000ac2c60) (0xc0014babe0) Stream added, broadcasting: 5 I0325 14:16:14.376232 6 log.go:172] (0xc000ac2c60) Reply frame received for 5 I0325 14:16:14.458598 6 log.go:172] (0xc000ac2c60) Data frame received for 5 I0325 14:16:14.458629 6 log.go:172] (0xc0014babe0) (5) Data frame handling I0325 14:16:14.458673 6 log.go:172] (0xc000ac2c60) Data frame received for 3 I0325 14:16:14.458708 6 log.go:172] (0xc001b781e0) (3) Data frame handling I0325 14:16:14.458733 6 log.go:172] (0xc001b781e0) (3) Data frame sent I0325 14:16:14.458746 6 log.go:172] (0xc000ac2c60) Data frame received for 3 I0325 14:16:14.458757 6 log.go:172] (0xc001b781e0) (3) Data frame handling I0325 14:16:14.459786 6 log.go:172] (0xc000ac2c60) Data frame received for 1 I0325 14:16:14.459858 6 log.go:172] (0xc0014baa00) (1) Data frame handling I0325 14:16:14.459888 6 log.go:172] (0xc0014baa00) (1) Data frame sent I0325 14:16:14.459909 6 log.go:172] (0xc000ac2c60) (0xc0014baa00) Stream removed, broadcasting: 1 I0325 14:16:14.459939 6 log.go:172] (0xc000ac2c60) Go away received I0325 14:16:14.460019 6 log.go:172] (0xc000ac2c60) (0xc0014baa00) Stream removed, broadcasting: 1 I0325 14:16:14.460039 6 log.go:172] (0xc000ac2c60) (0xc001b781e0) Stream removed, broadcasting: 3 I0325 14:16:14.460047 6 log.go:172] (0xc000ac2c60) (0xc0014babe0) Stream removed, broadcasting: 5 Mar 25 14:16:14.460: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 25 14:16:14.460: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4318 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 14:16:14.460: INFO: >>> kubeConfig: /root/.kube/config I0325 14:16:14.489555 6 log.go:172] (0xc0009ab600) (0xc00296e000) Create stream I0325 14:16:14.489588 6 log.go:172] (0xc0009ab600) (0xc00296e000) Stream added, broadcasting: 1 I0325 14:16:14.492085 6 log.go:172] (0xc0009ab600) Reply frame received for 1 I0325 14:16:14.492117 6 log.go:172] (0xc0009ab600) (0xc002a5ca00) Create stream I0325 14:16:14.492127 6 log.go:172] (0xc0009ab600) (0xc002a5ca00) Stream added, broadcasting: 3 I0325 14:16:14.493098 6 log.go:172] (0xc0009ab600) Reply frame received for 3 I0325 14:16:14.493266 6 log.go:172] (0xc0009ab600) (0xc001b78280) Create stream I0325 14:16:14.493292 6 log.go:172] (0xc0009ab600) (0xc001b78280) Stream added, broadcasting: 5 I0325 14:16:14.494058 6 log.go:172] (0xc0009ab600) Reply frame received for 5 I0325 14:16:14.560936 6 log.go:172] (0xc0009ab600) Data frame received for 5 I0325 14:16:14.560981 6 log.go:172] (0xc001b78280) (5) Data frame handling I0325 14:16:14.561011 6 log.go:172] (0xc0009ab600) Data frame received for 3 I0325 14:16:14.561028 6 log.go:172] (0xc002a5ca00) (3) Data frame handling I0325 14:16:14.561036 6 log.go:172] (0xc002a5ca00) (3) Data frame sent I0325 14:16:14.561048 6 log.go:172] (0xc0009ab600) Data frame received for 3 I0325 14:16:14.561061 6 log.go:172] (0xc002a5ca00) (3) Data frame handling I0325 14:16:14.562759 6 log.go:172] (0xc0009ab600) Data frame received for 1 I0325 14:16:14.562776 6 log.go:172] (0xc00296e000) (1) Data frame handling I0325 14:16:14.562786 6 log.go:172] (0xc00296e000) (1) Data frame sent I0325 14:16:14.562803 6 log.go:172] (0xc0009ab600) (0xc00296e000) Stream removed, broadcasting: 1 I0325 14:16:14.562821 6 log.go:172] (0xc0009ab600) Go away received I0325 14:16:14.562976 6 log.go:172] (0xc0009ab600) (0xc00296e000) Stream 
removed, broadcasting: 1 I0325 14:16:14.562993 6 log.go:172] (0xc0009ab600) (0xc002a5ca00) Stream removed, broadcasting: 3 I0325 14:16:14.563003 6 log.go:172] (0xc0009ab600) (0xc001b78280) Stream removed, broadcasting: 5 Mar 25 14:16:14.563: INFO: Exec stderr: "" Mar 25 14:16:14.563: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4318 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 14:16:14.563: INFO: >>> kubeConfig: /root/.kube/config I0325 14:16:14.588067 6 log.go:172] (0xc001f4cf20) (0xc001b78500) Create stream I0325 14:16:14.588096 6 log.go:172] (0xc001f4cf20) (0xc001b78500) Stream added, broadcasting: 1 I0325 14:16:14.590736 6 log.go:172] (0xc001f4cf20) Reply frame received for 1 I0325 14:16:14.590783 6 log.go:172] (0xc001f4cf20) (0xc000dcc640) Create stream I0325 14:16:14.590799 6 log.go:172] (0xc001f4cf20) (0xc000dcc640) Stream added, broadcasting: 3 I0325 14:16:14.591792 6 log.go:172] (0xc001f4cf20) Reply frame received for 3 I0325 14:16:14.591847 6 log.go:172] (0xc001f4cf20) (0xc000dcc780) Create stream I0325 14:16:14.591863 6 log.go:172] (0xc001f4cf20) (0xc000dcc780) Stream added, broadcasting: 5 I0325 14:16:14.592669 6 log.go:172] (0xc001f4cf20) Reply frame received for 5 I0325 14:16:14.640716 6 log.go:172] (0xc001f4cf20) Data frame received for 5 I0325 14:16:14.640743 6 log.go:172] (0xc000dcc780) (5) Data frame handling I0325 14:16:14.641032 6 log.go:172] (0xc001f4cf20) Data frame received for 3 I0325 14:16:14.641050 6 log.go:172] (0xc000dcc640) (3) Data frame handling I0325 14:16:14.641059 6 log.go:172] (0xc000dcc640) (3) Data frame sent I0325 14:16:14.641078 6 log.go:172] (0xc001f4cf20) Data frame received for 3 I0325 14:16:14.641085 6 log.go:172] (0xc000dcc640) (3) Data frame handling I0325 14:16:14.647779 6 log.go:172] (0xc001f4cf20) Data frame received for 1 I0325 14:16:14.647797 6 log.go:172] (0xc001b78500) (1) Data frame handling I0325 14:16:14.647819 6 log.go:172] (0xc001b78500) (1) Data frame sent I0325 14:16:14.647846 6 log.go:172] (0xc001f4cf20) (0xc001b78500) Stream removed, broadcasting: 1 I0325 14:16:14.647866 6 log.go:172] (0xc001f4cf20) Go away received I0325 14:16:14.647971 6 log.go:172] (0xc001f4cf20) (0xc001b78500) Stream removed, broadcasting: 1 I0325 14:16:14.647991 6 log.go:172] (0xc001f4cf20) (0xc000dcc640) Stream removed, broadcasting: 3 I0325 14:16:14.648002 6 log.go:172] (0xc001f4cf20) (0xc000dcc780) Stream removed, broadcasting: 5 Mar 25 14:16:14.648: INFO: Exec stderr: "" Mar 25 14:16:14.648: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4318 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 14:16:14.648: INFO: >>> kubeConfig: /root/.kube/config I0325 14:16:14.675129 6 log.go:172] (0xc001cc5080) (0xc002a5cd20) Create stream I0325 14:16:14.675149 6 log.go:172] (0xc001cc5080) (0xc002a5cd20) Stream added, broadcasting: 1 I0325 14:16:14.677378 6 log.go:172] (0xc001cc5080) Reply frame received for 1 I0325 14:16:14.677485 6 log.go:172] (0xc001cc5080) (0xc001b786e0) Create stream I0325 14:16:14.677508 6 log.go:172] (0xc001cc5080) (0xc001b786e0) Stream added, broadcasting: 3 I0325 14:16:14.678516 6 log.go:172] (0xc001cc5080) Reply frame received for 3 I0325 14:16:14.678556 6 log.go:172] (0xc001cc5080) (0xc001b78820) Create stream I0325 14:16:14.678570 6 log.go:172] (0xc001cc5080) (0xc001b78820) Stream added, 
broadcasting: 5 I0325 14:16:14.679610 6 log.go:172] (0xc001cc5080) Reply frame received for 5 I0325 14:16:14.730116 6 log.go:172] (0xc001cc5080) Data frame received for 3 I0325 14:16:14.730163 6 log.go:172] (0xc001b786e0) (3) Data frame handling I0325 14:16:14.730215 6 log.go:172] (0xc001b786e0) (3) Data frame sent I0325 14:16:14.730254 6 log.go:172] (0xc001cc5080) Data frame received for 3 I0325 14:16:14.730281 6 log.go:172] (0xc001b786e0) (3) Data frame handling I0325 14:16:14.730318 6 log.go:172] (0xc001cc5080) Data frame received for 5 I0325 14:16:14.730362 6 log.go:172] (0xc001b78820) (5) Data frame handling I0325 14:16:14.732098 6 log.go:172] (0xc001cc5080) Data frame received for 1 I0325 14:16:14.732125 6 log.go:172] (0xc002a5cd20) (1) Data frame handling I0325 14:16:14.732158 6 log.go:172] (0xc002a5cd20) (1) Data frame sent I0325 14:16:14.732185 6 log.go:172] (0xc001cc5080) (0xc002a5cd20) Stream removed, broadcasting: 1 I0325 14:16:14.732268 6 log.go:172] (0xc001cc5080) Go away received I0325 14:16:14.732293 6 log.go:172] (0xc001cc5080) (0xc002a5cd20) Stream removed, broadcasting: 1 I0325 14:16:14.732303 6 log.go:172] (0xc001cc5080) (0xc001b786e0) Stream removed, broadcasting: 3 I0325 14:16:14.732311 6 log.go:172] (0xc001cc5080) (0xc001b78820) Stream removed, broadcasting: 5 Mar 25 14:16:14.732: INFO: Exec stderr: "" Mar 25 14:16:14.732: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4318 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 14:16:14.732: INFO: >>> kubeConfig: /root/.kube/config I0325 14:16:14.769101 6 log.go:172] (0xc00223aa50) (0xc00296e320) Create stream I0325 14:16:14.769288 6 log.go:172] (0xc00223aa50) (0xc00296e320) Stream added, broadcasting: 1 I0325 14:16:14.778568 6 log.go:172] (0xc00223aa50) Reply frame received for 1 I0325 14:16:14.778636 6 log.go:172] (0xc00223aa50) (0xc002a5cdc0) Create stream I0325 14:16:14.778685 6 log.go:172] (0xc00223aa50) (0xc002a5cdc0) Stream added, broadcasting: 3 I0325 14:16:14.779794 6 log.go:172] (0xc00223aa50) Reply frame received for 3 I0325 14:16:14.779831 6 log.go:172] (0xc00223aa50) (0xc002a5ce60) Create stream I0325 14:16:14.779936 6 log.go:172] (0xc00223aa50) (0xc002a5ce60) Stream added, broadcasting: 5 I0325 14:16:14.780881 6 log.go:172] (0xc00223aa50) Reply frame received for 5 I0325 14:16:14.843439 6 log.go:172] (0xc00223aa50) Data frame received for 3 I0325 14:16:14.843464 6 log.go:172] (0xc002a5cdc0) (3) Data frame handling I0325 14:16:14.843486 6 log.go:172] (0xc002a5cdc0) (3) Data frame sent I0325 14:16:14.843494 6 log.go:172] (0xc00223aa50) Data frame received for 3 I0325 14:16:14.843503 6 log.go:172] (0xc002a5cdc0) (3) Data frame handling I0325 14:16:14.844396 6 log.go:172] (0xc00223aa50) Data frame received for 5 I0325 14:16:14.844418 6 log.go:172] (0xc002a5ce60) (5) Data frame handling I0325 14:16:14.845053 6 log.go:172] (0xc00223aa50) Data frame received for 1 I0325 14:16:14.845068 6 log.go:172] (0xc00296e320) (1) Data frame handling I0325 14:16:14.845076 6 log.go:172] (0xc00296e320) (1) Data frame sent I0325 14:16:14.845083 6 log.go:172] (0xc00223aa50) (0xc00296e320) Stream removed, broadcasting: 1 I0325 14:16:14.845221 6 log.go:172] (0xc00223aa50) (0xc00296e320) Stream removed, broadcasting: 1 I0325 14:16:14.845242 6 log.go:172] (0xc00223aa50) (0xc002a5cdc0) Stream removed, broadcasting: 3 I0325 14:16:14.845448 6 log.go:172] (0xc00223aa50) (0xc002a5ce60) Stream removed, broadcasting: 5 
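(The interleaved log.go records here are the e2e framework's ExecWithOptions streaming "cat /etc/hosts" over SPDY; the Create/Reply/Data/Stream-removed lines are the stdin/stdout/stderr channels of each exec call. The same verification can be approximated by hand while the test pods exist; a sketch using the pod and container names from this run:

kubectl exec test-pod -n e2e-kubelet-etc-hosts-4318 -c busybox-1 -- cat /etc/hosts
# A kubelet-managed file starts with a "# Kubernetes-managed hosts file." header.
kubectl exec test-pod -n e2e-kubelet-etc-hosts-4318 -c busybox-3 -- cat /etc/hosts
# busybox-3 mounts its own /etc/hosts, so no kubelet header is expected there,
# and the same goes for the hostNetwork=true pod:
kubectl exec test-host-network-pod -n e2e-kubelet-etc-hosts-4318 -c busybox-1 -- cat /etc/hosts
)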
I0325 14:16:14.845501 6 log.go:172] (0xc00223aa50) Go away received Mar 25 14:16:14.845: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:16:14.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4318" for this suite. Mar 25 14:17:04.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:17:04.940: INFO: namespace e2e-kubelet-etc-hosts-4318 deletion completed in 50.090328404s • [SLOW TEST:61.268 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:17:04.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:17:05.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1312" for this suite. 
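(The QOS class assertion above needs only the API server's response: status.qosClass is computed from the resource spec, and requests equal to limits for every container yields Guaranteed. A minimal hand-run sketch; the pod name and resource values are illustrative, not from this run:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests: {cpu: 100m, memory: 100Mi}
      limits:   {cpu: 100m, memory: 100Mi}   # requests == limits => Guaranteed
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # expect: Guaranteed
)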
Mar 25 14:17:27.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:17:27.170: INFO: namespace pods-1312 deletion completed in 22.119944904s • [SLOW TEST:22.229 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:17:27.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-bbb84f43-16c8-4edd-9004-357bdfb58e76 STEP: Creating secret with name s-test-opt-upd-0460c311-85a0-4217-86f8-8cfb2b4d2735 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-bbb84f43-16c8-4edd-9004-357bdfb58e76 STEP: Updating secret s-test-opt-upd-0460c311-85a0-4217-86f8-8cfb2b4d2735 STEP: Creating secret with name s-test-opt-create-c8faec62-2210-49af-a932-800e678a3da2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:18:49.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4820" for this suite. 
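(The sequence above, deleting one optional secret, updating another, and creating a third while waiting for the volume to catch up, works because projected secret sources marked optional never block the pod, and the kubelet periodically refreshes projected volume contents. A minimal sketch of such a pod; the secret and pod names below are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo         # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: maybe-missing-secret    # illustrative; may be deleted/created at runtime
          optional: true                # pod starts and keeps running even if absent
EOF
# Deleting or re-creating the secret is eventually reflected under /etc/projected.
)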
Mar 25 14:19:11.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:19:11.859: INFO: namespace projected-4820 deletion completed in 22.147499148s • [SLOW TEST:104.689 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:19:11.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Mar 25 14:19:11.914: INFO: Waiting up to 5m0s for pod "var-expansion-463884f0-1390-4dff-9318-834ba898de27" in namespace "var-expansion-1170" to be "success or failure" Mar 25 14:19:11.929: INFO: Pod "var-expansion-463884f0-1390-4dff-9318-834ba898de27": Phase="Pending", Reason="", readiness=false. Elapsed: 15.382598ms Mar 25 14:19:13.933: INFO: Pod "var-expansion-463884f0-1390-4dff-9318-834ba898de27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019403525s Mar 25 14:19:15.937: INFO: Pod "var-expansion-463884f0-1390-4dff-9318-834ba898de27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02374838s STEP: Saw pod success Mar 25 14:19:15.937: INFO: Pod "var-expansion-463884f0-1390-4dff-9318-834ba898de27" satisfied condition "success or failure" Mar 25 14:19:15.941: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-463884f0-1390-4dff-9318-834ba898de27 container dapi-container: STEP: delete the pod Mar 25 14:19:15.973: INFO: Waiting for pod var-expansion-463884f0-1390-4dff-9318-834ba898de27 to disappear Mar 25 14:19:15.989: INFO: Pod var-expansion-463884f0-1390-4dff-9318-834ba898de27 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:19:15.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1170" for this suite. 
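(The env-composition test above relies on $(VAR) references in an env value being expanded from variables defined earlier in the same env list, before the container starts. A minimal sketch, names illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $COMPOSED"]
    env:
    - name: FIRST
      value: "foo"
    - name: COMPOSED
      value: "prefix-$(FIRST)-suffix"   # $(FIRST) is expanded before the container starts
EOF
kubectl logs var-expansion-demo         # expect: prefix-foo-suffix
)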
Mar 25 14:19:22.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:19:22.102: INFO: namespace var-expansion-1170 deletion completed in 6.109922005s • [SLOW TEST:10.242 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:19:22.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-5029 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5029 to expose endpoints map[] Mar 25 14:19:22.234: INFO: Get endpoints failed (2.93107ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 25 14:19:23.238: INFO: successfully validated that service endpoint-test2 in namespace services-5029 exposes endpoints map[] (1.006294127s elapsed) STEP: Creating pod pod1 in namespace services-5029 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5029 to expose endpoints map[pod1:[80]] Mar 25 14:19:26.276: INFO: successfully validated that service endpoint-test2 in namespace services-5029 exposes endpoints map[pod1:[80]] (3.031654684s elapsed) STEP: Creating pod pod2 in namespace services-5029 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5029 to expose endpoints map[pod1:[80] pod2:[80]] Mar 25 14:19:29.336: INFO: successfully validated that service endpoint-test2 in namespace services-5029 exposes endpoints map[pod1:[80] pod2:[80]] (3.055950621s elapsed) STEP: Deleting pod pod1 in namespace services-5029 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5029 to expose endpoints map[pod2:[80]] Mar 25 14:19:30.383: INFO: successfully validated that service endpoint-test2 in namespace services-5029 exposes endpoints map[pod2:[80]] (1.042510574s elapsed) STEP: Deleting pod pod2 in namespace services-5029 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5029 to expose endpoints map[] Mar 25 14:19:31.398: INFO: successfully validated that service endpoint-test2 in namespace services-5029 exposes endpoints map[] (1.009336278s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:19:31.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5029" for this suite. 
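(The endpoint-test2 flow above is the endpoints controller at work: the Endpoints object of a Service tracks the ready pods matched by the Service selector, so creating and deleting labelled pods adds and removes addresses. A hand-run sketch with illustrative names:

kubectl create service clusterip endpoint-demo --tcp=80:80      # selector defaults to app=endpoint-demo
kubectl run pod1 --image=busybox --labels=app=endpoint-demo --restart=Never -- sh -c 'sleep 3600'
kubectl get endpoints endpoint-demo     # pod1's IP appears once the pod is ready
kubectl delete pod pod1
kubectl get endpoints endpoint-demo     # the address is removed again
)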
Mar 25 14:19:47.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:19:47.510: INFO: namespace services-5029 deletion completed in 16.078072321s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:25.407 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:19:47.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-5a3640b9-b238-4c4f-b53e-6b6742902265 in namespace container-probe-3239 Mar 25 14:19:51.649: INFO: Started pod busybox-5a3640b9-b238-4c4f-b53e-6b6742902265 in namespace container-probe-3239 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 14:19:51.652: INFO: Initial restart count of pod busybox-5a3640b9-b238-4c4f-b53e-6b6742902265 is 0 Mar 25 14:20:41.761: INFO: Restart count of pod container-probe-3239/busybox-5a3640b9-b238-4c4f-b53e-6b6742902265 is now 1 (50.109011527s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:20:41.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3239" for this suite. 
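(The restart observed above, restartCount going from 0 to 1 after roughly 50s, is the standard exec-probe lifecycle: the probe passes while /tmp/health exists and starts failing once it is removed, so the kubelet kills and restarts the container after failureThreshold consecutive failures. A minimal sketch; the pod name and timings are illustrative, not this test's exact spec:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo              # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3               # consecutive failures before the kubelet restarts it
EOF
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
)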
Mar 25 14:20:47.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:20:47.940: INFO: namespace container-probe-3239 deletion completed in 6.140695413s • [SLOW TEST:60.429 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:20:47.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8212 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8212 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8212 Mar 25 14:20:48.033: INFO: Found 0 stateful pods, waiting for 1 Mar 25 14:20:58.037: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 25 14:20:58.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8212 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 25 14:20:58.281: INFO: stderr: "I0325 14:20:58.165892 2921 log.go:172] (0xc000730420) (0xc0004101e0) Create stream\nI0325 14:20:58.165945 2921 log.go:172] (0xc000730420) (0xc0004101e0) Stream added, broadcasting: 1\nI0325 14:20:58.170626 2921 log.go:172] (0xc000730420) Reply frame received for 1\nI0325 14:20:58.170786 2921 log.go:172] (0xc000730420) (0xc00056a000) Create stream\nI0325 14:20:58.170953 2921 log.go:172] (0xc000730420) (0xc00056a000) Stream added, broadcasting: 3\nI0325 14:20:58.173887 2921 log.go:172] (0xc000730420) Reply frame received for 3\nI0325 14:20:58.173937 2921 log.go:172] (0xc000730420) (0xc00056a500) Create stream\nI0325 14:20:58.173950 2921 log.go:172] (0xc000730420) (0xc00056a500) Stream added, broadcasting: 5\nI0325 14:20:58.174876 2921 log.go:172] (0xc000730420) Reply frame received for 5\nI0325 14:20:58.246782 2921 log.go:172] (0xc000730420) Data frame received for 5\nI0325 14:20:58.246806 2921 log.go:172] (0xc00056a500) (5) Data frame handling\nI0325 14:20:58.246837 2921 log.go:172] (0xc00056a500) (5) Data
frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0325 14:20:58.273970 2921 log.go:172] (0xc000730420) Data frame received for 3\nI0325 14:20:58.273994 2921 log.go:172] (0xc00056a000) (3) Data frame handling\nI0325 14:20:58.274103 2921 log.go:172] (0xc00056a000) (3) Data frame sent\nI0325 14:20:58.274121 2921 log.go:172] (0xc000730420) Data frame received for 3\nI0325 14:20:58.274129 2921 log.go:172] (0xc00056a000) (3) Data frame handling\nI0325 14:20:58.274354 2921 log.go:172] (0xc000730420) Data frame received for 5\nI0325 14:20:58.274367 2921 log.go:172] (0xc00056a500) (5) Data frame handling\nI0325 14:20:58.276149 2921 log.go:172] (0xc000730420) Data frame received for 1\nI0325 14:20:58.276161 2921 log.go:172] (0xc0004101e0) (1) Data frame handling\nI0325 14:20:58.276166 2921 log.go:172] (0xc0004101e0) (1) Data frame sent\nI0325 14:20:58.276174 2921 log.go:172] (0xc000730420) (0xc0004101e0) Stream removed, broadcasting: 1\nI0325 14:20:58.276310 2921 log.go:172] (0xc000730420) Go away received\nI0325 14:20:58.276397 2921 log.go:172] (0xc000730420) (0xc0004101e0) Stream removed, broadcasting: 1\nI0325 14:20:58.276408 2921 log.go:172] (0xc000730420) (0xc00056a000) Stream removed, broadcasting: 3\nI0325 14:20:58.276415 2921 log.go:172] (0xc000730420) (0xc00056a500) Stream removed, broadcasting: 5\n" Mar 25 14:20:58.281: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 25 14:20:58.281: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 25 14:20:58.285: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 25 14:21:08.290: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 25 14:21:08.290: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 14:21:08.305: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999305s Mar 25 14:21:09.313: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996080608s Mar 25 14:21:10.317: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988397187s Mar 25 14:21:11.322: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.983662806s Mar 25 14:21:12.327: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.978975602s Mar 25 14:21:13.331: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.974100194s Mar 25 14:21:14.336: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.969615293s Mar 25 14:21:15.340: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.965148579s Mar 25 14:21:16.345: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.96112748s Mar 25 14:21:17.349: INFO: Verifying statefulset ss doesn't scale past 1 for another 956.001045ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8212 Mar 25 14:21:18.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8212 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 25 14:21:18.556: INFO: stderr: "I0325 14:21:18.475726 2942 log.go:172] (0xc000119080) (0xc0001dce60) Create stream\nI0325 14:21:18.475775 2942 log.go:172] (0xc000119080) (0xc0001dce60) Stream added, broadcasting: 1\nI0325 14:21:18.479112 2942 log.go:172] (0xc000119080) Reply frame received for 1\nI0325 14:21:18.479138 2942 log.go:172] (0xc000119080) 
(0xc0001dc5a0) Create stream\nI0325 14:21:18.479147 2942 log.go:172] (0xc000119080) (0xc0001dc5a0) Stream added, broadcasting: 3\nI0325 14:21:18.480031 2942 log.go:172] (0xc000119080) Reply frame received for 3\nI0325 14:21:18.480077 2942 log.go:172] (0xc000119080) (0xc00001c000) Create stream\nI0325 14:21:18.480093 2942 log.go:172] (0xc000119080) (0xc00001c000) Stream added, broadcasting: 5\nI0325 14:21:18.481015 2942 log.go:172] (0xc000119080) Reply frame received for 5\nI0325 14:21:18.548872 2942 log.go:172] (0xc000119080) Data frame received for 5\nI0325 14:21:18.548903 2942 log.go:172] (0xc00001c000) (5) Data frame handling\nI0325 14:21:18.548937 2942 log.go:172] (0xc00001c000) (5) Data frame sent\nI0325 14:21:18.548959 2942 log.go:172] (0xc000119080) Data frame received for 5\nI0325 14:21:18.548976 2942 log.go:172] (0xc00001c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0325 14:21:18.549339 2942 log.go:172] (0xc000119080) Data frame received for 3\nI0325 14:21:18.549373 2942 log.go:172] (0xc0001dc5a0) (3) Data frame handling\nI0325 14:21:18.549386 2942 log.go:172] (0xc0001dc5a0) (3) Data frame sent\nI0325 14:21:18.549397 2942 log.go:172] (0xc000119080) Data frame received for 3\nI0325 14:21:18.549406 2942 log.go:172] (0xc0001dc5a0) (3) Data frame handling\nI0325 14:21:18.551028 2942 log.go:172] (0xc000119080) Data frame received for 1\nI0325 14:21:18.551052 2942 log.go:172] (0xc0001dce60) (1) Data frame handling\nI0325 14:21:18.551078 2942 log.go:172] (0xc0001dce60) (1) Data frame sent\nI0325 14:21:18.551099 2942 log.go:172] (0xc000119080) (0xc0001dce60) Stream removed, broadcasting: 1\nI0325 14:21:18.551165 2942 log.go:172] (0xc000119080) Go away received\nI0325 14:21:18.551546 2942 log.go:172] (0xc000119080) (0xc0001dce60) Stream removed, broadcasting: 1\nI0325 14:21:18.551569 2942 log.go:172] (0xc000119080) (0xc0001dc5a0) Stream removed, broadcasting: 3\nI0325 14:21:18.551581 2942 log.go:172] (0xc000119080) (0xc00001c000) Stream removed, broadcasting: 5\n" Mar 25 14:21:18.556: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 25 14:21:18.556: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 25 14:21:18.560: INFO: Found 1 stateful pods, waiting for 3 Mar 25 14:21:28.564: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 25 14:21:28.564: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 25 14:21:28.564: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 25 14:21:28.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8212 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 25 14:21:28.776: INFO: stderr: "I0325 14:21:28.694123 2963 log.go:172] (0xc0009aa420) (0xc0003846e0) Create stream\nI0325 14:21:28.694181 2963 log.go:172] (0xc0009aa420) (0xc0003846e0) Stream added, broadcasting: 1\nI0325 14:21:28.697915 2963 log.go:172] (0xc0009aa420) Reply frame received for 1\nI0325 14:21:28.698051 2963 log.go:172] (0xc0009aa420) (0xc0001ec320) Create stream\nI0325 14:21:28.698077 2963 log.go:172] (0xc0009aa420) (0xc0001ec320) Stream added, broadcasting: 3\nI0325 14:21:28.698935 2963 log.go:172] (0xc0009aa420) Reply frame received 
for 3\nI0325 14:21:28.698957 2963 log.go:172] (0xc0009aa420) (0xc0001ec3c0) Create stream\nI0325 14:21:28.698964 2963 log.go:172] (0xc0009aa420) (0xc0001ec3c0) Stream added, broadcasting: 5\nI0325 14:21:28.699837 2963 log.go:172] (0xc0009aa420) Reply frame received for 5\nI0325 14:21:28.771150 2963 log.go:172] (0xc0009aa420) Data frame received for 5\nI0325 14:21:28.771205 2963 log.go:172] (0xc0001ec3c0) (5) Data frame handling\nI0325 14:21:28.771227 2963 log.go:172] (0xc0001ec3c0) (5) Data frame sent\nI0325 14:21:28.771244 2963 log.go:172] (0xc0009aa420) Data frame received for 5\nI0325 14:21:28.771259 2963 log.go:172] (0xc0001ec3c0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0325 14:21:28.771294 2963 log.go:172] (0xc0009aa420) Data frame received for 3\nI0325 14:21:28.771317 2963 log.go:172] (0xc0001ec320) (3) Data frame handling\nI0325 14:21:28.771330 2963 log.go:172] (0xc0001ec320) (3) Data frame sent\nI0325 14:21:28.771337 2963 log.go:172] (0xc0009aa420) Data frame received for 3\nI0325 14:21:28.771343 2963 log.go:172] (0xc0001ec320) (3) Data frame handling\nI0325 14:21:28.772809 2963 log.go:172] (0xc0009aa420) Data frame received for 1\nI0325 14:21:28.772841 2963 log.go:172] (0xc0003846e0) (1) Data frame handling\nI0325 14:21:28.772859 2963 log.go:172] (0xc0003846e0) (1) Data frame sent\nI0325 14:21:28.772878 2963 log.go:172] (0xc0009aa420) (0xc0003846e0) Stream removed, broadcasting: 1\nI0325 14:21:28.772898 2963 log.go:172] (0xc0009aa420) Go away received\nI0325 14:21:28.773262 2963 log.go:172] (0xc0009aa420) (0xc0003846e0) Stream removed, broadcasting: 1\nI0325 14:21:28.773278 2963 log.go:172] (0xc0009aa420) (0xc0001ec320) Stream removed, broadcasting: 3\nI0325 14:21:28.773284 2963 log.go:172] (0xc0009aa420) (0xc0001ec3c0) Stream removed, broadcasting: 5\n" Mar 25 14:21:28.777: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 25 14:21:28.777: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 25 14:21:28.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8212 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 25 14:21:29.011: INFO: stderr: "I0325 14:21:28.906728 2986 log.go:172] (0xc0009f6420) (0xc0009ec5a0) Create stream\nI0325 14:21:28.906775 2986 log.go:172] (0xc0009f6420) (0xc0009ec5a0) Stream added, broadcasting: 1\nI0325 14:21:28.909299 2986 log.go:172] (0xc0009f6420) Reply frame received for 1\nI0325 14:21:28.909362 2986 log.go:172] (0xc0009f6420) (0xc000926000) Create stream\nI0325 14:21:28.909379 2986 log.go:172] (0xc0009f6420) (0xc000926000) Stream added, broadcasting: 3\nI0325 14:21:28.910186 2986 log.go:172] (0xc0009f6420) Reply frame received for 3\nI0325 14:21:28.910221 2986 log.go:172] (0xc0009f6420) (0xc0009ec640) Create stream\nI0325 14:21:28.910229 2986 log.go:172] (0xc0009f6420) (0xc0009ec640) Stream added, broadcasting: 5\nI0325 14:21:28.911068 2986 log.go:172] (0xc0009f6420) Reply frame received for 5\nI0325 14:21:28.976345 2986 log.go:172] (0xc0009f6420) Data frame received for 5\nI0325 14:21:28.976377 2986 log.go:172] (0xc0009ec640) (5) Data frame handling\nI0325 14:21:28.976399 2986 log.go:172] (0xc0009ec640) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0325 14:21:29.004893 2986 log.go:172] (0xc0009f6420) Data frame received for 3\nI0325 14:21:29.004929 2986 log.go:172] (0xc000926000) (3) Data frame 
handling\nI0325 14:21:29.004953 2986 log.go:172] (0xc000926000) (3) Data frame sent\nI0325 14:21:29.004967 2986 log.go:172] (0xc0009f6420) Data frame received for 3\nI0325 14:21:29.004978 2986 log.go:172] (0xc000926000) (3) Data frame handling\nI0325 14:21:29.005002 2986 log.go:172] (0xc0009f6420) Data frame received for 5\nI0325 14:21:29.005041 2986 log.go:172] (0xc0009ec640) (5) Data frame handling\nI0325 14:21:29.007098 2986 log.go:172] (0xc0009f6420) Data frame received for 1\nI0325 14:21:29.007120 2986 log.go:172] (0xc0009ec5a0) (1) Data frame handling\nI0325 14:21:29.007132 2986 log.go:172] (0xc0009ec5a0) (1) Data frame sent\nI0325 14:21:29.007163 2986 log.go:172] (0xc0009f6420) (0xc0009ec5a0) Stream removed, broadcasting: 1\nI0325 14:21:29.007202 2986 log.go:172] (0xc0009f6420) Go away received\nI0325 14:21:29.007502 2986 log.go:172] (0xc0009f6420) (0xc0009ec5a0) Stream removed, broadcasting: 1\nI0325 14:21:29.007516 2986 log.go:172] (0xc0009f6420) (0xc000926000) Stream removed, broadcasting: 3\nI0325 14:21:29.007523 2986 log.go:172] (0xc0009f6420) (0xc0009ec640) Stream removed, broadcasting: 5\n" Mar 25 14:21:29.011: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 25 14:21:29.011: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 25 14:21:29.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8212 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 25 14:21:29.273: INFO: stderr: "I0325 14:21:29.161584 3007 log.go:172] (0xc0009080b0) (0xc00094c640) Create stream\nI0325 14:21:29.161637 3007 log.go:172] (0xc0009080b0) (0xc00094c640) Stream added, broadcasting: 1\nI0325 14:21:29.163766 3007 log.go:172] (0xc0009080b0) Reply frame received for 1\nI0325 14:21:29.163823 3007 log.go:172] (0xc0009080b0) (0xc00096c000) Create stream\nI0325 14:21:29.163844 3007 log.go:172] (0xc0009080b0) (0xc00096c000) Stream added, broadcasting: 3\nI0325 14:21:29.164757 3007 log.go:172] (0xc0009080b0) Reply frame received for 3\nI0325 14:21:29.164790 3007 log.go:172] (0xc0009080b0) (0xc000612280) Create stream\nI0325 14:21:29.164801 3007 log.go:172] (0xc0009080b0) (0xc000612280) Stream added, broadcasting: 5\nI0325 14:21:29.165608 3007 log.go:172] (0xc0009080b0) Reply frame received for 5\nI0325 14:21:29.224190 3007 log.go:172] (0xc0009080b0) Data frame received for 5\nI0325 14:21:29.224225 3007 log.go:172] (0xc000612280) (5) Data frame handling\nI0325 14:21:29.224247 3007 log.go:172] (0xc000612280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0325 14:21:29.266714 3007 log.go:172] (0xc0009080b0) Data frame received for 5\nI0325 14:21:29.266745 3007 log.go:172] (0xc000612280) (5) Data frame handling\nI0325 14:21:29.266764 3007 log.go:172] (0xc0009080b0) Data frame received for 3\nI0325 14:21:29.266768 3007 log.go:172] (0xc00096c000) (3) Data frame handling\nI0325 14:21:29.266786 3007 log.go:172] (0xc00096c000) (3) Data frame sent\nI0325 14:21:29.266793 3007 log.go:172] (0xc0009080b0) Data frame received for 3\nI0325 14:21:29.266799 3007 log.go:172] (0xc00096c000) (3) Data frame handling\nI0325 14:21:29.268688 3007 log.go:172] (0xc0009080b0) Data frame received for 1\nI0325 14:21:29.268717 3007 log.go:172] (0xc00094c640) (1) Data frame handling\nI0325 14:21:29.268739 3007 log.go:172] (0xc00094c640) (1) Data frame sent\nI0325 14:21:29.268757 3007 log.go:172] (0xc0009080b0) (0xc00094c640) 
Stream removed, broadcasting: 1\nI0325 14:21:29.268780 3007 log.go:172] (0xc0009080b0) Go away received\nI0325 14:21:29.269061 3007 log.go:172] (0xc0009080b0) (0xc00094c640) Stream removed, broadcasting: 1\nI0325 14:21:29.269077 3007 log.go:172] (0xc0009080b0) (0xc00096c000) Stream removed, broadcasting: 3\nI0325 14:21:29.269082 3007 log.go:172] (0xc0009080b0) (0xc000612280) Stream removed, broadcasting: 5\n" Mar 25 14:21:29.274: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 25 14:21:29.274: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 25 14:21:29.274: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 14:21:29.278: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 25 14:21:39.286: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 25 14:21:39.286: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 25 14:21:39.286: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 25 14:21:39.304: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999364s Mar 25 14:21:40.308: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989956001s Mar 25 14:21:41.313: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986058354s Mar 25 14:21:42.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980807648s Mar 25 14:21:43.324: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97519891s Mar 25 14:21:44.329: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.969812416s Mar 25 14:21:45.335: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.964393518s Mar 25 14:21:46.340: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.95934964s Mar 25 14:21:47.345: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.953973244s Mar 25 14:21:48.350: INFO: Verifying statefulset ss doesn't scale past 3 for another 948.709248ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8212 Mar 25 14:21:49.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8212 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 25 14:21:49.564: INFO: stderr: "I0325 14:21:49.472871 3029 log.go:172] (0xc0008c6420) (0xc00096c640) Create stream\nI0325 14:21:49.472922 3029 log.go:172] (0xc0008c6420) (0xc00096c640) Stream added, broadcasting: 1\nI0325 14:21:49.474968 3029 log.go:172] (0xc0008c6420) Reply frame received for 1\nI0325 14:21:49.474995 3029 log.go:172] (0xc0008c6420) (0xc00065e1e0) Create stream\nI0325 14:21:49.475004 3029 log.go:172] (0xc0008c6420) (0xc00065e1e0) Stream added, broadcasting: 3\nI0325 14:21:49.475710 3029 log.go:172] (0xc0008c6420) Reply frame received for 3\nI0325 14:21:49.475732 3029 log.go:172] (0xc0008c6420) (0xc00096c6e0) Create stream\nI0325 14:21:49.475744 3029 log.go:172] (0xc0008c6420) (0xc00096c6e0) Stream added, broadcasting: 5\nI0325 14:21:49.476487 3029 log.go:172] (0xc0008c6420) Reply frame received for 5\nI0325 14:21:49.556948 3029 log.go:172] (0xc0008c6420) Data frame received for 5\nI0325 14:21:49.557009 3029 log.go:172] (0xc00096c6e0) (5) Data frame handling\nI0325 14:21:49.557032 3029 log.go:172] (0xc00096c6e0) (5) Data frame sent\n+ mv -v
/tmp/index.html /usr/share/nginx/html/\nI0325 14:21:49.557061 3029 log.go:172] (0xc0008c6420) Data frame received for 3\nI0325 14:21:49.557079 3029 log.go:172] (0xc00065e1e0) (3) Data frame handling\nI0325 14:21:49.557265 3029 log.go:172] (0xc00065e1e0) (3) Data frame sent\nI0325 14:21:49.557300 3029 log.go:172] (0xc0008c6420) Data frame received for 3\nI0325 14:21:49.557316 3029 log.go:172] (0xc00065e1e0) (3) Data frame handling\nI0325 14:21:49.557391 3029 log.go:172] (0xc0008c6420) Data frame received for 5\nI0325 14:21:49.557431 3029 log.go:172] (0xc00096c6e0) (5) Data frame handling\nI0325 14:21:49.559198 3029 log.go:172] (0xc0008c6420) Data frame received for 1\nI0325 14:21:49.559221 3029 log.go:172] (0xc00096c640) (1) Data frame handling\nI0325 14:21:49.559241 3029 log.go:172] (0xc00096c640) (1) Data frame sent\nI0325 14:21:49.559280 3029 log.go:172] (0xc0008c6420) (0xc00096c640) Stream removed, broadcasting: 1\nI0325 14:21:49.559295 3029 log.go:172] (0xc0008c6420) Go away received\nI0325 14:21:49.559721 3029 log.go:172] (0xc0008c6420) (0xc00096c640) Stream removed, broadcasting: 1\nI0325 14:21:49.559745 3029 log.go:172] (0xc0008c6420) (0xc00065e1e0) Stream removed, broadcasting: 3\nI0325 14:21:49.559757 3029 log.go:172] (0xc0008c6420) (0xc00096c6e0) Stream removed, broadcasting: 5\n" Mar 25 14:21:49.564: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 25 14:21:49.564: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 25 14:21:49.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8212 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 25 14:21:49.765: INFO: stderr: "I0325 14:21:49.703501 3050 log.go:172] (0xc000518420) (0xc0004ae6e0) Create stream\nI0325 14:21:49.703575 3050 log.go:172] (0xc000518420) (0xc0004ae6e0) Stream added, broadcasting: 1\nI0325 14:21:49.706037 3050 log.go:172] (0xc000518420) Reply frame received for 1\nI0325 14:21:49.706096 3050 log.go:172] (0xc000518420) (0xc0007fe000) Create stream\nI0325 14:21:49.706114 3050 log.go:172] (0xc000518420) (0xc0007fe000) Stream added, broadcasting: 3\nI0325 14:21:49.706965 3050 log.go:172] (0xc000518420) Reply frame received for 3\nI0325 14:21:49.706988 3050 log.go:172] (0xc000518420) (0xc0007fe0a0) Create stream\nI0325 14:21:49.706995 3050 log.go:172] (0xc000518420) (0xc0007fe0a0) Stream added, broadcasting: 5\nI0325 14:21:49.707850 3050 log.go:172] (0xc000518420) Reply frame received for 5\nI0325 14:21:49.758780 3050 log.go:172] (0xc000518420) Data frame received for 3\nI0325 14:21:49.758811 3050 log.go:172] (0xc0007fe000) (3) Data frame handling\nI0325 14:21:49.758823 3050 log.go:172] (0xc0007fe000) (3) Data frame sent\nI0325 14:21:49.758833 3050 log.go:172] (0xc000518420) Data frame received for 3\nI0325 14:21:49.758850 3050 log.go:172] (0xc0007fe000) (3) Data frame handling\nI0325 14:21:49.758930 3050 log.go:172] (0xc000518420) Data frame received for 5\nI0325 14:21:49.758946 3050 log.go:172] (0xc0007fe0a0) (5) Data frame handling\nI0325 14:21:49.758957 3050 log.go:172] (0xc0007fe0a0) (5) Data frame sent\nI0325 14:21:49.758967 3050 log.go:172] (0xc000518420) Data frame received for 5\nI0325 14:21:49.758981 3050 log.go:172] (0xc0007fe0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0325 14:21:49.760056 3050 log.go:172] (0xc000518420) Data frame received for 1\nI0325 14:21:49.760068 3050 log.go:172] 
(0xc0004ae6e0) (1) Data frame handling\nI0325 14:21:49.760075 3050 log.go:172] (0xc0004ae6e0) (1) Data frame sent\nI0325 14:21:49.760480 3050 log.go:172] (0xc000518420) (0xc0004ae6e0) Stream removed, broadcasting: 1\nI0325 14:21:49.760775 3050 log.go:172] (0xc000518420) Go away received\nI0325 14:21:49.761067 3050 log.go:172] (0xc000518420) (0xc0004ae6e0) Stream removed, broadcasting: 1\nI0325 14:21:49.761088 3050 log.go:172] (0xc000518420) (0xc0007fe000) Stream removed, broadcasting: 3\nI0325 14:21:49.761103 3050 log.go:172] (0xc000518420) (0xc0007fe0a0) Stream removed, broadcasting: 5\n" Mar 25 14:21:49.765: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 25 14:21:49.765: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 25 14:21:49.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8212 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 25 14:21:49.977: INFO: stderr: "I0325 14:21:49.895231 3070 log.go:172] (0xc000116f20) (0xc00081c820) Create stream\nI0325 14:21:49.895294 3070 log.go:172] (0xc000116f20) (0xc00081c820) Stream added, broadcasting: 1\nI0325 14:21:49.899509 3070 log.go:172] (0xc000116f20) Reply frame received for 1\nI0325 14:21:49.899561 3070 log.go:172] (0xc000116f20) (0xc00081c000) Create stream\nI0325 14:21:49.899583 3070 log.go:172] (0xc000116f20) (0xc00081c000) Stream added, broadcasting: 3\nI0325 14:21:49.900482 3070 log.go:172] (0xc000116f20) Reply frame received for 3\nI0325 14:21:49.900515 3070 log.go:172] (0xc000116f20) (0xc00081c140) Create stream\nI0325 14:21:49.900524 3070 log.go:172] (0xc000116f20) (0xc00081c140) Stream added, broadcasting: 5\nI0325 14:21:49.901693 3070 log.go:172] (0xc000116f20) Reply frame received for 5\nI0325 14:21:49.971479 3070 log.go:172] (0xc000116f20) Data frame received for 5\nI0325 14:21:49.971539 3070 log.go:172] (0xc00081c140) (5) Data frame handling\nI0325 14:21:49.971557 3070 log.go:172] (0xc00081c140) (5) Data frame sent\nI0325 14:21:49.971581 3070 log.go:172] (0xc000116f20) Data frame received for 5\nI0325 14:21:49.971595 3070 log.go:172] (0xc00081c140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0325 14:21:49.971620 3070 log.go:172] (0xc000116f20) Data frame received for 3\nI0325 14:21:49.971646 3070 log.go:172] (0xc00081c000) (3) Data frame handling\nI0325 14:21:49.971675 3070 log.go:172] (0xc00081c000) (3) Data frame sent\nI0325 14:21:49.971689 3070 log.go:172] (0xc000116f20) Data frame received for 3\nI0325 14:21:49.971701 3070 log.go:172] (0xc00081c000) (3) Data frame handling\nI0325 14:21:49.973236 3070 log.go:172] (0xc000116f20) Data frame received for 1\nI0325 14:21:49.973262 3070 log.go:172] (0xc00081c820) (1) Data frame handling\nI0325 14:21:49.973272 3070 log.go:172] (0xc00081c820) (1) Data frame sent\nI0325 14:21:49.973286 3070 log.go:172] (0xc000116f20) (0xc00081c820) Stream removed, broadcasting: 1\nI0325 14:21:49.973301 3070 log.go:172] (0xc000116f20) Go away received\nI0325 14:21:49.973920 3070 log.go:172] (0xc000116f20) (0xc00081c820) Stream removed, broadcasting: 1\nI0325 14:21:49.973955 3070 log.go:172] (0xc000116f20) (0xc00081c000) Stream removed, broadcasting: 3\nI0325 14:21:49.973977 3070 log.go:172] (0xc000116f20) (0xc00081c140) Stream removed, broadcasting: 5\n" Mar 25 14:21:49.977: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 25 14:21:49.977: 
INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 25 14:21:49.977: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 25 14:22:19.994: INFO: Deleting all statefulset in ns statefulset-8212 Mar 25 14:22:19.996: INFO: Scaling statefulset ss to 0 Mar 25 14:22:20.004: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 14:22:20.006: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:22:20.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8212" for this suite. Mar 25 14:22:26.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:22:26.150: INFO: namespace statefulset-8212 deletion completed in 6.114661872s • [SLOW TEST:98.209 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:22:26.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-f9d75efe-baec-4237-a8fa-e53567632f19 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-f9d75efe-baec-4237-a8fa-e53567632f19 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:23:34.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8680" for this suite. 
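Two things in the specs above are easy to miss. The StatefulSet run uses the default OrderedReady pod management policy: ss-1 is only created after ss-0 reports Ready, deletion runs in reverse ordinal order, and an unready pod halts scaling in either direction; the repeated `mv index.html` execs exist purely to fail and then restore the pods' readiness probe without killing nginx. A hand-driven sketch, assuming a StatefulSet named ss whose readiness probe fetches index.html, as the log implies:

    # With OrderedReady, scale-up creates one pod at a time, each waiting for
    # the previous ordinal to report Ready.
    kubectl scale statefulset ss --replicas=3
    kubectl get pods -w                   # ss-1 only appears once ss-0 is Ready
    # Failing the readiness probe (hiding the file it serves) halts any further
    # scaling in either direction.
    kubectl exec ss-0 -- mv /usr/share/nginx/html/index.html /tmp/
    # Restoring readiness lets the controller resume; scale-down then deletes
    # pods in reverse ordinal order: ss-2, then ss-1, then ss-0.
    kubectl exec ss-0 -- mv /tmp/index.html /usr/share/nginx/html/
    kubectl scale statefulset ss --replicas=0

The ConfigMap spec that follows relies on the kubelet updating configMap volumes in place: the pod is never restarted, and the long "waiting to observe update in volume" step is just polling the mounted file until the new value propagates (bounded by the kubelet sync period plus its cache TTL). A sketch with illustrative names:

    kubectl create configmap cm-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-watch
    spec:
      containers:
      - name: busybox
        image: busybox
        args: ["/bin/sh", "-c", "while true; do cat /etc/cm/data-1; sleep 2; done"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: cm-demo
    EOF
    # Change the value and watch the mounted file follow, with no pod restart.
    kubectl create configmap cm-demo --from-literal=data-1=value-2 --dry-run -o yaml | kubectl apply -f -
    kubectl logs -f cm-watch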
Mar 25 14:23:56.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:23:56.634: INFO: namespace configmap-8680 deletion completed in 22.100745194s • [SLOW TEST:90.484 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:23:56.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-399a3550-59f1-4f47-8b28-afe17794f4c6 Mar 25 14:23:56.736: INFO: Pod name my-hostname-basic-399a3550-59f1-4f47-8b28-afe17794f4c6: Found 0 pods out of 1 Mar 25 14:24:01.740: INFO: Pod name my-hostname-basic-399a3550-59f1-4f47-8b28-afe17794f4c6: Found 1 pods out of 1 Mar 25 14:24:01.740: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-399a3550-59f1-4f47-8b28-afe17794f4c6" are running Mar 25 14:24:01.744: INFO: Pod "my-hostname-basic-399a3550-59f1-4f47-8b28-afe17794f4c6-dgvqf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-25 14:23:56 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-25 14:23:59 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-25 14:23:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-25 14:23:56 +0000 UTC Reason: Message:}]) Mar 25 14:24:01.744: INFO: Trying to dial the pod Mar 25 14:24:06.756: INFO: Controller my-hostname-basic-399a3550-59f1-4f47-8b28-afe17794f4c6: Got expected result from replica 1 [my-hostname-basic-399a3550-59f1-4f47-8b28-afe17794f4c6-dgvqf]: "my-hostname-basic-399a3550-59f1-4f47-8b28-afe17794f4c6-dgvqf", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:24:06.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-329" for this suite. 
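The ReplicationController spec above passes when every replica answers HTTP with its own pod name, which is what the "Trying to dial the pod" step checks through the API-server proxy. A rough equivalent, assuming a serve-hostname style image like the one the e2e suite uses (the image reference and names here are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: hostname-rc              # illustrative name
    spec:
      replicas: 1
      selector:
        app: hostname-rc
      template:
        metadata:
          labels:
            app: hostname-rc
        spec:
          containers:
          - name: serve-hostname
            # illustrative image: anything that answers HTTP with its hostname
            image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
            ports:
            - containerPort: 9376
    EOF
    kubectl get pods -l app=hostname-rc
    # Dial a replica the same way the test does, via the apiserver proxy:
    kubectl proxy &   # then GET /api/v1/namespaces/default/pods/<pod>:9376/proxy/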
Mar 25 14:24:12.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:24:12.878: INFO: namespace replication-controller-329 deletion completed in 6.118381663s • [SLOW TEST:16.244 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:24:12.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-eb8b1811-1569-454b-b9bf-402797bebdb1 STEP: Creating a pod to test consume configMaps Mar 25 14:24:12.973: INFO: Waiting up to 5m0s for pod "pod-configmaps-f8b75f5d-837a-48a0-a6a0-4e648e048295" in namespace "configmap-8516" to be "success or failure" Mar 25 14:24:12.989: INFO: Pod "pod-configmaps-f8b75f5d-837a-48a0-a6a0-4e648e048295": Phase="Pending", Reason="", readiness=false. Elapsed: 16.422546ms Mar 25 14:24:15.014: INFO: Pod "pod-configmaps-f8b75f5d-837a-48a0-a6a0-4e648e048295": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041315438s Mar 25 14:24:17.018: INFO: Pod "pod-configmaps-f8b75f5d-837a-48a0-a6a0-4e648e048295": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045692227s STEP: Saw pod success Mar 25 14:24:17.018: INFO: Pod "pod-configmaps-f8b75f5d-837a-48a0-a6a0-4e648e048295" satisfied condition "success or failure" Mar 25 14:24:17.022: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-f8b75f5d-837a-48a0-a6a0-4e648e048295 container configmap-volume-test: STEP: delete the pod Mar 25 14:24:17.075: INFO: Waiting for pod pod-configmaps-f8b75f5d-837a-48a0-a6a0-4e648e048295 to disappear Mar 25 14:24:17.103: INFO: Pod pod-configmaps-f8b75f5d-837a-48a0-a6a0-4e648e048295 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:24:17.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8516" for this suite. 
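"Consumable in multiple volumes in the same pod" simply means one ConfigMap can back several independent volume mounts; the test container prints the mounted files and exits, hence the Pending then Succeeded phases polled above. A minimal sketch with illustrative names:

    kubectl create configmap cm-multi --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-multi-pod
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        args: ["/bin/sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
        volumeMounts:
        - { name: vol-a, mountPath: /etc/cm-a }
        - { name: vol-b, mountPath: /etc/cm-b }
      volumes:
      - { name: vol-a, configMap: { name: cm-multi } }
      - { name: vol-b, configMap: { name: cm-multi } }
    EOF
    kubectl logs cm-multi-pod      # value-1 printed once per mount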
Mar 25 14:24:23.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:24:23.206: INFO: namespace configmap-8516 deletion completed in 6.098988733s • [SLOW TEST:10.327 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:24:23.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-05ec154c-b1e6-4f07-b97b-8c4dbc8fc3ad in namespace container-probe-555 Mar 25 14:24:27.305: INFO: Started pod busybox-05ec154c-b1e6-4f07-b97b-8c4dbc8fc3ad in namespace container-probe-555 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 14:24:27.308: INFO: Initial restart count of pod busybox-05ec154c-b1e6-4f07-b97b-8c4dbc8fc3ad is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:28:27.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-555" for this suite.
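This is the control case for the earlier liveness spec: the pod keeps /tmp/health in place for the whole 4-minute observation window, so the probe never fails and restartCount stays 0. The same sketch as before, minus the step that breaks the probe (timings assumed, not taken from the run):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-stays-up   # illustrative name
    spec:
      containers:
      - name: busybox
        image: busybox
        # The probe target is created once and never removed.
        args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    # RESTARTS should stay at 0 for as long as you care to watch.
    kubectl get pod liveness-exec-stays-up -w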
Mar 25 14:28:34.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:28:34.096: INFO: namespace container-probe-555 deletion completed in 6.107741028s • [SLOW TEST:250.890 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:28:34.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-fa0addff-810a-412e-9f9b-40a46c1dd843 STEP: Creating a pod to test consume secrets Mar 25 14:28:34.180: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e8f2530a-8cb4-4e5d-a686-a8bb67e2238f" in namespace "projected-7114" to be "success or failure" Mar 25 14:28:34.185: INFO: Pod "pod-projected-secrets-e8f2530a-8cb4-4e5d-a686-a8bb67e2238f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.930226ms Mar 25 14:28:36.209: INFO: Pod "pod-projected-secrets-e8f2530a-8cb4-4e5d-a686-a8bb67e2238f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028385362s Mar 25 14:28:38.213: INFO: Pod "pod-projected-secrets-e8f2530a-8cb4-4e5d-a686-a8bb67e2238f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032714004s STEP: Saw pod success Mar 25 14:28:38.213: INFO: Pod "pod-projected-secrets-e8f2530a-8cb4-4e5d-a686-a8bb67e2238f" satisfied condition "success or failure" Mar 25 14:28:38.216: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-e8f2530a-8cb4-4e5d-a686-a8bb67e2238f container projected-secret-volume-test: STEP: delete the pod Mar 25 14:28:38.234: INFO: Waiting for pod pod-projected-secrets-e8f2530a-8cb4-4e5d-a686-a8bb67e2238f to disappear Mar 25 14:28:38.239: INFO: Pod pod-projected-secrets-e8f2530a-8cb4-4e5d-a686-a8bb67e2238f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:28:38.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7114" for this suite.
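The defaultMode/fsGroup spec verifies that a projected secret volume honours file-mode and group-ownership settings when the pod runs as a non-root user. A sketch; the UID, GID and mode values here are assumptions for illustration, since the values the test actually asserts are not in the log:

    kubectl create secret generic projected-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-mode-pod
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000            # non-root
        fsGroup: 2000              # group ownership applied to volume files
      containers:
      - name: test
        image: busybox
        args: ["/bin/sh", "-c", "ls -ln /etc/projected"]
        volumeMounts:
        - name: secret-vol
          mountPath: /etc/projected
      volumes:
      - name: secret-vol
        projected:
          defaultMode: 0440        # -r--r----- on every projected file
          sources:
          - secret:
              name: projected-demo
    EOF
    kubectl logs projected-mode-pod   # expect mode 440 files with gid 2000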
Mar 25 14:28:44.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:28:44.355: INFO: namespace projected-7114 deletion completed in 6.112175539s • [SLOW TEST:10.259 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:28:44.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 25 14:28:49.467: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:28:50.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7724" for this suite. 
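Adoption and release in the ReplicaSet spec above are plain ownerReference bookkeeping: a bare pod whose labels match the ReplicaSet's selector gets an ownerReference added (adopted), and editing its labels out of the selector removes it again (released), at which point the ReplicaSet backfills a replacement. By hand, with illustrative names:

    # A bare pod carrying the label the ReplicaSet will select on.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: orphan
      labels:
        name: adoption-demo
    spec:
      containers:
      - name: web
        image: nginx
    EOF
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: adoption-demo
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: adoption-demo
      template:
        metadata:
          labels:
            name: adoption-demo
        spec:
          containers:
          - name: web
            image: nginx
    EOF
    # The bare pod now has an ownerReference pointing at the ReplicaSet.
    kubectl get pod orphan -o jsonpath='{.metadata.ownerReferences[0].name}'
    # Relabelling it out of the selector releases it; the RS spins up a replacement.
    kubectl label pod orphan name=released --overwrite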
Mar 25 14:29:12.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:29:12.600: INFO: namespace replicaset-7724 deletion completed in 22.094287771s • [SLOW TEST:28.244 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:29:12.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 25 14:29:17.208: INFO: Successfully updated pod "pod-update-bc7edfab-79f4-44e3-984c-682c269e4742" STEP: verifying the updated pod is in kubernetes Mar 25 14:29:17.219: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:29:17.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9593" for this suite. 
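"Should be updated" exercises in-place mutation of a live pod object. Most of a pod's spec is immutable after creation, so updates like this typically touch metadata such as labels or annotations; the log does not show which field the fixture changes, so the sketch below uses a label:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: update-demo            # illustrative name
      labels:
        time: original
    spec:
      containers:
      - name: web
        image: nginx
    EOF
    # Labels are among the few pod fields that may be changed in place.
    kubectl label pod update-demo time=updated --overwrite
    kubectl get pod update-demo --show-labels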
Mar 25 14:29:39.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:29:39.309: INFO: namespace pods-9593 deletion completed in 22.086945109s • [SLOW TEST:26.708 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:29:39.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 25 14:29:39.375: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:29:40.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4350" for this suite. 
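The CRD spec registers a throwaway CustomResourceDefinition directly against the apiserver and deletes it again; on a v1.15 cluster that goes through the apiextensions.k8s.io/v1beta1 API. A minimal definition of the same shape (the group and kind names are illustrative, not the randomized ones the test generates):

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com       # must be <plural>.<group>
    spec:
      group: example.com
      version: v1
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
    EOF
    kubectl get crd foos.example.com
    kubectl delete crd foos.example.com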
Mar 25 14:29:46.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:29:46.528: INFO: namespace custom-resource-definition-4350 deletion completed in 6.096869954s • [SLOW TEST:7.218 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:29:46.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-8ec931b7-10e9-45d7-a5b9-d56edb7cb66f STEP: Creating a pod to test consume secrets Mar 25 14:29:46.608: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-415be7dd-2555-4859-ab9c-e8f5e00681f6" in namespace "projected-9971" to be "success or failure" Mar 25 14:29:46.611: INFO: Pod "pod-projected-secrets-415be7dd-2555-4859-ab9c-e8f5e00681f6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.112425ms Mar 25 14:29:48.616: INFO: Pod "pod-projected-secrets-415be7dd-2555-4859-ab9c-e8f5e00681f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00732658s Mar 25 14:29:50.620: INFO: Pod "pod-projected-secrets-415be7dd-2555-4859-ab9c-e8f5e00681f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011358361s STEP: Saw pod success Mar 25 14:29:50.620: INFO: Pod "pod-projected-secrets-415be7dd-2555-4859-ab9c-e8f5e00681f6" satisfied condition "success or failure" Mar 25 14:29:50.623: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-415be7dd-2555-4859-ab9c-e8f5e00681f6 container projected-secret-volume-test: STEP: delete the pod Mar 25 14:29:50.659: INFO: Waiting for pod pod-projected-secrets-415be7dd-2555-4859-ab9c-e8f5e00681f6 to disappear Mar 25 14:29:50.684: INFO: Pod pod-projected-secrets-415be7dd-2555-4859-ab9c-e8f5e00681f6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:29:50.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9971" for this suite. 
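"Mappings and Item Mode" means individual secret keys are projected to chosen paths with per-item file modes, instead of using the key names and the volume-wide defaultMode. A sketch, with names and the mode value assumed for illustration:

    kubectl create secret generic map-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-map-pod
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        args: ["/bin/sh", "-c", "ls -lR /etc/projected && cat /etc/projected/new-path/data-1"]
        volumeMounts:
        - name: secret-vol
          mountPath: /etc/projected
      volumes:
      - name: secret-vol
        projected:
          sources:
          - secret:
              name: map-demo
              items:
              - key: data-1
                path: new-path/data-1   # remapped path
                mode: 0400              # per-item mode overrides defaultMode
    EOF
    kubectl logs projected-map-pod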
Mar 25 14:29:56.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:29:56.796: INFO: namespace projected-9971 deletion completed in 6.108906994s • [SLOW TEST:10.268 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:29:56.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-z9fv STEP: Creating a pod to test atomic-volume-subpath Mar 25 14:29:56.878: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-z9fv" in namespace "subpath-3679" to be "success or failure" Mar 25 14:29:56.882: INFO: Pod "pod-subpath-test-downwardapi-z9fv": Phase="Pending", Reason="", readiness=false. Elapsed: 3.321602ms Mar 25 14:29:58.885: INFO: Pod "pod-subpath-test-downwardapi-z9fv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006780963s Mar 25 14:30:00.898: INFO: Pod "pod-subpath-test-downwardapi-z9fv": Phase="Running", Reason="", readiness=true. Elapsed: 4.020174532s Mar 25 14:30:02.902: INFO: Pod "pod-subpath-test-downwardapi-z9fv": Phase="Running", Reason="", readiness=true. Elapsed: 6.024054645s Mar 25 14:30:04.906: INFO: Pod "pod-subpath-test-downwardapi-z9fv": Phase="Running", Reason="", readiness=true. Elapsed: 8.027802484s Mar 25 14:30:06.916: INFO: Pod "pod-subpath-test-downwardapi-z9fv": Phase="Running", Reason="", readiness=true. Elapsed: 10.037294303s Mar 25 14:30:08.919: INFO: Pod "pod-subpath-test-downwardapi-z9fv": Phase="Running", Reason="", readiness=true. Elapsed: 12.040975719s Mar 25 14:30:10.922: INFO: Pod "pod-subpath-test-downwardapi-z9fv": Phase="Running", Reason="", readiness=true. Elapsed: 14.044126869s Mar 25 14:30:12.927: INFO: Pod "pod-subpath-test-downwardapi-z9fv": Phase="Running", Reason="", readiness=true. Elapsed: 16.048394222s Mar 25 14:30:14.930: INFO: Pod "pod-subpath-test-downwardapi-z9fv": Phase="Running", Reason="", readiness=true. Elapsed: 18.051468503s Mar 25 14:30:16.941: INFO: Pod "pod-subpath-test-downwardapi-z9fv": Phase="Running", Reason="", readiness=true. Elapsed: 20.063096778s Mar 25 14:30:18.960: INFO: Pod "pod-subpath-test-downwardapi-z9fv": Phase="Running", Reason="", readiness=true. Elapsed: 22.082230633s Mar 25 14:30:20.964: INFO: Pod "pod-subpath-test-downwardapi-z9fv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.085852793s STEP: Saw pod success Mar 25 14:30:20.964: INFO: Pod "pod-subpath-test-downwardapi-z9fv" satisfied condition "success or failure" Mar 25 14:30:20.968: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-z9fv container test-container-subpath-downwardapi-z9fv: STEP: delete the pod Mar 25 14:30:21.008: INFO: Waiting for pod pod-subpath-test-downwardapi-z9fv to disappear Mar 25 14:30:21.013: INFO: Pod pod-subpath-test-downwardapi-z9fv no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-z9fv Mar 25 14:30:21.013: INFO: Deleting pod "pod-subpath-test-downwardapi-z9fv" in namespace "subpath-3679" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:30:21.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3679" for this suite. Mar 25 14:30:27.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:30:27.113: INFO: namespace subpath-3679 deletion completed in 6.094780473s • [SLOW TEST:30.316 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:30:27.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 25 14:30:27.189: INFO: Waiting up to 5m0s for pod "downward-api-00648433-3e87-4aff-a776-190840aee2c2" in namespace "downward-api-7239" to be "success or failure" Mar 25 14:30:27.204: INFO: Pod "downward-api-00648433-3e87-4aff-a776-190840aee2c2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.679583ms Mar 25 14:30:29.208: INFO: Pod "downward-api-00648433-3e87-4aff-a776-190840aee2c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018711464s Mar 25 14:30:31.213: INFO: Pod "downward-api-00648433-3e87-4aff-a776-190840aee2c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023130647s STEP: Saw pod success Mar 25 14:30:31.213: INFO: Pod "downward-api-00648433-3e87-4aff-a776-190840aee2c2" satisfied condition "success or failure" Mar 25 14:30:31.215: INFO: Trying to get logs from node iruya-worker pod downward-api-00648433-3e87-4aff-a776-190840aee2c2 container dapi-container: STEP: delete the pod Mar 25 14:30:31.243: INFO: Waiting for pod downward-api-00648433-3e87-4aff-a776-190840aee2c2 to disappear Mar 25 14:30:31.247: INFO: Pod downward-api-00648433-3e87-4aff-a776-190840aee2c2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:30:31.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7239" for this suite. Mar 25 14:30:37.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:30:37.344: INFO: namespace downward-api-7239 deletion completed in 6.094654068s • [SLOW TEST:10.231 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:30:37.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 14:30:37.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a406d1ff-de8e-4f24-ac94-ffe876514dbd" in namespace "projected-4204" to be "success or failure" Mar 25 14:30:37.414: INFO: Pod "downwardapi-volume-a406d1ff-de8e-4f24-ac94-ffe876514dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.398961ms Mar 25 14:30:39.418: INFO: Pod "downwardapi-volume-a406d1ff-de8e-4f24-ac94-ffe876514dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019234499s Mar 25 14:30:41.422: INFO: Pod "downwardapi-volume-a406d1ff-de8e-4f24-ac94-ffe876514dbd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023770319s STEP: Saw pod success Mar 25 14:30:41.422: INFO: Pod "downwardapi-volume-a406d1ff-de8e-4f24-ac94-ffe876514dbd" satisfied condition "success or failure" Mar 25 14:30:41.426: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a406d1ff-de8e-4f24-ac94-ffe876514dbd container client-container: STEP: delete the pod Mar 25 14:30:41.447: INFO: Waiting for pod downwardapi-volume-a406d1ff-de8e-4f24-ac94-ffe876514dbd to disappear Mar 25 14:30:41.451: INFO: Pod downwardapi-volume-a406d1ff-de8e-4f24-ac94-ffe876514dbd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:30:41.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4204" for this suite. Mar 25 14:30:47.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:30:47.539: INFO: namespace projected-4204 deletion completed in 6.085274601s • [SLOW TEST:10.195 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:30:47.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 25 14:30:51.652: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 25 14:30:56.758: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:30:56.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9169" for this suite. 
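Before the teardown above completes, the Delete Grace Period flow is worth spelling out: the test submits a pod, deletes it with a grace period, and treats the pod's disappearance as evidence that the kubelet observed the termination notice. A minimal sketch of that deletion, assuming a current client-go (the v1.15 suite shown here drives it through kubectl proxy instead); the pod name is illustrative:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Delete with an explicit grace period: the pod enters Terminating,
	// the kubelet receives the termination notice, and the API object is
	// removed once the containers stop (or the grace period expires).
	grace := int64(30)
	err = client.CoreV1().Pods("pods-9169").Delete(
		context.TODO(),
		"pod-submit-remove", // illustrative name; the suite generates its own
		metav1.DeleteOptions{GracePeriodSeconds: &grace},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("delete accepted; pod is now Terminating")
}
```

A zero grace period would force immediate removal from the API; the conformance case deliberately uses a non-zero period so the Terminating window is observable.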
Mar 25 14:31:02.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:31:02.850: INFO: namespace pods-9169 deletion completed in 6.084096693s • [SLOW TEST:15.311 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:31:02.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:31:06.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2830" for this suite. 
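The Kubelet test just torn down logs no intermediate STEP lines because the whole assertion lives in the pod spec: run a busybox command that writes to stdout, then verify the text comes back through the log endpoint. A rough equivalent, assuming a current client-go and illustrative names:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-echo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo hello from the busybox pod"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// In a real run you would poll until the pod terminates before reading.
	// Once the container has run, its stdout is served by the log API,
	// which is what the Kubelet test asserts on.
	raw, err := client.CoreV1().Pods("default").GetLogs("busybox-echo", &corev1.PodLogOptions{}).Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", raw)
}
```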
Mar 25 14:31:56.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:31:57.060: INFO: namespace kubelet-test-2830 deletion completed in 50.087932413s • [SLOW TEST:54.209 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:31:57.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 25 14:31:57.149: INFO: Waiting up to 5m0s for pod "pod-51156f3c-0334-4d52-bdbd-1fbdd33cb758" in namespace "emptydir-9543" to be "success or failure" Mar 25 14:31:57.165: INFO: Pod "pod-51156f3c-0334-4d52-bdbd-1fbdd33cb758": Phase="Pending", Reason="", readiness=false. Elapsed: 16.176535ms Mar 25 14:31:59.170: INFO: Pod "pod-51156f3c-0334-4d52-bdbd-1fbdd33cb758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020378134s Mar 25 14:32:01.174: INFO: Pod "pod-51156f3c-0334-4d52-bdbd-1fbdd33cb758": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02475549s STEP: Saw pod success Mar 25 14:32:01.174: INFO: Pod "pod-51156f3c-0334-4d52-bdbd-1fbdd33cb758" satisfied condition "success or failure" Mar 25 14:32:01.177: INFO: Trying to get logs from node iruya-worker pod pod-51156f3c-0334-4d52-bdbd-1fbdd33cb758 container test-container: STEP: delete the pod Mar 25 14:32:01.196: INFO: Waiting for pod pod-51156f3c-0334-4d52-bdbd-1fbdd33cb758 to disappear Mar 25 14:32:01.212: INFO: Pod pod-51156f3c-0334-4d52-bdbd-1fbdd33cb758 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:32:01.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9543" for this suite. 
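The (root,0644,tmpfs) case finishing here is one cell in a matrix of emptyDir permission tests: user (root or non-root) by file mode by storage medium. A sketch of the tmpfs cell, using busybox rather than the mounttest image the suite actually runs:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-check"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs, which is
					// what the (root,0644,tmpfs) variant exercises.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "writer",
				Image: "busybox",
				// Create a file as root with mode 0644 and print its
				// permissions plus the mount type, mirroring what the
				// conformance test verifies from the container logs.
				Command:      []string{"sh", "-c", "touch /scratch/f && chmod 0644 /scratch/f && ls -l /scratch/f && mount | grep /scratch"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

The same pattern, minus the chmod, covers the "volume on tmpfs should have the correct mode" variant that appears later in this run.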
Mar 25 14:32:07.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:32:07.310: INFO: namespace emptydir-9543 deletion completed in 6.093838651s • [SLOW TEST:10.249 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:32:07.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7787 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7787 STEP: Creating statefulset with conflicting port in namespace statefulset-7787 STEP: Waiting until pod test-pod starts running in namespace statefulset-7787 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-7787 Mar 25 14:32:11.470: INFO: Observed stateful pod in namespace: statefulset-7787, name: ss-0, uid: b9fd59f1-0c52-4400-8b73-f24ff2e241d7, status phase: Pending. Waiting for statefulset controller to delete. Mar 25 14:32:12.026: INFO: Observed stateful pod in namespace: statefulset-7787, name: ss-0, uid: b9fd59f1-0c52-4400-8b73-f24ff2e241d7, status phase: Failed. Waiting for statefulset controller to delete. Mar 25 14:32:12.042: INFO: Observed stateful pod in namespace: statefulset-7787, name: ss-0, uid: b9fd59f1-0c52-4400-8b73-f24ff2e241d7, status phase: Failed. Waiting for statefulset controller to delete.
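The Pending-then-Failed loop being observed at this point is engineered through a host-port conflict: a standalone pod already binds the port on the node, so each incarnation of ss-0 is rejected by the kubelet (phase Failed) and the StatefulSet controller must delete it and try again. A rough sketch of the conflicting template, with the port and node name as illustrative stand-ins:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	labels := map[string]string{"app": "ss-conflict"}
	replicas := int32(1)

	// The standalone pod (not shown) and this template both claim the same
	// hostPort on the same node, so ss-0 keeps failing until the
	// conflicting pod is removed; the controller keeps recreating it.
	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeName: "iruya-worker", // pin to the node holding the port
					Containers: []corev1.Container{{
						Name:  "web",
						Image: "nginx",
						Ports: []corev1.ContainerPort{{ContainerPort: 80, HostPort: 21017}},
					}},
				},
			},
		},
	}
	out, err := yaml.Marshal(ss)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```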
Mar 25 14:32:12.047: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7787 STEP: Removing pod with conflicting port in namespace statefulset-7787 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-7787 and reaches the Running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 25 14:32:16.131: INFO: Deleting all statefulsets in ns statefulset-7787 Mar 25 14:32:16.134: INFO: Scaling statefulset ss to 0 Mar 25 14:32:26.160: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 14:32:26.163: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:32:26.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7787" for this suite. Mar 25 14:32:32.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:32:32.291: INFO: namespace statefulset-7787 deletion completed in 6.114919319s • [SLOW TEST:24.981 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:32:32.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:32:38.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8837" for this suite. Mar 25 14:32:44.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:32:44.643: INFO: namespace namespaces-8837 deletion completed in 6.097462942s STEP: Destroying namespace "nsdeletetest-662" for this suite. Mar 25 14:32:44.646: INFO: Namespace nsdeletetest-662 was already deleted STEP: Destroying namespace "nsdeletetest-1908" for this suite.
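The Namespaces test wrapping up here asserts cascading deletion: removing a namespace must also remove the services inside it, and a recreated namespace with the same basename must start empty. A compact sketch of that sequence, assuming a current client-go:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Create a throwaway namespace with a service in it.
	ns, err := client.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: "nsdeletetest-"}},
		metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec:       corev1.ServiceSpec{Ports: []corev1.ServicePort{{Port: 80}}},
	}
	if _, err := client.CoreV1().Services(ns.Name).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Deleting the namespace must garbage-collect everything inside it,
	// including the service, which is the property the test asserts.
	if err := client.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("namespace delete issued; contained services will be removed")
}
```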
Mar 25 14:32:50.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:32:50.732: INFO: namespace nsdeletetest-1908 deletion completed in 6.085326443s • [SLOW TEST:18.440 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:32:50.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-xpwk STEP: Creating a pod to test atomic-volume-subpath Mar 25 14:32:50.814: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xpwk" in namespace "subpath-8210" to be "success or failure" Mar 25 14:32:50.833: INFO: Pod "pod-subpath-test-configmap-xpwk": Phase="Pending", Reason="", readiness=false. Elapsed: 19.159491ms Mar 25 14:32:52.837: INFO: Pod "pod-subpath-test-configmap-xpwk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0229502s Mar 25 14:32:54.841: INFO: Pod "pod-subpath-test-configmap-xpwk": Phase="Running", Reason="", readiness=true. Elapsed: 4.027127828s Mar 25 14:32:56.846: INFO: Pod "pod-subpath-test-configmap-xpwk": Phase="Running", Reason="", readiness=true. Elapsed: 6.031658257s Mar 25 14:32:58.850: INFO: Pod "pod-subpath-test-configmap-xpwk": Phase="Running", Reason="", readiness=true. Elapsed: 8.036041219s Mar 25 14:33:00.855: INFO: Pod "pod-subpath-test-configmap-xpwk": Phase="Running", Reason="", readiness=true. Elapsed: 10.040504428s Mar 25 14:33:02.859: INFO: Pod "pod-subpath-test-configmap-xpwk": Phase="Running", Reason="", readiness=true. Elapsed: 12.044740373s Mar 25 14:33:04.863: INFO: Pod "pod-subpath-test-configmap-xpwk": Phase="Running", Reason="", readiness=true. Elapsed: 14.049128052s Mar 25 14:33:06.867: INFO: Pod "pod-subpath-test-configmap-xpwk": Phase="Running", Reason="", readiness=true. Elapsed: 16.053439687s Mar 25 14:33:08.872: INFO: Pod "pod-subpath-test-configmap-xpwk": Phase="Running", Reason="", readiness=true. Elapsed: 18.058107642s Mar 25 14:33:10.877: INFO: Pod "pod-subpath-test-configmap-xpwk": Phase="Running", Reason="", readiness=true. Elapsed: 20.062803235s Mar 25 14:33:12.881: INFO: Pod "pod-subpath-test-configmap-xpwk": Phase="Running", Reason="", readiness=true. Elapsed: 22.067020671s Mar 25 14:33:14.885: INFO: Pod "pod-subpath-test-configmap-xpwk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.070777231s STEP: Saw pod success Mar 25 14:33:14.885: INFO: Pod "pod-subpath-test-configmap-xpwk" satisfied condition "success or failure" Mar 25 14:33:14.888: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-xpwk container test-container-subpath-configmap-xpwk: STEP: delete the pod Mar 25 14:33:14.905: INFO: Waiting for pod pod-subpath-test-configmap-xpwk to disappear Mar 25 14:33:14.910: INFO: Pod pod-subpath-test-configmap-xpwk no longer exists STEP: Deleting pod pod-subpath-test-configmap-xpwk Mar 25 14:33:14.910: INFO: Deleting pod "pod-subpath-test-configmap-xpwk" in namespace "subpath-8210" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:33:14.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8210" for this suite. Mar 25 14:33:20.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:33:21.002: INFO: namespace subpath-8210 deletion completed in 6.086784614s • [SLOW TEST:30.270 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:33:21.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-4097 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4097 to expose endpoints map[] Mar 25 14:33:21.089: INFO: successfully validated that service multi-endpoint-test in namespace services-4097 exposes endpoints map[] (10.77784ms elapsed) STEP: Creating pod pod1 in namespace services-4097 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4097 to expose endpoints map[pod1:[100]] Mar 25 14:33:24.146: INFO: successfully validated that service multi-endpoint-test in namespace services-4097 exposes endpoints map[pod1:[100]] (3.050211946s elapsed) STEP: Creating pod pod2 in namespace services-4097 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4097 to expose endpoints map[pod1:[100] pod2:[101]] Mar 25 14:33:27.242: INFO: successfully validated that service multi-endpoint-test in namespace services-4097 exposes endpoints map[pod1:[100] pod2:[101]] (3.090738293s elapsed) STEP: Deleting pod pod1 in namespace 
services-4097 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4097 to expose endpoints map[pod2:[101]] Mar 25 14:33:28.302: INFO: successfully validated that service multi-endpoint-test in namespace services-4097 exposes endpoints map[pod2:[101]] (1.055735357s elapsed) STEP: Deleting pod pod2 in namespace services-4097 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4097 to expose endpoints map[] Mar 25 14:33:29.344: INFO: successfully validated that service multi-endpoint-test in namespace services-4097 exposes endpoints map[] (1.037060793s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:33:29.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4097" for this suite. Mar 25 14:33:51.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:33:51.520: INFO: namespace services-4097 deletion completed in 22.123892508s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:30.518 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:33:51.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-4122 I0325 14:33:51.596588 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4122, replica count: 1 I0325 14:33:52.647129 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 14:33:53.647348 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 14:33:54.647623 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 14:33:54.778: INFO: Created: latency-svc-qgcgf Mar 25 14:33:54.783: INFO: Got endpoints: latency-svc-qgcgf [35.849375ms] Mar 25 14:33:54.833: INFO: Created: latency-svc-zrp2z Mar 25 14:33:54.843: INFO: Got endpoints: latency-svc-zrp2z [59.634492ms] Mar 25 14:33:54.864: INFO: Created: latency-svc-5w5gd Mar 25 14:33:54.880: INFO: Got endpoints: latency-svc-5w5gd [96.030074ms] Mar 25 14:33:54.900: INFO: Created: latency-svc-x7gkd Mar 25 14:33:54.986: INFO: Got 
endpoints: latency-svc-x7gkd [201.25822ms] Mar 25 14:33:54.990: INFO: Created: latency-svc-q4lsx Mar 25 14:33:54.998: INFO: Got endpoints: latency-svc-q4lsx [213.460208ms] Mar 25 14:33:55.018: INFO: Created: latency-svc-bgs5z Mar 25 14:33:55.031: INFO: Got endpoints: latency-svc-bgs5z [246.156698ms] Mar 25 14:33:55.055: INFO: Created: latency-svc-m6sxv Mar 25 14:33:55.067: INFO: Got endpoints: latency-svc-m6sxv [282.565257ms] Mar 25 14:33:55.117: INFO: Created: latency-svc-r6m6p Mar 25 14:33:55.120: INFO: Got endpoints: latency-svc-r6m6p [335.428384ms] Mar 25 14:33:55.146: INFO: Created: latency-svc-gmswm Mar 25 14:33:55.158: INFO: Got endpoints: latency-svc-gmswm [372.380041ms] Mar 25 14:33:55.184: INFO: Created: latency-svc-9dgv4 Mar 25 14:33:55.206: INFO: Got endpoints: latency-svc-9dgv4 [420.704317ms] Mar 25 14:33:55.262: INFO: Created: latency-svc-wg8jw Mar 25 14:33:55.275: INFO: Got endpoints: latency-svc-wg8jw [490.210948ms] Mar 25 14:33:55.330: INFO: Created: latency-svc-dkr68 Mar 25 14:33:55.435: INFO: Got endpoints: latency-svc-dkr68 [649.515458ms] Mar 25 14:33:55.467: INFO: Created: latency-svc-9whk4 Mar 25 14:33:55.483: INFO: Got endpoints: latency-svc-9whk4 [697.459184ms] Mar 25 14:33:55.503: INFO: Created: latency-svc-98qvk Mar 25 14:33:55.519: INFO: Got endpoints: latency-svc-98qvk [733.074568ms] Mar 25 14:33:55.573: INFO: Created: latency-svc-hvfmp Mar 25 14:33:55.597: INFO: Created: latency-svc-hsw5c Mar 25 14:33:55.597: INFO: Got endpoints: latency-svc-hvfmp [810.935323ms] Mar 25 14:33:55.623: INFO: Got endpoints: latency-svc-hsw5c [837.075901ms] Mar 25 14:33:55.666: INFO: Created: latency-svc-rwp87 Mar 25 14:33:55.728: INFO: Got endpoints: latency-svc-rwp87 [884.615517ms] Mar 25 14:33:55.735: INFO: Created: latency-svc-r9mvh Mar 25 14:33:55.747: INFO: Got endpoints: latency-svc-r9mvh [867.066365ms] Mar 25 14:33:55.770: INFO: Created: latency-svc-5x4tl Mar 25 14:33:55.784: INFO: Got endpoints: latency-svc-5x4tl [55.853202ms] Mar 25 14:33:55.806: INFO: Created: latency-svc-vs7q8 Mar 25 14:33:55.814: INFO: Got endpoints: latency-svc-vs7q8 [828.483402ms] Mar 25 14:33:55.878: INFO: Created: latency-svc-z7kpz Mar 25 14:33:55.899: INFO: Created: latency-svc-8qpjr Mar 25 14:33:55.899: INFO: Got endpoints: latency-svc-z7kpz [901.689065ms] Mar 25 14:33:55.923: INFO: Got endpoints: latency-svc-8qpjr [892.601879ms] Mar 25 14:33:55.954: INFO: Created: latency-svc-2qb7q Mar 25 14:33:55.965: INFO: Got endpoints: latency-svc-2qb7q [897.618656ms] Mar 25 14:33:56.016: INFO: Created: latency-svc-gv97w Mar 25 14:33:56.019: INFO: Got endpoints: latency-svc-gv97w [898.85041ms] Mar 25 14:33:56.046: INFO: Created: latency-svc-sd5vj Mar 25 14:33:56.056: INFO: Got endpoints: latency-svc-sd5vj [897.946365ms] Mar 25 14:33:56.076: INFO: Created: latency-svc-jf6qn Mar 25 14:33:56.086: INFO: Got endpoints: latency-svc-jf6qn [880.217679ms] Mar 25 14:33:56.109: INFO: Created: latency-svc-2lzr4 Mar 25 14:33:56.165: INFO: Got endpoints: latency-svc-2lzr4 [889.463155ms] Mar 25 14:33:56.167: INFO: Created: latency-svc-sdxbd Mar 25 14:33:56.176: INFO: Got endpoints: latency-svc-sdxbd [741.381393ms] Mar 25 14:33:56.199: INFO: Created: latency-svc-srmkx Mar 25 14:33:56.213: INFO: Got endpoints: latency-svc-srmkx [730.083985ms] Mar 25 14:33:56.232: INFO: Created: latency-svc-6px2g Mar 25 14:33:56.250: INFO: Got endpoints: latency-svc-6px2g [730.739545ms] Mar 25 14:33:56.303: INFO: Created: latency-svc-mjkqt Mar 25 14:33:56.307: INFO: Got endpoints: latency-svc-mjkqt [709.845509ms] Mar 25 14:33:56.331: INFO: 
Created: latency-svc-x78nz Mar 25 14:33:56.346: INFO: Got endpoints: latency-svc-x78nz [722.632217ms] Mar 25 14:33:56.373: INFO: Created: latency-svc-kjjrw Mar 25 14:33:56.388: INFO: Got endpoints: latency-svc-kjjrw [640.65961ms] Mar 25 14:33:56.453: INFO: Created: latency-svc-wsqmt Mar 25 14:33:56.477: INFO: Got endpoints: latency-svc-wsqmt [693.503135ms] Mar 25 14:33:56.478: INFO: Created: latency-svc-xxxnb Mar 25 14:33:56.490: INFO: Got endpoints: latency-svc-xxxnb [676.137648ms] Mar 25 14:33:56.514: INFO: Created: latency-svc-fbh75 Mar 25 14:33:56.526: INFO: Got endpoints: latency-svc-fbh75 [627.109324ms] Mar 25 14:33:56.550: INFO: Created: latency-svc-jmmz9 Mar 25 14:33:56.596: INFO: Got endpoints: latency-svc-jmmz9 [672.566886ms] Mar 25 14:33:56.607: INFO: Created: latency-svc-6rlxs Mar 25 14:33:56.623: INFO: Got endpoints: latency-svc-6rlxs [657.993157ms] Mar 25 14:33:56.652: INFO: Created: latency-svc-qhwvh Mar 25 14:33:56.666: INFO: Got endpoints: latency-svc-qhwvh [646.996779ms] Mar 25 14:33:56.688: INFO: Created: latency-svc-vbbqr Mar 25 14:33:56.752: INFO: Got endpoints: latency-svc-vbbqr [696.278063ms] Mar 25 14:33:56.754: INFO: Created: latency-svc-gk7tt Mar 25 14:33:56.762: INFO: Got endpoints: latency-svc-gk7tt [675.565654ms] Mar 25 14:33:56.787: INFO: Created: latency-svc-dlhzk Mar 25 14:33:56.804: INFO: Got endpoints: latency-svc-dlhzk [639.268991ms] Mar 25 14:33:56.823: INFO: Created: latency-svc-qc5q7 Mar 25 14:33:56.841: INFO: Got endpoints: latency-svc-qc5q7 [664.498074ms] Mar 25 14:33:56.902: INFO: Created: latency-svc-kkq8v Mar 25 14:33:56.905: INFO: Got endpoints: latency-svc-kkq8v [692.081881ms] Mar 25 14:33:56.952: INFO: Created: latency-svc-s56qx Mar 25 14:33:56.961: INFO: Got endpoints: latency-svc-s56qx [711.531114ms] Mar 25 14:33:56.981: INFO: Created: latency-svc-l25nc Mar 25 14:33:56.991: INFO: Got endpoints: latency-svc-l25nc [684.37096ms] Mar 25 14:33:57.046: INFO: Created: latency-svc-v7qdz Mar 25 14:33:57.049: INFO: Got endpoints: latency-svc-v7qdz [703.324739ms] Mar 25 14:33:57.075: INFO: Created: latency-svc-qxthp Mar 25 14:33:57.088: INFO: Got endpoints: latency-svc-qxthp [700.006958ms] Mar 25 14:33:57.106: INFO: Created: latency-svc-59d65 Mar 25 14:33:57.118: INFO: Got endpoints: latency-svc-59d65 [640.197498ms] Mar 25 14:33:57.137: INFO: Created: latency-svc-c8h2p Mar 25 14:33:57.195: INFO: Got endpoints: latency-svc-c8h2p [704.544651ms] Mar 25 14:33:57.198: INFO: Created: latency-svc-lp2d5 Mar 25 14:33:57.202: INFO: Got endpoints: latency-svc-lp2d5 [675.878364ms] Mar 25 14:33:57.226: INFO: Created: latency-svc-xhl2d Mar 25 14:33:57.239: INFO: Got endpoints: latency-svc-xhl2d [643.237087ms] Mar 25 14:33:57.261: INFO: Created: latency-svc-gwxtv Mar 25 14:33:57.275: INFO: Got endpoints: latency-svc-gwxtv [652.057078ms] Mar 25 14:33:57.369: INFO: Created: latency-svc-87lm8 Mar 25 14:33:57.371: INFO: Got endpoints: latency-svc-87lm8 [705.190996ms] Mar 25 14:33:57.408: INFO: Created: latency-svc-tx49z Mar 25 14:33:57.420: INFO: Got endpoints: latency-svc-tx49z [667.991822ms] Mar 25 14:33:57.443: INFO: Created: latency-svc-m9g6m Mar 25 14:33:57.456: INFO: Got endpoints: latency-svc-m9g6m [694.657701ms] Mar 25 14:33:57.514: INFO: Created: latency-svc-t2v4c Mar 25 14:33:57.516: INFO: Got endpoints: latency-svc-t2v4c [711.119711ms] Mar 25 14:33:57.555: INFO: Created: latency-svc-hx7w4 Mar 25 14:33:57.578: INFO: Got endpoints: latency-svc-hx7w4 [737.303501ms] Mar 25 14:33:57.650: INFO: Created: latency-svc-4n2z4 Mar 25 14:33:57.654: INFO: Got endpoints: 
latency-svc-4n2z4 [748.587852ms] Mar 25 14:33:57.702: INFO: Created: latency-svc-z7xj2 Mar 25 14:33:57.740: INFO: Got endpoints: latency-svc-z7xj2 [779.148044ms] Mar 25 14:33:57.797: INFO: Created: latency-svc-hdng9 Mar 25 14:33:57.801: INFO: Got endpoints: latency-svc-hdng9 [809.732242ms] Mar 25 14:33:57.827: INFO: Created: latency-svc-2nhs7 Mar 25 14:33:57.843: INFO: Got endpoints: latency-svc-2nhs7 [793.919886ms] Mar 25 14:33:57.863: INFO: Created: latency-svc-ttvl6 Mar 25 14:33:57.879: INFO: Got endpoints: latency-svc-ttvl6 [790.952984ms] Mar 25 14:33:57.968: INFO: Created: latency-svc-98sh9 Mar 25 14:33:57.986: INFO: Got endpoints: latency-svc-98sh9 [868.072716ms] Mar 25 14:33:58.017: INFO: Created: latency-svc-n8wfn Mar 25 14:33:58.030: INFO: Got endpoints: latency-svc-n8wfn [834.549405ms] Mar 25 14:33:58.048: INFO: Created: latency-svc-f7rck Mar 25 14:33:58.060: INFO: Got endpoints: latency-svc-f7rck [857.452193ms] Mar 25 14:33:58.142: INFO: Created: latency-svc-82q2l Mar 25 14:33:58.144: INFO: Got endpoints: latency-svc-82q2l [905.144776ms] Mar 25 14:33:58.176: INFO: Created: latency-svc-c2p7d Mar 25 14:33:58.193: INFO: Got endpoints: latency-svc-c2p7d [917.795072ms] Mar 25 14:33:58.233: INFO: Created: latency-svc-nhfz8 Mar 25 14:33:58.291: INFO: Got endpoints: latency-svc-nhfz8 [919.251486ms] Mar 25 14:33:58.292: INFO: Created: latency-svc-l5r57 Mar 25 14:33:58.295: INFO: Got endpoints: latency-svc-l5r57 [874.679091ms] Mar 25 14:33:58.329: INFO: Created: latency-svc-vlsgz Mar 25 14:33:58.349: INFO: Got endpoints: latency-svc-vlsgz [892.479632ms] Mar 25 14:33:58.379: INFO: Created: latency-svc-w56qd Mar 25 14:33:58.452: INFO: Got endpoints: latency-svc-w56qd [936.811067ms] Mar 25 14:33:58.455: INFO: Created: latency-svc-mm9wp Mar 25 14:33:58.459: INFO: Got endpoints: latency-svc-mm9wp [880.521406ms] Mar 25 14:33:58.491: INFO: Created: latency-svc-6nr65 Mar 25 14:33:58.500: INFO: Got endpoints: latency-svc-6nr65 [845.922085ms] Mar 25 14:33:58.520: INFO: Created: latency-svc-8qvwr Mar 25 14:33:58.541: INFO: Got endpoints: latency-svc-8qvwr [800.521262ms] Mar 25 14:33:58.609: INFO: Created: latency-svc-xl7k6 Mar 25 14:33:58.616: INFO: Got endpoints: latency-svc-xl7k6 [814.704915ms] Mar 25 14:33:58.655: INFO: Created: latency-svc-bfwpl Mar 25 14:33:58.670: INFO: Got endpoints: latency-svc-bfwpl [826.929638ms] Mar 25 14:33:58.695: INFO: Created: latency-svc-n6flc Mar 25 14:33:58.740: INFO: Got endpoints: latency-svc-n6flc [861.023946ms] Mar 25 14:33:58.748: INFO: Created: latency-svc-6czkc Mar 25 14:33:58.765: INFO: Got endpoints: latency-svc-6czkc [779.229572ms] Mar 25 14:33:58.793: INFO: Created: latency-svc-qc4cb Mar 25 14:33:58.808: INFO: Got endpoints: latency-svc-qc4cb [778.070269ms] Mar 25 14:33:58.926: INFO: Created: latency-svc-5z27f Mar 25 14:33:58.934: INFO: Got endpoints: latency-svc-5z27f [873.872953ms] Mar 25 14:33:58.956: INFO: Created: latency-svc-gxsbj Mar 25 14:33:58.970: INFO: Got endpoints: latency-svc-gxsbj [825.724384ms] Mar 25 14:33:58.991: INFO: Created: latency-svc-8cbcg Mar 25 14:33:59.006: INFO: Got endpoints: latency-svc-8cbcg [813.20797ms] Mar 25 14:33:59.025: INFO: Created: latency-svc-hv6mb Mar 25 14:33:59.063: INFO: Got endpoints: latency-svc-hv6mb [772.420325ms] Mar 25 14:33:59.067: INFO: Created: latency-svc-xg2lw Mar 25 14:33:59.094: INFO: Got endpoints: latency-svc-xg2lw [798.846751ms] Mar 25 14:33:59.112: INFO: Created: latency-svc-x4fcz Mar 25 14:33:59.127: INFO: Got endpoints: latency-svc-x4fcz [778.401081ms] Mar 25 14:33:59.148: INFO: Created: 
latency-svc-wgpft Mar 25 14:33:59.195: INFO: Got endpoints: latency-svc-wgpft [742.72866ms] Mar 25 14:33:59.223: INFO: Created: latency-svc-kgwdh Mar 25 14:33:59.230: INFO: Got endpoints: latency-svc-kgwdh [770.922827ms] Mar 25 14:33:59.253: INFO: Created: latency-svc-mx9sc Mar 25 14:33:59.289: INFO: Got endpoints: latency-svc-mx9sc [788.731288ms] Mar 25 14:33:59.346: INFO: Created: latency-svc-l6w9l Mar 25 14:33:59.350: INFO: Got endpoints: latency-svc-l6w9l [809.231398ms] Mar 25 14:33:59.387: INFO: Created: latency-svc-4zxfq Mar 25 14:33:59.399: INFO: Got endpoints: latency-svc-4zxfq [782.769194ms] Mar 25 14:33:59.415: INFO: Created: latency-svc-x6tdp Mar 25 14:33:59.429: INFO: Got endpoints: latency-svc-x6tdp [758.725329ms] Mar 25 14:33:59.478: INFO: Created: latency-svc-wspgk Mar 25 14:33:59.489: INFO: Got endpoints: latency-svc-wspgk [748.448674ms] Mar 25 14:33:59.538: INFO: Created: latency-svc-ljk7h Mar 25 14:33:59.549: INFO: Got endpoints: latency-svc-ljk7h [783.787217ms] Mar 25 14:33:59.564: INFO: Created: latency-svc-h7t97 Mar 25 14:33:59.626: INFO: Got endpoints: latency-svc-h7t97 [818.297904ms] Mar 25 14:33:59.628: INFO: Created: latency-svc-rkm5q Mar 25 14:33:59.652: INFO: Got endpoints: latency-svc-rkm5q [718.540843ms] Mar 25 14:33:59.679: INFO: Created: latency-svc-xj82z Mar 25 14:33:59.688: INFO: Got endpoints: latency-svc-xj82z [717.938426ms] Mar 25 14:33:59.712: INFO: Created: latency-svc-pjgnl Mar 25 14:33:59.758: INFO: Got endpoints: latency-svc-pjgnl [751.626345ms] Mar 25 14:33:59.771: INFO: Created: latency-svc-zxl8v Mar 25 14:33:59.785: INFO: Got endpoints: latency-svc-zxl8v [721.580731ms] Mar 25 14:33:59.805: INFO: Created: latency-svc-b4q98 Mar 25 14:33:59.822: INFO: Got endpoints: latency-svc-b4q98 [727.895318ms] Mar 25 14:33:59.841: INFO: Created: latency-svc-bztrr Mar 25 14:33:59.852: INFO: Got endpoints: latency-svc-bztrr [724.238178ms] Mar 25 14:33:59.903: INFO: Created: latency-svc-g42fp Mar 25 14:33:59.906: INFO: Got endpoints: latency-svc-g42fp [710.446633ms] Mar 25 14:33:59.933: INFO: Created: latency-svc-fkxm9 Mar 25 14:33:59.948: INFO: Got endpoints: latency-svc-fkxm9 [718.313294ms] Mar 25 14:33:59.969: INFO: Created: latency-svc-nhngb Mar 25 14:33:59.985: INFO: Got endpoints: latency-svc-nhngb [696.156136ms] Mar 25 14:34:00.046: INFO: Created: latency-svc-r4zdd Mar 25 14:34:00.068: INFO: Got endpoints: latency-svc-r4zdd [717.747721ms] Mar 25 14:34:00.068: INFO: Created: latency-svc-d76c5 Mar 25 14:34:00.082: INFO: Got endpoints: latency-svc-d76c5 [682.968302ms] Mar 25 14:34:00.104: INFO: Created: latency-svc-fvflb Mar 25 14:34:00.117: INFO: Got endpoints: latency-svc-fvflb [688.585449ms] Mar 25 14:34:00.143: INFO: Created: latency-svc-25qwg Mar 25 14:34:00.201: INFO: Got endpoints: latency-svc-25qwg [712.149957ms] Mar 25 14:34:00.221: INFO: Created: latency-svc-2vpvz Mar 25 14:34:00.238: INFO: Got endpoints: latency-svc-2vpvz [688.517084ms] Mar 25 14:34:00.255: INFO: Created: latency-svc-ffkx5 Mar 25 14:34:00.268: INFO: Got endpoints: latency-svc-ffkx5 [641.94089ms] Mar 25 14:34:00.291: INFO: Created: latency-svc-w27m2 Mar 25 14:34:00.333: INFO: Got endpoints: latency-svc-w27m2 [680.38549ms] Mar 25 14:34:00.344: INFO: Created: latency-svc-4r96q Mar 25 14:34:00.372: INFO: Got endpoints: latency-svc-4r96q [683.453904ms] Mar 25 14:34:00.401: INFO: Created: latency-svc-4p275 Mar 25 14:34:00.413: INFO: Got endpoints: latency-svc-4p275 [654.856469ms] Mar 25 14:34:00.431: INFO: Created: latency-svc-fpd7h Mar 25 14:34:00.507: INFO: Got endpoints: 
latency-svc-fpd7h [721.79668ms] Mar 25 14:34:00.509: INFO: Created: latency-svc-bhcg7 Mar 25 14:34:00.515: INFO: Got endpoints: latency-svc-bhcg7 [693.628438ms] Mar 25 14:34:00.537: INFO: Created: latency-svc-7b6f8 Mar 25 14:34:00.557: INFO: Got endpoints: latency-svc-7b6f8 [705.019135ms] Mar 25 14:34:00.587: INFO: Created: latency-svc-b4kzz Mar 25 14:34:00.600: INFO: Got endpoints: latency-svc-b4kzz [694.415821ms] Mar 25 14:34:00.688: INFO: Created: latency-svc-527zn Mar 25 14:34:00.702: INFO: Got endpoints: latency-svc-527zn [754.06896ms] Mar 25 14:34:00.729: INFO: Created: latency-svc-dxbpz Mar 25 14:34:00.739: INFO: Got endpoints: latency-svc-dxbpz [753.521029ms] Mar 25 14:34:00.825: INFO: Created: latency-svc-2lcxx Mar 25 14:34:00.827: INFO: Got endpoints: latency-svc-2lcxx [758.930426ms] Mar 25 14:34:00.857: INFO: Created: latency-svc-h7nmf Mar 25 14:34:00.871: INFO: Got endpoints: latency-svc-h7nmf [789.801878ms] Mar 25 14:34:00.890: INFO: Created: latency-svc-28724 Mar 25 14:34:00.901: INFO: Got endpoints: latency-svc-28724 [783.826509ms] Mar 25 14:34:00.920: INFO: Created: latency-svc-zdhjk Mar 25 14:34:00.979: INFO: Got endpoints: latency-svc-zdhjk [778.198477ms] Mar 25 14:34:00.982: INFO: Created: latency-svc-482cd Mar 25 14:34:00.992: INFO: Got endpoints: latency-svc-482cd [754.002662ms] Mar 25 14:34:01.013: INFO: Created: latency-svc-j6n6m Mar 25 14:34:01.028: INFO: Got endpoints: latency-svc-j6n6m [760.127616ms] Mar 25 14:34:01.049: INFO: Created: latency-svc-rwrl7 Mar 25 14:34:01.064: INFO: Got endpoints: latency-svc-rwrl7 [731.5369ms] Mar 25 14:34:01.123: INFO: Created: latency-svc-xmtb8 Mar 25 14:34:01.127: INFO: Got endpoints: latency-svc-xmtb8 [754.772377ms] Mar 25 14:34:01.179: INFO: Created: latency-svc-chr8v Mar 25 14:34:01.191: INFO: Got endpoints: latency-svc-chr8v [777.606015ms] Mar 25 14:34:01.217: INFO: Created: latency-svc-qdkld Mar 25 14:34:01.267: INFO: Got endpoints: latency-svc-qdkld [760.06914ms] Mar 25 14:34:01.283: INFO: Created: latency-svc-4jrf2 Mar 25 14:34:01.299: INFO: Got endpoints: latency-svc-4jrf2 [784.056893ms] Mar 25 14:34:01.319: INFO: Created: latency-svc-8rnhr Mar 25 14:34:01.336: INFO: Got endpoints: latency-svc-8rnhr [778.957549ms] Mar 25 14:34:01.358: INFO: Created: latency-svc-qq2jj Mar 25 14:34:01.400: INFO: Got endpoints: latency-svc-qq2jj [799.56007ms] Mar 25 14:34:01.421: INFO: Created: latency-svc-k4vw9 Mar 25 14:34:01.432: INFO: Got endpoints: latency-svc-k4vw9 [729.939989ms] Mar 25 14:34:01.451: INFO: Created: latency-svc-8gqg6 Mar 25 14:34:01.463: INFO: Got endpoints: latency-svc-8gqg6 [724.119095ms] Mar 25 14:34:01.499: INFO: Created: latency-svc-k4qk9 Mar 25 14:34:01.566: INFO: Got endpoints: latency-svc-k4qk9 [738.443774ms] Mar 25 14:34:01.586: INFO: Created: latency-svc-2wpbw Mar 25 14:34:01.634: INFO: Got endpoints: latency-svc-2wpbw [762.847823ms] Mar 25 14:34:01.715: INFO: Created: latency-svc-gvhfl Mar 25 14:34:01.739: INFO: Got endpoints: latency-svc-gvhfl [837.993976ms] Mar 25 14:34:01.757: INFO: Created: latency-svc-xkxlg Mar 25 14:34:01.770: INFO: Got endpoints: latency-svc-xkxlg [790.230941ms] Mar 25 14:34:01.790: INFO: Created: latency-svc-smnpw Mar 25 14:34:01.830: INFO: Got endpoints: latency-svc-smnpw [837.674166ms] Mar 25 14:34:01.838: INFO: Created: latency-svc-ttbqm Mar 25 14:34:01.855: INFO: Got endpoints: latency-svc-ttbqm [826.536553ms] Mar 25 14:34:01.877: INFO: Created: latency-svc-llld5 Mar 25 14:34:01.890: INFO: Got endpoints: latency-svc-llld5 [825.797669ms] Mar 25 14:34:01.913: INFO: Created: 
latency-svc-94ckz Mar 25 14:34:01.927: INFO: Got endpoints: latency-svc-94ckz [800.194001ms] Mar 25 14:34:01.974: INFO: Created: latency-svc-5bqzn Mar 25 14:34:01.981: INFO: Got endpoints: latency-svc-5bqzn [790.486755ms] Mar 25 14:34:02.002: INFO: Created: latency-svc-xwns9 Mar 25 14:34:02.014: INFO: Got endpoints: latency-svc-xwns9 [746.901157ms] Mar 25 14:34:02.055: INFO: Created: latency-svc-dnfd8 Mar 25 14:34:02.068: INFO: Got endpoints: latency-svc-dnfd8 [768.689008ms] Mar 25 14:34:02.117: INFO: Created: latency-svc-gfjtj Mar 25 14:34:02.120: INFO: Got endpoints: latency-svc-gfjtj [783.951679ms] Mar 25 14:34:02.147: INFO: Created: latency-svc-mflzn Mar 25 14:34:02.195: INFO: Got endpoints: latency-svc-mflzn [794.834758ms] Mar 25 14:34:02.255: INFO: Created: latency-svc-d9tz7 Mar 25 14:34:02.258: INFO: Got endpoints: latency-svc-d9tz7 [826.043261ms] Mar 25 14:34:02.282: INFO: Created: latency-svc-9fssn Mar 25 14:34:02.297: INFO: Got endpoints: latency-svc-9fssn [834.694438ms] Mar 25 14:34:02.319: INFO: Created: latency-svc-sjtmz Mar 25 14:34:02.334: INFO: Got endpoints: latency-svc-sjtmz [767.817121ms] Mar 25 14:34:02.351: INFO: Created: latency-svc-cxlcj Mar 25 14:34:02.405: INFO: Got endpoints: latency-svc-cxlcj [770.17146ms] Mar 25 14:34:02.427: INFO: Created: latency-svc-c52xr Mar 25 14:34:02.442: INFO: Got endpoints: latency-svc-c52xr [702.673337ms] Mar 25 14:34:02.475: INFO: Created: latency-svc-nbz9h Mar 25 14:34:02.491: INFO: Got endpoints: latency-svc-nbz9h [721.207182ms] Mar 25 14:34:02.549: INFO: Created: latency-svc-tvbs5 Mar 25 14:34:02.549: INFO: Got endpoints: latency-svc-tvbs5 [719.369767ms] Mar 25 14:34:02.573: INFO: Created: latency-svc-97flg Mar 25 14:34:02.591: INFO: Got endpoints: latency-svc-97flg [735.72204ms] Mar 25 14:34:02.615: INFO: Created: latency-svc-5rq6q Mar 25 14:34:02.623: INFO: Got endpoints: latency-svc-5rq6q [732.887751ms] Mar 25 14:34:02.710: INFO: Created: latency-svc-gpvr4 Mar 25 14:34:02.752: INFO: Got endpoints: latency-svc-gpvr4 [825.449238ms] Mar 25 14:34:02.753: INFO: Created: latency-svc-48qnl Mar 25 14:34:02.774: INFO: Got endpoints: latency-svc-48qnl [792.477926ms] Mar 25 14:34:02.795: INFO: Created: latency-svc-6bg4g Mar 25 14:34:02.878: INFO: Got endpoints: latency-svc-6bg4g [864.297184ms] Mar 25 14:34:02.898: INFO: Created: latency-svc-mz8tj Mar 25 14:34:02.910: INFO: Got endpoints: latency-svc-mz8tj [842.389889ms] Mar 25 14:34:02.944: INFO: Created: latency-svc-zb569 Mar 25 14:34:02.961: INFO: Got endpoints: latency-svc-zb569 [841.027014ms] Mar 25 14:34:03.010: INFO: Created: latency-svc-s7lnz Mar 25 14:34:03.012: INFO: Got endpoints: latency-svc-s7lnz [816.879219ms] Mar 25 14:34:03.056: INFO: Created: latency-svc-n6plv Mar 25 14:34:03.069: INFO: Got endpoints: latency-svc-n6plv [810.924464ms] Mar 25 14:34:03.092: INFO: Created: latency-svc-xjvhq Mar 25 14:34:03.106: INFO: Got endpoints: latency-svc-xjvhq [808.569679ms] Mar 25 14:34:03.159: INFO: Created: latency-svc-s49c9 Mar 25 14:34:03.162: INFO: Got endpoints: latency-svc-s49c9 [828.187219ms] Mar 25 14:34:03.191: INFO: Created: latency-svc-jnbnf Mar 25 14:34:03.202: INFO: Got endpoints: latency-svc-jnbnf [797.592066ms] Mar 25 14:34:03.227: INFO: Created: latency-svc-zksnz Mar 25 14:34:03.238: INFO: Got endpoints: latency-svc-zksnz [796.153331ms] Mar 25 14:34:03.291: INFO: Created: latency-svc-vz484 Mar 25 14:34:03.294: INFO: Got endpoints: latency-svc-vz484 [802.763035ms] Mar 25 14:34:03.321: INFO: Created: latency-svc-qnhhj Mar 25 14:34:03.341: INFO: Got endpoints: 
latency-svc-qnhhj [792.097943ms] Mar 25 14:34:03.362: INFO: Created: latency-svc-xggn6 Mar 25 14:34:03.378: INFO: Got endpoints: latency-svc-xggn6 [787.16557ms] Mar 25 14:34:03.425: INFO: Created: latency-svc-7fhvb Mar 25 14:34:03.438: INFO: Got endpoints: latency-svc-7fhvb [814.521872ms] Mar 25 14:34:03.461: INFO: Created: latency-svc-pntsp Mar 25 14:34:03.474: INFO: Got endpoints: latency-svc-pntsp [721.622126ms] Mar 25 14:34:03.494: INFO: Created: latency-svc-28vdf Mar 25 14:34:03.572: INFO: Got endpoints: latency-svc-28vdf [798.209101ms] Mar 25 14:34:03.574: INFO: Created: latency-svc-rqs9l Mar 25 14:34:03.589: INFO: Got endpoints: latency-svc-rqs9l [710.444692ms] Mar 25 14:34:03.665: INFO: Created: latency-svc-f9vgx Mar 25 14:34:03.716: INFO: Got endpoints: latency-svc-f9vgx [805.281373ms] Mar 25 14:34:03.728: INFO: Created: latency-svc-57dn2 Mar 25 14:34:03.741: INFO: Got endpoints: latency-svc-57dn2 [780.113498ms] Mar 25 14:34:03.764: INFO: Created: latency-svc-s9frg Mar 25 14:34:03.775: INFO: Got endpoints: latency-svc-s9frg [763.416067ms] Mar 25 14:34:03.806: INFO: Created: latency-svc-jbcz2 Mar 25 14:34:03.866: INFO: Got endpoints: latency-svc-jbcz2 [796.039173ms] Mar 25 14:34:03.887: INFO: Created: latency-svc-z4jn2 Mar 25 14:34:03.939: INFO: Got endpoints: latency-svc-z4jn2 [832.847724ms] Mar 25 14:34:04.004: INFO: Created: latency-svc-h5ntb Mar 25 14:34:04.006: INFO: Got endpoints: latency-svc-h5ntb [844.214034ms] Mar 25 14:34:04.034: INFO: Created: latency-svc-kkfml Mar 25 14:34:04.046: INFO: Got endpoints: latency-svc-kkfml [844.040909ms] Mar 25 14:34:04.064: INFO: Created: latency-svc-46kdp Mar 25 14:34:04.077: INFO: Got endpoints: latency-svc-46kdp [838.630131ms] Mar 25 14:34:04.096: INFO: Created: latency-svc-9r4s6 Mar 25 14:34:04.159: INFO: Got endpoints: latency-svc-9r4s6 [865.251109ms] Mar 25 14:34:04.160: INFO: Created: latency-svc-q985d Mar 25 14:34:04.167: INFO: Got endpoints: latency-svc-q985d [825.875092ms] Mar 25 14:34:04.190: INFO: Created: latency-svc-2flws Mar 25 14:34:04.214: INFO: Got endpoints: latency-svc-2flws [836.176997ms] Mar 25 14:34:04.244: INFO: Created: latency-svc-p4z9l Mar 25 14:34:04.258: INFO: Got endpoints: latency-svc-p4z9l [820.188393ms] Mar 25 14:34:04.297: INFO: Created: latency-svc-rgbrr Mar 25 14:34:04.300: INFO: Got endpoints: latency-svc-rgbrr [826.000588ms] Mar 25 14:34:04.331: INFO: Created: latency-svc-wr965 Mar 25 14:34:04.343: INFO: Got endpoints: latency-svc-wr965 [771.128402ms] Mar 25 14:34:04.367: INFO: Created: latency-svc-6zb7j Mar 25 14:34:04.441: INFO: Got endpoints: latency-svc-6zb7j [852.242722ms] Mar 25 14:34:04.460: INFO: Created: latency-svc-5s5nc Mar 25 14:34:04.482: INFO: Got endpoints: latency-svc-5s5nc [765.805096ms] Mar 25 14:34:04.535: INFO: Created: latency-svc-h7cn8 Mar 25 14:34:04.566: INFO: Got endpoints: latency-svc-h7cn8 [824.866633ms] Mar 25 14:34:04.589: INFO: Created: latency-svc-lv2mj Mar 25 14:34:04.602: INFO: Got endpoints: latency-svc-lv2mj [827.03288ms] Mar 25 14:34:04.622: INFO: Created: latency-svc-jv2zk Mar 25 14:34:04.651: INFO: Got endpoints: latency-svc-jv2zk [785.834216ms] Mar 25 14:34:04.704: INFO: Created: latency-svc-fbnlg Mar 25 14:34:04.707: INFO: Got endpoints: latency-svc-fbnlg [767.942017ms] Mar 25 14:34:04.732: INFO: Created: latency-svc-6jpp9 Mar 25 14:34:04.747: INFO: Got endpoints: latency-svc-6jpp9 [741.015054ms] Mar 25 14:34:04.769: INFO: Created: latency-svc-hpn9t Mar 25 14:34:04.789: INFO: Got endpoints: latency-svc-hpn9t [742.992545ms] Mar 25 14:34:04.866: INFO: Created: 
latency-svc-wkgsw Mar 25 14:34:04.892: INFO: Got endpoints: latency-svc-wkgsw [814.411895ms] Mar 25 14:34:04.892: INFO: Created: latency-svc-9t2vf Mar 25 14:34:04.915: INFO: Got endpoints: latency-svc-9t2vf [756.451751ms] Mar 25 14:34:04.943: INFO: Created: latency-svc-xdswl Mar 25 14:34:04.958: INFO: Got endpoints: latency-svc-xdswl [790.969683ms] Mar 25 14:34:05.010: INFO: Created: latency-svc-r2tn7 Mar 25 14:34:05.012: INFO: Got endpoints: latency-svc-r2tn7 [798.343223ms] Mar 25 14:34:05.072: INFO: Created: latency-svc-2m9tx Mar 25 14:34:05.084: INFO: Got endpoints: latency-svc-2m9tx [826.28755ms] Mar 25 14:34:05.084: INFO: Latencies: [55.853202ms 59.634492ms 96.030074ms 201.25822ms 213.460208ms 246.156698ms 282.565257ms 335.428384ms 372.380041ms 420.704317ms 490.210948ms 627.109324ms 639.268991ms 640.197498ms 640.65961ms 641.94089ms 643.237087ms 646.996779ms 649.515458ms 652.057078ms 654.856469ms 657.993157ms 664.498074ms 667.991822ms 672.566886ms 675.565654ms 675.878364ms 676.137648ms 680.38549ms 682.968302ms 683.453904ms 684.37096ms 688.517084ms 688.585449ms 692.081881ms 693.503135ms 693.628438ms 694.415821ms 694.657701ms 696.156136ms 696.278063ms 697.459184ms 700.006958ms 702.673337ms 703.324739ms 704.544651ms 705.019135ms 705.190996ms 709.845509ms 710.444692ms 710.446633ms 711.119711ms 711.531114ms 712.149957ms 717.747721ms 717.938426ms 718.313294ms 718.540843ms 719.369767ms 721.207182ms 721.580731ms 721.622126ms 721.79668ms 722.632217ms 724.119095ms 724.238178ms 727.895318ms 729.939989ms 730.083985ms 730.739545ms 731.5369ms 732.887751ms 733.074568ms 735.72204ms 737.303501ms 738.443774ms 741.015054ms 741.381393ms 742.72866ms 742.992545ms 746.901157ms 748.448674ms 748.587852ms 751.626345ms 753.521029ms 754.002662ms 754.06896ms 754.772377ms 756.451751ms 758.725329ms 758.930426ms 760.06914ms 760.127616ms 762.847823ms 763.416067ms 765.805096ms 767.817121ms 767.942017ms 768.689008ms 770.17146ms 770.922827ms 771.128402ms 772.420325ms 777.606015ms 778.070269ms 778.198477ms 778.401081ms 778.957549ms 779.148044ms 779.229572ms 780.113498ms 782.769194ms 783.787217ms 783.826509ms 783.951679ms 784.056893ms 785.834216ms 787.16557ms 788.731288ms 789.801878ms 790.230941ms 790.486755ms 790.952984ms 790.969683ms 792.097943ms 792.477926ms 793.919886ms 794.834758ms 796.039173ms 796.153331ms 797.592066ms 798.209101ms 798.343223ms 798.846751ms 799.56007ms 800.194001ms 800.521262ms 802.763035ms 805.281373ms 808.569679ms 809.231398ms 809.732242ms 810.924464ms 810.935323ms 813.20797ms 814.411895ms 814.521872ms 814.704915ms 816.879219ms 818.297904ms 820.188393ms 824.866633ms 825.449238ms 825.724384ms 825.797669ms 825.875092ms 826.000588ms 826.043261ms 826.28755ms 826.536553ms 826.929638ms 827.03288ms 828.187219ms 828.483402ms 832.847724ms 834.549405ms 834.694438ms 836.176997ms 837.075901ms 837.674166ms 837.993976ms 838.630131ms 841.027014ms 842.389889ms 844.040909ms 844.214034ms 845.922085ms 852.242722ms 857.452193ms 861.023946ms 864.297184ms 865.251109ms 867.066365ms 868.072716ms 873.872953ms 874.679091ms 880.217679ms 880.521406ms 884.615517ms 889.463155ms 892.479632ms 892.601879ms 897.618656ms 897.946365ms 898.85041ms 901.689065ms 905.144776ms 917.795072ms 919.251486ms 936.811067ms] Mar 25 14:34:05.085: INFO: 50 %ile: 770.922827ms Mar 25 14:34:05.085: INFO: 90 %ile: 864.297184ms Mar 25 14:34:05.085: INFO: 99 %ile: 919.251486ms Mar 25 14:34:05.085: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:34:05.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4122" for this suite. Mar 25 14:34:25.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:34:25.196: INFO: namespace svc-latency-4122 deletion completed in 20.105147687s • [SLOW TEST:33.675 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:34:25.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 25 14:34:25.265: INFO: Waiting up to 5m0s for pod "pod-c93bda70-b600-44f7-b155-ba9caa785ce9" in namespace "emptydir-4475" to be "success or failure" Mar 25 14:34:25.285: INFO: Pod "pod-c93bda70-b600-44f7-b155-ba9caa785ce9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.333321ms Mar 25 14:34:27.290: INFO: Pod "pod-c93bda70-b600-44f7-b155-ba9caa785ce9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025191398s Mar 25 14:34:29.295: INFO: Pod "pod-c93bda70-b600-44f7-b155-ba9caa785ce9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029976211s STEP: Saw pod success Mar 25 14:34:29.295: INFO: Pod "pod-c93bda70-b600-44f7-b155-ba9caa785ce9" satisfied condition "success or failure" Mar 25 14:34:29.298: INFO: Trying to get logs from node iruya-worker2 pod pod-c93bda70-b600-44f7-b155-ba9caa785ce9 container test-container: STEP: delete the pod Mar 25 14:34:29.376: INFO: Waiting for pod pod-c93bda70-b600-44f7-b155-ba9caa785ce9 to disappear Mar 25 14:34:29.382: INFO: Pod pod-c93bda70-b600-44f7-b155-ba9caa785ce9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:34:29.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4475" for this suite. 
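One note on the svc-latency run a few lines back: each of its 200 samples times a single thing, the gap between creating a Service and seeing its Endpoints populated, and the spec passes if the percentiles stay within bounds (here roughly 771ms at the 50th and 919ms at the 99th). One sample could be measured along these lines, assuming a current client-go; the namespace and selector are illustrative, and the suite itself watches rather than polls:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// measureEndpointLatency creates a service selecting already-running pods and
// polls until its endpoints are populated, returning the elapsed time.
func measureEndpointLatency(client kubernetes.Interface, ns, name string, selector map[string]string) (time.Duration, error) {
	ctx := context.TODO()
	start := time.Now()
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ServiceSpec{
			Selector: selector,
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	if _, err := client.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return 0, err
	}
	for {
		if time.Since(start) > 3*time.Minute {
			return 0, fmt.Errorf("timed out waiting for endpoints of %s/%s", ns, name)
		}
		ep, err := client.CoreV1().Endpoints(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, subset := range ep.Subsets {
				if len(subset.Addresses) > 0 {
					return time.Since(start), nil
				}
			}
		}
		time.Sleep(50 * time.Millisecond)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	d, err := measureEndpointLatency(client, "svc-latency-4122", "latency-svc-demo",
		map[string]string{"name": "svc-latency-rc"}) // illustrative selector
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got endpoints after %v\n", d)
}
```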
Mar 25 14:34:35.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:34:35.473: INFO: namespace emptydir-4475 deletion completed in 6.087024537s • [SLOW TEST:10.277 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:34:35.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 25 14:34:40.067: INFO: Successfully updated pod "annotationupdate4a372536-b5cf-4b2a-a0ce-a68b5de6f368" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:34:42.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4913" for this suite. 
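The annotations spec above projects pod metadata into the container through a downwardAPI volume and then updates the pod; the kubelet rewrites the projected file, which is what "Successfully updated pod" refers to. A sketch of such a pod, with illustrative names, image, and annotation key:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo", // illustrative
			Annotations: map[string]string{"build": "one"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Patching the annotation afterwards (for example, kubectl annotate pod annotationupdate-demo build=two --overwrite) makes the kubelet rewrite /etc/podinfo/annotations, which is the change the test waits for.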
Mar 25 14:35:04.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:35:04.192: INFO: namespace downward-api-4913 deletion completed in 22.101726265s • [SLOW TEST:28.719 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:35:04.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 14:35:04.251: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cecb6046-62a4-4f45-a82c-16bfeda8cb2a" in namespace "downward-api-8886" to be "success or failure" Mar 25 14:35:04.255: INFO: Pod "downwardapi-volume-cecb6046-62a4-4f45-a82c-16bfeda8cb2a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.47712ms Mar 25 14:35:06.258: INFO: Pod "downwardapi-volume-cecb6046-62a4-4f45-a82c-16bfeda8cb2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007283959s Mar 25 14:35:08.263: INFO: Pod "downwardapi-volume-cecb6046-62a4-4f45-a82c-16bfeda8cb2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011708016s STEP: Saw pod success Mar 25 14:35:08.263: INFO: Pod "downwardapi-volume-cecb6046-62a4-4f45-a82c-16bfeda8cb2a" satisfied condition "success or failure" Mar 25 14:35:08.266: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cecb6046-62a4-4f45-a82c-16bfeda8cb2a container client-container: STEP: delete the pod Mar 25 14:35:08.301: INFO: Waiting for pod downwardapi-volume-cecb6046-62a4-4f45-a82c-16bfeda8cb2a to disappear Mar 25 14:35:08.305: INFO: Pod downwardapi-volume-cecb6046-62a4-4f45-a82c-16bfeda8cb2a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:35:08.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8886" for this suite. 
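For the CPU-limit variant above, the downward API file uses a resourceFieldRef instead of a fieldRef. A sketch with illustrative values: with divisor "1m" the projected file reads 1250; with the default divisor of 1 the value is rounded up to whole cores, so the same limit would surface as 2.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-limit-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								// Divisor "1m" projects the limit in millicores.
								Divisor: resource.MustParse("1m"),
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}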
Mar 25 14:35:14.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:35:14.412: INFO: namespace downward-api-8886 deletion completed in 6.103037502s • [SLOW TEST:10.219 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:35:14.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 14:35:14.477: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 25 14:35:36.570: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.90:8080/dial?request=hostName&protocol=udp&host=10.244.1.89&port=8081&tries=1'] Namespace:pod-network-test-9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 14:35:36.570: INFO: >>> kubeConfig: /root/.kube/config I0325 14:35:36.611071 6 log.go:172] (0xc0019ee420) (0xc00229f7c0) Create stream I0325 14:35:36.611110 6 log.go:172] (0xc0019ee420) (0xc00229f7c0) Stream added, broadcasting: 1 I0325 14:35:36.614564 6 log.go:172] (0xc0019ee420) Reply frame received for 1 I0325 14:35:36.614603 6 log.go:172] (0xc0019ee420) (0xc001573360) Create stream I0325 14:35:36.614615 6 log.go:172] (0xc0019ee420) (0xc001573360) Stream added, broadcasting: 3 I0325 14:35:36.615766 6 log.go:172] (0xc0019ee420) Reply frame received for 3 I0325 14:35:36.615819 6 log.go:172] (0xc0019ee420) (0xc003360aa0) Create stream I0325 14:35:36.615834 6 log.go:172] (0xc0019ee420) (0xc003360aa0) Stream added, broadcasting: 5 I0325 14:35:36.618286 6 log.go:172] (0xc0019ee420) Reply frame received for 5 I0325 14:35:36.699647 6 log.go:172] (0xc0019ee420) Data frame received for 3 I0325 14:35:36.699680 6 log.go:172] (0xc001573360) (3) Data frame handling I0325 14:35:36.699711 6 log.go:172] (0xc001573360) (3) Data frame sent I0325 14:35:36.700121 6 log.go:172] (0xc0019ee420) Data frame received for 3 I0325 14:35:36.700169 6 log.go:172] (0xc001573360) (3) Data frame handling I0325 14:35:36.700419 6 log.go:172] (0xc0019ee420) Data frame received for 5 I0325 14:35:36.700442 6 log.go:172] (0xc003360aa0) (5) Data frame handling I0325 14:35:36.702520 6 log.go:172] (0xc0019ee420) Data frame received for 1 I0325 14:35:36.702558 6 log.go:172] (0xc00229f7c0) (1) Data frame handling I0325 14:35:36.702580 6 log.go:172] 
(0xc00229f7c0) (1) Data frame sent I0325 14:35:36.702613 6 log.go:172] (0xc0019ee420) (0xc00229f7c0) Stream removed, broadcasting: 1 I0325 14:35:36.702772 6 log.go:172] (0xc0019ee420) (0xc00229f7c0) Stream removed, broadcasting: 1 I0325 14:35:36.702801 6 log.go:172] (0xc0019ee420) (0xc001573360) Stream removed, broadcasting: 3 I0325 14:35:36.702825 6 log.go:172] (0xc0019ee420) (0xc003360aa0) Stream removed, broadcasting: 5 I0325 14:35:36.702903 6 log.go:172] (0xc0019ee420) Go away received Mar 25 14:35:36.702: INFO: Waiting for endpoints: map[] Mar 25 14:35:36.706: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.90:8080/dial?request=hostName&protocol=udp&host=10.244.2.253&port=8081&tries=1'] Namespace:pod-network-test-9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 25 14:35:36.706: INFO: >>> kubeConfig: /root/.kube/config I0325 14:35:36.735883 6 log.go:172] (0xc0010b2f20) (0xc0028c4820) Create stream I0325 14:35:36.735903 6 log.go:172] (0xc0010b2f20) (0xc0028c4820) Stream added, broadcasting: 1 I0325 14:35:36.739100 6 log.go:172] (0xc0010b2f20) Reply frame received for 1 I0325 14:35:36.739146 6 log.go:172] (0xc0010b2f20) (0xc001573a40) Create stream I0325 14:35:36.739164 6 log.go:172] (0xc0010b2f20) (0xc001573a40) Stream added, broadcasting: 3 I0325 14:35:36.740399 6 log.go:172] (0xc0010b2f20) Reply frame received for 3 I0325 14:35:36.740457 6 log.go:172] (0xc0010b2f20) (0xc001573ea0) Create stream I0325 14:35:36.740473 6 log.go:172] (0xc0010b2f20) (0xc001573ea0) Stream added, broadcasting: 5 I0325 14:35:36.741816 6 log.go:172] (0xc0010b2f20) Reply frame received for 5 I0325 14:35:36.809544 6 log.go:172] (0xc0010b2f20) Data frame received for 3 I0325 14:35:36.809573 6 log.go:172] (0xc001573a40) (3) Data frame handling I0325 14:35:36.809587 6 log.go:172] (0xc001573a40) (3) Data frame sent I0325 14:35:36.809998 6 log.go:172] (0xc0010b2f20) Data frame received for 3 I0325 14:35:36.810016 6 log.go:172] (0xc001573a40) (3) Data frame handling I0325 14:35:36.810159 6 log.go:172] (0xc0010b2f20) Data frame received for 5 I0325 14:35:36.810185 6 log.go:172] (0xc001573ea0) (5) Data frame handling I0325 14:35:36.812181 6 log.go:172] (0xc0010b2f20) Data frame received for 1 I0325 14:35:36.812214 6 log.go:172] (0xc0028c4820) (1) Data frame handling I0325 14:35:36.812237 6 log.go:172] (0xc0028c4820) (1) Data frame sent I0325 14:35:36.812257 6 log.go:172] (0xc0010b2f20) (0xc0028c4820) Stream removed, broadcasting: 1 I0325 14:35:36.812282 6 log.go:172] (0xc0010b2f20) Go away received I0325 14:35:36.812417 6 log.go:172] (0xc0010b2f20) (0xc0028c4820) Stream removed, broadcasting: 1 I0325 14:35:36.812440 6 log.go:172] (0xc0010b2f20) (0xc001573a40) Stream removed, broadcasting: 3 I0325 14:35:36.812456 6 log.go:172] (0xc0010b2f20) (0xc001573ea0) Stream removed, broadcasting: 5 Mar 25 14:35:36.812: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:35:36.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9" for this suite. 
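The curl commands in the exec frames above hit the test container's /dial endpoint, which relays a UDP probe to the target pod and reports which hostnames answered. A Go sketch of the same probe; the pod IPs are the ones from this run and are reachable only from inside the cluster network, and the response-shape comment is an assumption about the test helper image:

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Query parameters mirror the curl invocation in the log above.
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "udp")
	q.Set("host", "10.244.1.89") // target pod IP from this run
	q.Set("port", "8081")
	q.Set("tries", "1")
	u := "http://10.244.1.90:8080/dial?" + q.Encode() // host-test-container-pod IP

	resp, err := http.Get(u) // only reachable from inside the cluster
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // expected shape: {"responses":["<target hostname>"]}
}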
Mar 25 14:36:00.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:36:00.923: INFO: namespace pod-network-test-9 deletion completed in 24.105982658s • [SLOW TEST:46.511 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:36:00.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-5gmm STEP: Creating a pod to test atomic-volume-subpath Mar 25 14:36:01.017: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5gmm" in namespace "subpath-731" to be "success or failure" Mar 25 14:36:01.026: INFO: Pod "pod-subpath-test-projected-5gmm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.836558ms Mar 25 14:36:03.030: INFO: Pod "pod-subpath-test-projected-5gmm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012793591s Mar 25 14:36:05.053: INFO: Pod "pod-subpath-test-projected-5gmm": Phase="Running", Reason="", readiness=true. Elapsed: 4.036385756s Mar 25 14:36:07.058: INFO: Pod "pod-subpath-test-projected-5gmm": Phase="Running", Reason="", readiness=true. Elapsed: 6.040696368s Mar 25 14:36:09.061: INFO: Pod "pod-subpath-test-projected-5gmm": Phase="Running", Reason="", readiness=true. Elapsed: 8.044353402s Mar 25 14:36:11.066: INFO: Pod "pod-subpath-test-projected-5gmm": Phase="Running", Reason="", readiness=true. Elapsed: 10.048733941s Mar 25 14:36:13.071: INFO: Pod "pod-subpath-test-projected-5gmm": Phase="Running", Reason="", readiness=true. Elapsed: 12.054316805s Mar 25 14:36:15.075: INFO: Pod "pod-subpath-test-projected-5gmm": Phase="Running", Reason="", readiness=true. Elapsed: 14.057932998s Mar 25 14:36:17.078: INFO: Pod "pod-subpath-test-projected-5gmm": Phase="Running", Reason="", readiness=true. Elapsed: 16.061637012s Mar 25 14:36:19.083: INFO: Pod "pod-subpath-test-projected-5gmm": Phase="Running", Reason="", readiness=true. Elapsed: 18.065849019s Mar 25 14:36:21.087: INFO: Pod "pod-subpath-test-projected-5gmm": Phase="Running", Reason="", readiness=true. Elapsed: 20.069837308s Mar 25 14:36:23.091: INFO: Pod "pod-subpath-test-projected-5gmm": Phase="Running", Reason="", readiness=true. Elapsed: 22.074107025s Mar 25 14:36:25.095: INFO: Pod "pod-subpath-test-projected-5gmm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.078210478s STEP: Saw pod success Mar 25 14:36:25.095: INFO: Pod "pod-subpath-test-projected-5gmm" satisfied condition "success or failure" Mar 25 14:36:25.098: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-5gmm container test-container-subpath-projected-5gmm: STEP: delete the pod Mar 25 14:36:25.144: INFO: Waiting for pod pod-subpath-test-projected-5gmm to disappear Mar 25 14:36:25.167: INFO: Pod pod-subpath-test-projected-5gmm no longer exists STEP: Deleting pod pod-subpath-test-projected-5gmm Mar 25 14:36:25.167: INFO: Deleting pod "pod-subpath-test-projected-5gmm" in namespace "subpath-731" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:36:25.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-731" for this suite. Mar 25 14:36:31.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:36:31.285: INFO: namespace subpath-731 deletion completed in 6.111983005s • [SLOW TEST:30.362 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:36:31.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 25 14:36:31.383: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05692b0e-e1c5-4d5d-a041-eff433e0775d" in namespace "downward-api-3075" to be "success or failure" Mar 25 14:36:31.402: INFO: Pod "downwardapi-volume-05692b0e-e1c5-4d5d-a041-eff433e0775d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.184138ms Mar 25 14:36:33.425: INFO: Pod "downwardapi-volume-05692b0e-e1c5-4d5d-a041-eff433e0775d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041750291s Mar 25 14:36:35.429: INFO: Pod "downwardapi-volume-05692b0e-e1c5-4d5d-a041-eff433e0775d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045512084s STEP: Saw pod success Mar 25 14:36:35.429: INFO: Pod "downwardapi-volume-05692b0e-e1c5-4d5d-a041-eff433e0775d" satisfied condition "success or failure" Mar 25 14:36:35.432: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-05692b0e-e1c5-4d5d-a041-eff433e0775d container client-container: STEP: delete the pod Mar 25 14:36:35.449: INFO: Waiting for pod downwardapi-volume-05692b0e-e1c5-4d5d-a041-eff433e0775d to disappear Mar 25 14:36:35.478: INFO: Pod downwardapi-volume-05692b0e-e1c5-4d5d-a041-eff433e0775d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:36:35.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3075" for this suite. Mar 25 14:36:41.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:36:41.582: INFO: namespace downward-api-3075 deletion completed in 6.099675875s • [SLOW TEST:10.296 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:36:41.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 25 14:36:41.627: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
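The subpath spec above mounts a single entry of an atomically-updated (projected) volume via subPath and checks that the container keeps seeing consistent content while the kubelet swaps the volume's contents underneath. A sketch of such a pod; the projection source, paths, and names are illustrative, not the framework's exact spec:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-projected-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "for i in $(seq 1 20); do cat /probe; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-vol",
					MountPath: "/probe",
					SubPath:   "podname", // mount one file out of the projected volume
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}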
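The repeated Phase="Pending" ... Elapsed: lines throughout these specs come from polling the pod until it reaches a terminal phase. A compact sketch of that wait loop; the signatures are today's client-go rather than the 1.15-era framework helpers, and the namespace and pod name are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "default", "some-pod" // illustrative

	// Poll every 2s, for up to 5m, until the pod succeeds or fails.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("Saw pod success")
}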
Mar 25 14:36:42.087: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 25 14:36:44.250: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720743802, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720743802, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720743802, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720743802, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 14:36:46.889: INFO: Waited 626.402872ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:36:47.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3381" for this suite. Mar 25 14:36:53.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:36:53.792: INFO: namespace aggregator-3381 deletion completed in 6.468079719s • [SLOW TEST:12.210 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:36:53.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 25 14:36:53.841: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 14:36:53.860: INFO: Waiting for terminating namespaces to be deleted... 
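Registering the sample API server, as done above, culminates in an APIService object that tells the aggregation layer to route requests for the sample group to the in-cluster Service fronting the deployment whose status is dumped in the log. A sketch of that object; the group, version, and service names follow the upstream wardle sample but are assumptions here, and a real registration should set CABundle instead of skipping TLS verification:

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	svc := apiregv1.APIService{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apiregistration.k8s.io/v1", Kind: "APIService"},
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"}, // illustrative
		Spec: apiregv1.APIServiceSpec{
			Group:                "wardle.example.com",
			Version:              "v1alpha1",
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
			// Route the group/version to this Service, which fronts the sample deployment.
			Service:               &apiregv1.ServiceReference{Namespace: "aggregator-demo", Name: "sample-api"},
			InsecureSkipTLSVerify: true, // sketch only; set CABundle in real use
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}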
Mar 25 14:36:53.862: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 25 14:36:53.868: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 25 14:36:53.868: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 14:36:53.868: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 25 14:36:53.868: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 14:36:53.868: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 25 14:36:53.874: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Mar 25 14:36:53.874: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 14:36:53.874: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Mar 25 14:36:53.874: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 14:36:53.874: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Mar 25 14:36:53.874: INFO: Container coredns ready: true, restart count 0 Mar 25 14:36:53.874: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Mar 25 14:36:53.874: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ff92cf8843cb9b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:36:54.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8707" for this suite. 
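The scheduling check above only needs a pod whose nodeSelector matches no node; the asserted outcome is the FailedScheduling event quoted in the log. A sketch of such a pod, where the label key and value are deliberately nonexistent and illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod-demo"}, // illustrative
		Spec: corev1.PodSpec{
			// No node carries this label, so scheduling must fail with an event like
			// "0/3 nodes are available: 3 node(s) didn't match node selector."
			NodeSelector: map[string]string{"e2e.example/nonexistent": "true"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}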
Mar 25 14:37:00.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:37:01.044: INFO: namespace sched-pred-8707 deletion completed in 6.136986249s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.252 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:37:01.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-f9040e6d-29af-41a3-8e89-e6f005a6b9cc STEP: Creating a pod to test consume secrets Mar 25 14:37:01.180: INFO: Waiting up to 5m0s for pod "pod-secrets-3563279a-0d6e-440e-af1b-f7693fd52c80" in namespace "secrets-5785" to be "success or failure" Mar 25 14:37:01.184: INFO: Pod "pod-secrets-3563279a-0d6e-440e-af1b-f7693fd52c80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397007ms Mar 25 14:37:03.189: INFO: Pod "pod-secrets-3563279a-0d6e-440e-af1b-f7693fd52c80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009049083s Mar 25 14:37:05.193: INFO: Pod "pod-secrets-3563279a-0d6e-440e-af1b-f7693fd52c80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013041035s STEP: Saw pod success Mar 25 14:37:05.193: INFO: Pod "pod-secrets-3563279a-0d6e-440e-af1b-f7693fd52c80" satisfied condition "success or failure" Mar 25 14:37:05.197: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-3563279a-0d6e-440e-af1b-f7693fd52c80 container secret-volume-test: STEP: delete the pod Mar 25 14:37:05.229: INFO: Waiting for pod pod-secrets-3563279a-0d6e-440e-af1b-f7693fd52c80 to disappear Mar 25 14:37:05.232: INFO: Pod pod-secrets-3563279a-0d6e-440e-af1b-f7693fd52c80 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:37:05.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5785" for this suite. Mar 25 14:37:11.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:37:11.342: INFO: namespace secrets-5785 deletion completed in 6.106333222s STEP: Destroying namespace "secret-namespace-7661" for this suite. 
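The secrets spec above verifies that a volume mount resolves the secret in the pod's own namespace even when another namespace holds a secret of the same name, which is why the run tears down both secrets-5785 and secret-namespace-7661. A sketch of that setup, with illustrative namespace and secret names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Same secret name in two namespaces; only the pod's namespace matters.
	for _, ns := range []string{"secrets-demo-a", "secrets-demo-b"} {
		s := corev1.Secret{
			TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Secret"},
			ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: ns},
			Data:       map[string][]byte{"data-1": []byte("value from " + ns)},
		}
		out, _ := json.MarshalIndent(s, "", "  ")
		fmt.Println(string(out))
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo", Namespace: "secrets-demo-a"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "secret-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"}},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				// Must print the value from secrets-demo-a, not secrets-demo-b.
				Command:      []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}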
Mar 25 14:37:17.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:37:17.450: INFO: namespace secret-namespace-7661 deletion completed in 6.107743184s • [SLOW TEST:16.406 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 25 14:37:17.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Mar 25 14:37:17.550: INFO: Waiting up to 5m0s for pod "client-containers-25d3f582-21a1-424d-abfb-2e06ab7dfa9a" in namespace "containers-9945" to be "success or failure" Mar 25 14:37:17.566: INFO: Pod "client-containers-25d3f582-21a1-424d-abfb-2e06ab7dfa9a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.887833ms Mar 25 14:37:19.570: INFO: Pod "client-containers-25d3f582-21a1-424d-abfb-2e06ab7dfa9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019218612s Mar 25 14:37:21.574: INFO: Pod "client-containers-25d3f582-21a1-424d-abfb-2e06ab7dfa9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023663283s STEP: Saw pod success Mar 25 14:37:21.574: INFO: Pod "client-containers-25d3f582-21a1-424d-abfb-2e06ab7dfa9a" satisfied condition "success or failure" Mar 25 14:37:21.577: INFO: Trying to get logs from node iruya-worker2 pod client-containers-25d3f582-21a1-424d-abfb-2e06ab7dfa9a container test-container: STEP: delete the pod Mar 25 14:37:21.598: INFO: Waiting for pod client-containers-25d3f582-21a1-424d-abfb-2e06ab7dfa9a to disappear Mar 25 14:37:21.602: INFO: Pod client-containers-25d3f582-21a1-424d-abfb-2e06ab7dfa9a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 25 14:37:21.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9945" for this suite. 
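In API terms, the docker-cmd override above comes down to setting only args on the container and leaving command untouched: docker's ENTRYPOINT maps to command and its CMD to args. A sketch, where the image and arguments are illustrative (the e2e test uses its own helper image):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Args replaces the image's CMD; Command (ENTRYPOINT) is left to the image.
				Args: []string{"echo", "overridden", "arguments"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}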
Mar 25 14:37:27.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 25 14:37:27.722: INFO: namespace containers-9945 deletion completed in 6.116639399s • [SLOW TEST:10.271 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS Mar 25 14:37:27.722: INFO: Running AfterSuite actions on all nodes Mar 25 14:37:27.722: INFO: Running AfterSuite actions on node 1 Mar 25 14:37:27.722: INFO: Skipping dumping logs from cluster Ran 215 of 4412 Specs in 6103.657 seconds SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped PASS